CN109483887B - Online detection method for contour accuracy of forming layer in selective laser melting process - Google Patents


Publication number
CN109483887B
CN109483887B · CN201811353936.7A · CN201811353936A
Authority
CN
China
Prior art keywords
image
contour
camera
processed
layer
Prior art date
Legal status
Active
Application number
CN201811353936.7A
Other languages
Chinese (zh)
Other versions
CN109483887A (en)
Inventor
李中伟
钟凯
何丕尧
刘行健
史玉升
魏青松
文世峰
王从军
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201811353936.7A
Publication of CN109483887A
Application granted
Publication of CN109483887B
Status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B22CASTING; POWDER METALLURGY
    • B22FWORKING METALLIC POWDER; MANUFACTURE OF ARTICLES FROM METALLIC POWDER; MAKING METALLIC POWDER; APPARATUS OR DEVICES SPECIALLY ADAPTED FOR METALLIC POWDER
    • B22F10/00Additive manufacturing of workpieces or articles from metallic powder
    • B22F10/20Direct sintering or melting
    • B22F10/28Powder bed fusion, e.g. selective laser melting [SLM] or electron beam melting [EBM]
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30Auxiliary operations or equipment
    • B29C64/386Data acquisition or data processing for additive manufacturing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B22CASTING; POWDER METALLURGY
    • B22FWORKING METALLIC POWDER; MANUFACTURE OF ARTICLES FROM METALLIC POWDER; MAKING METALLIC POWDER; APPARATUS OR DEVICES SPECIALLY ADAPTED FOR METALLIC POWDER
    • B22F10/00Additive manufacturing of workpieces or articles from metallic powder
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B22CASTING; POWDER METALLURGY
    • B22FWORKING METALLIC POWDER; MANUFACTURE OF ARTICLES FROM METALLIC POWDER; MAKING METALLIC POWDER; APPARATUS OR DEVICES SPECIALLY ADAPTED FOR METALLIC POWDER
    • B22F10/00Additive manufacturing of workpieces or articles from metallic powder
    • B22F10/30Process control
    • B22F10/38Process control to achieve specific product aspects, e.g. surface smoothness, density, porosity or hollow structures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B22CASTING; POWDER METALLURGY
    • B22FWORKING METALLIC POWDER; MANUFACTURE OF ARTICLES FROM METALLIC POWDER; MAKING METALLIC POWDER; APPARATUS OR DEVICES SPECIALLY ADAPTED FOR METALLIC POWDER
    • B22F10/00Additive manufacturing of workpieces or articles from metallic powder
    • B22F10/80Data acquisition or data processing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B22CASTING; POWDER METALLURGY
    • B22FWORKING METALLIC POWDER; MANUFACTURE OF ARTICLES FROM METALLIC POWDER; MAKING METALLIC POWDER; APPARATUS OR DEVICES SPECIALLY ADAPTED FOR METALLIC POWDER
    • B22F12/00Apparatus or devices specially adapted for additive manufacturing; Auxiliary means for additive manufacturing; Combinations of additive manufacturing apparatus or devices with other processing apparatus or devices
    • B22F12/90Means for process control, e.g. cameras or sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/10Processes of additive manufacturing
    • B29C64/141Processes of additive manufacturing using only solid materials
    • B29C64/153Processes of additive manufacturing using only solid materials using layers of powder being selectively joined, e.g. by selective laser sintering or melting
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B33ADDITIVE MANUFACTURING TECHNOLOGY
    • B33YADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00Data acquisition or data processing for additive manufacturing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P10/00Technologies related to metal processing
    • Y02P10/25Process efficiency

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Plasma & Fusion (AREA)
  • Analytical Chemistry (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of additive manufacturing online measurement and discloses an online detection method for contour accuracy of a forming layer in the selective laser melting process. The online detection method comprises the following steps: S1, slicing the model of the part to be processed and generating auxiliary images; S2, spreading powder, then selectively melting and forming the powder by laser; S3, collecting the image of the formed area and extracting the contour of the segmented image; S4, carrying out three-dimensional reconstruction on the image contour to obtain the actual image contour; S5, comparing the actual image contour Cr with the contour of the corresponding slice layer and analyzing the precision; if the requirement is met, proceeding to step S6, otherwise ending the processing; S6, detecting whether the part to be processed is finished. The method can effectively and accurately delimit the range of image contour extraction, thereby greatly reducing the number of iterations in the calculation, bringing the extraction precision of the image contour to the sub-pixel level, and realizing accurate and rapid contour detection.

Description

Online detection method for contour accuracy of forming layer in selective laser melting process
Technical Field
The invention belongs to the technical field of additive manufacturing, and particularly relates to an online detection method for profile accuracy of a forming layer in a selective laser melting process.
Background
Due to the development of additive manufacturing processes and materials and a deeper understanding of its basic design concepts, additive manufacturing technology has matured rapidly in recent years. Powder bed fusion (PBF) is one of the seven additive manufacturing technologies defined by the American Society for Testing and Materials (ASTM) and mainly includes selective laser melting (SLM), direct metal laser sintering (DMLS), selective laser sintering (SLS), and electron beam melting (EBM). The technology is now widely applied in fields such as aerospace and mold manufacturing. However, the lack of quality detection for additively manufactured parts hinders the popularization of the technology, especially for high-precision product parts. The advent of online measurement technology is gradually reducing this impediment and further advancing additive manufacturing technology.
At present, online measurement technology mainly targets the temperature and shape of the molten pool, stress and strain, and the geometric parameters of the forming area. Among the geometric parameters, measuring the contour of each layer's forming area is particularly important: whether the contour is machined accurately directly affects the quality of the finished part. Because of the particularity of additive manufacturing, each layer's forming area has a predefined CAD slice shape during processing. After a layer is processed, its contour can be extracted and compared with the CAD slice shape to rapidly judge the processing quality of the layer and to analyze the influence of process parameters on layer quality. Contour extraction also helps in studying the relationships among process parameters, forming characteristics, and part quality. However, the machining process of additive manufacturing is quite complex, subject to powder splashing, temperature, vibration, and other influences; the acquired image of the forming area contains much noise and many false contours, and the low contrast between the forming area and the background makes them difficult to distinguish. Research on contour extraction methods for the forming area is still scarce. Cooke et al. used a high-resolution CCD camera to observe the shape of the forming region, extracted the contour from the zero crossings of the second derivative of a specified pixel line's gray-scale curve, and improved the powder bed fusion additive manufacturing process based on analysis of the process parameters and the forming-region contour. Witt et al. developed a high-resolution CCD camera with tilt-and-shift lenses to capture working images of a commercial powder bed fusion machine and used image analysis to detect forming defects on the workpiece.
The various online measurement techniques described above play a great role in powder bed fusion additive manufacturing research. However, these methods extract the contour directly from the acquired image, are easily disturbed by the working environment and noise, and cannot realize accurate and fast contour detection. Consequently, when the extracted contour is compared with the CAD slice image and region-quality analysis is performed at a later stage, the errors strongly affect the resulting parameters and the processing parameters. In summary, in the field of powder bed fusion additive manufacturing, there is an urgent need for an online measuring method that can detect part defects and realize accurate and rapid contour detection, so as to monitor the powder bed fusion process in real time and perform a preliminary inspection of part quality.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the present invention provides an online detection method for contour accuracy of a forming layer in the selective laser melting process. Based on the binocular stereo vision principle, the acquired image is segmented to a certain extent to establish the region of interest required for contour extraction; this region serves as the initial contour for a second, accurate extraction of the forming-region contour; and three-dimensional reconstruction is performed on the twice-extracted contours to serve as the object of comparison with the model slices. The range of image contour extraction is thereby effectively and accurately delimited, the environmental and noise interference of direct extraction is avoided, the number of iterations in the calculation is greatly reduced, the extraction accuracy of the image contour reaches the sub-pixel level, and accurate and rapid contour detection is realized.
In order to achieve the above object, the present invention provides an online detection method for profile accuracy of a forming layer in a selective laser melting process, comprising the steps of:
S1, collecting a substrate image by using a binocular camera, calculating the relative position relation between the camera coordinate system and the forming coordinate system, slicing the model of the part to be processed to obtain a plurality of slice layers, and generating an auxiliary image of each slice layer;
S2, laser scanning on the substrate to form a processing layer of the part to be processed;
S3, collecting the image of the processing layer on the substrate by using the binocular camera, and performing a first extraction combined with the auxiliary image of the slice layer corresponding to the processing layer to obtain the initial contour region of the part to be processed;
S4, carrying out a second contour extraction on the initial contour region and performing three-dimensional reconstruction on the second-extracted image contour to obtain the actual image contour Cr under the forming coordinate system;
S5, projecting the actual image contour Cr onto a plane parallel to the substrate to obtain a two-dimensional projection contour, comparing the two-dimensional projection contour with the contour of the corresponding slice layer of the part model to be processed, and analyzing the contour precision of the processing layer; if the requirement is met, entering step S6, and if not, ending the processing;
S6, detecting whether the part to be processed is finished; if so, ending the processing; if not, returning to step S2 until the machining of the entire part is finished.
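The layer loop of steps S1-S6 can be sketched as a simple control flow. This is a hedged illustration with hypothetical stub functions, not an interface defined by the patent; only the loop-and-early-exit logic is exercised:

```python
# Hedged control-flow sketch of steps S1-S6: the layer loop runs S2-S5 for
# every slice layer and stops early when a layer fails the accuracy check.
# `layer_ok` is a hypothetical placeholder for the S5 contour comparison;
# forming and image acquisition (S2-S4) are stubbed out as comments.
def run_build(n_layers, layer_ok):
    """layer_ok(i) -> bool stands in for the S5 accuracy comparison."""
    finished = []
    for i in range(n_layers):
        # S2: spread powder and laser-melt layer i (stubbed)
        # S3/S4: acquire images, extract and reconstruct the contour (stubbed)
        if not layer_ok(i):          # S5: contour accuracy check
            return finished, False   # requirement not met -> end processing
        finished.append(i)
    return finished, True            # S6: all layers processed

done, success = run_build(5, lambda i: i != 3)   # layer 3 fails the check
```

With the failing third layer, processing stops after layers 0-2, mirroring the "end the processing" branch of step S5.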
Further, the generation of the auxiliary image in step S1 includes the following steps:
S21, performing an image dilation operation on the image of the slice layer to obtain the original image M0;
S22, according to the original image M0, calculating the auxiliary image M1 of the first camera of the binocular cameras and the auxiliary image M2 of the second camera, wherein
the auxiliary images M1 and M2 are calculated from the projection relations:

s1·m̃1 = A1·[R1 | T1]·X̃,  s2·m̃2 = A2·[R2 | T2]·X̃

where X̃ is the homogeneous coordinate of a point of the original image M0 in the forming coordinate system, m̃1 and m̃2 are the homogeneous pixel coordinates of its projections in M1 and M2, s1 and s2 are scale factors, A1 and A2 are the internal parameters of the first and second cameras respectively, and R1, T1 and R2, T2 are the external parameters of the first and second cameras respectively.
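As an illustration of how a point of the dilated slice image could be projected into one camera view to build an auxiliary image, the following numpy sketch applies a standard pinhole model with calibrated parameters A, R, T. The numeric values are toy assumptions, not calibration data from the patent:

```python
import numpy as np

# Pinhole projection of slice-layer points into one camera view.
# A (3x3 intrinsics), R (3x3 rotation), T (3,) translation are assumed known
# from calibration; Xw is an (N, 3) array of slice-layer points expressed in
# the forming coordinate system.
def project_points(Xw, A, R, T):
    Xc = Xw @ R.T + T                  # forming frame -> camera frame
    uvw = Xc @ A.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> pixel coords

# toy example: identity pose, focal length 100, principal point (50, 50)
A = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 200.0])
Xw = np.array([[0.0, 0.0, 0.0],
               [10.0, 0.0, 0.0]])
px = project_points(Xw, A, R, T)
```

A point on the optical axis lands at the principal point, and a 10 mm offset at 200 mm depth shifts the projection by 5 pixels with this focal length.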
Further, the extracting of the initial contour region in step S3 includes the following steps:
S31, extracting the contours of the auxiliary image M1 and the auxiliary image M2 respectively;
S32, projecting the contour of the auxiliary image M1 onto the image P1 of the processing layer acquired by the first camera, and projecting the contour of the auxiliary image M2 onto the image P2 of the processing layer acquired by the second camera;
S33, removing the image content of P1 outside the area enclosed by the projected contour to obtain the initial contour region Ω_M1, and removing the image content of P2 outside the area enclosed by the projected contour to obtain the initial contour region Ω_M2.
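Removing the image content outside the projected contour area amounts to masking. A minimal numpy sketch, assuming the projected contour region has already been rasterized into a boolean mask (the polygon fill step itself is omitted here):

```python
import numpy as np

# Masking sketch: keep only pixels inside the region enclosed by the
# projected auxiliary contour, zeroing everything outside it. `mask` stands
# in for the rasterized projected region (Omega_M1); in practice it would
# come from filling the projected contour polygon.
img = np.arange(36, dtype=float).reshape(6, 6)   # stand-in camera image P1
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 2:5] = True                            # projected contour region
roi = np.where(mask, img, 0.0)                   # initial contour region
```

Only the pixels inside the mask survive, which is exactly what restricts the later contour evolution to the region of interest.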
Further, the second contour extraction in step S4 includes the following steps:
S41, extracting the gray values I1 of all pixels in the initial contour region Ω_M1, and extracting the gray values I2 of all pixels in the initial contour region Ω_M2;
S42, substituting the gray values I1 into the energy functional of the Mumford-Shah model and performing iterative operation to obtain the second-extracted image contour C1;
S43, substituting the gray values I2 into the energy functional of the Mumford-Shah model and performing iterative operation to obtain the second-extracted image contour C2.
Further, the energy functional of the Mumford-Shah model is:

E(u, C) = ∫_Ω (u − I)² dxdy + μ ∫_{Ω\C} |∇u|² dxdy + ν|C|

where I represents the gray value, Ω is the initial contour region, C is the second-extracted image contour, u is a piecewise smooth function, ∫_Ω (u − I)² dxdy is a fidelity term, μ ∫_{Ω\C} |∇u|² dxdy is the smoothing term, and ν|C| is a length regularization term.
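For intuition, the piecewise-constant two-phase special case of this functional (the Chan-Vese model, with the smoothing and length terms dropped for brevity) can be iterated with plain numpy. This is a simplified sketch, not the patent's full level-set implementation:

```python
import numpy as np

# Simplified piecewise-constant two-phase Mumford-Shah (Chan-Vese) iteration:
# alternately update the two region means c1, c2 and reassign each pixel to
# whichever mean it is closer to. The length term nu*|C| and the smoothness
# term are omitted, so this is the degenerate special case only.
def chan_vese_pc(I, init_mask, n_iter=20):
    inside = init_mask.copy()
    for _ in range(n_iter):
        c1 = I[inside].mean() if inside.any() else 0.0
        c2 = I[~inside].mean() if (~inside).any() else 0.0
        inside = (I - c1) ** 2 < (I - c2) ** 2   # pointwise energy comparison
    return inside

# toy image: bright "forming region" on a dark background
I = np.zeros((8, 8))
I[2:6, 2:6] = 1.0
init = np.zeros((8, 8), dtype=bool)
init[3:5, 3:5] = True                            # coarse initial region
seg = chan_vese_pc(I, init)
```

Starting from a coarse initial region inside the bright patch, the iteration converges to the full bright region, mirroring how the initial contour region is evolved toward the forming region.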
Further, the three-dimensional reconstruction in step S4 includes the following steps:
S51, selecting a contour point p1 on the second-extracted image contour C1, and obtaining the corresponding point p2 of p1 on the second-extracted image contour C2;
S52, obtaining the three-dimensional contour data PW by least-squares reconstruction from:

s1·p̃1 = A1·[R1 | T1]·P̃W,  s2·p̃2 = A2·[R2 | T2]·P̃W

where p̃1 and p̃2 are the homogeneous coordinates of p1 and p2 respectively, P̃W is the homogeneous coordinate of PW, and s1 and s2 are scale factors;
S53 repeats steps S51 and S52 until all three-dimensional contour data are acquired;
S54, combining all the three-dimensional contour data to obtain the actual contour Cr under the forming coordinate system.
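The least-squares reconstruction of step S52 can be illustrated with the standard linear (DLT-style) triangulation that the homogeneous projection equations admit; the camera matrices and points below are toy values, not the patent's calibration results:

```python
import numpy as np

# Triangulate one contour-point pair (p1, p2) from two calibrated cameras by
# stacking the projection equations s_i * p_i = A_i [R_i | T_i] Pw and
# solving for Pw in the least-squares sense via SVD.
def triangulate(p1, p2, P1, P2):
    # P1, P2 are 3x4 projection matrices A_i [R_i | T_i]
    rows = []
    for (u, v), P in ((p1, P1), (p2, P2)):
        rows.append(u * P[2] - P[0])   # eliminates the scale factor s_i
        rows.append(v * P[2] - P[1])
    M = np.stack(rows)
    _, _, Vt = np.linalg.svd(M)
    Xh = Vt[-1]                        # null-space (least-squares) solution
    return Xh[:3] / Xh[3]              # back from homogeneous coordinates

# two axis-aligned toy cameras separated along x, unit intrinsics
P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [5.0]])])
Xw = np.array([0.5, 0.5, 1.0])                       # ground-truth point
p1 = (Xw + np.array([0.0, 0.0, 5.0]))[:2] / (Xw[2] + 5.0)
p2 = (Xw + np.array([-1.0, 0.0, 5.0]))[:2] / (Xw[2] + 5.0)
Xr = triangulate(p1, p2, P1, P2)
```

With exact correspondences the solver recovers the ground-truth point; with noisy contour points the same SVD gives the least-squares estimate.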
Generally, compared with the prior art, the above technical solution conceived by the present invention mainly has the following technical advantages:
1. The online detection method provided by the invention segments the acquired image to a certain extent using the binocular stereo vision principle to establish the region of interest required for contour extraction, uses it as the initial contour for the first extraction of the forming-region contour, performs a second accurate extraction on the extracted forming-region contour, and carries out three-dimensional reconstruction based on the twice-extracted contour as the object of comparison with the model slices. The range of image contour extraction is thereby effectively and accurately delimited, the environmental and noise interference of direct extraction is avoided, the number of iterations in the calculation is greatly reduced, the extraction precision of the image contour reaches the sub-pixel level, and accurate and rapid contour detection is realized.
2. The online detection method collects images of the formed area using the binocular stereo vision principle and performs image segmentation combined with the auxiliary image to obtain the region of interest for later contour extraction, thereby achieving regional image contour extraction. Iterative calculation on the initial region with the energy functional of the Mumford-Shah model then makes the initial region gradually approach the forming region; the contour of the forming region can be obtained after about 20 iterations. This avoids the environmental and noise interference of direct extraction and greatly improves the precision and speed of image contour extraction.
3. The online detection method can realize rapid and accurate extraction of the contour of each layer of forming area during processing, and the extraction precision can reach the sub-pixel level.
4. By measuring profile tolerance, the online detection method can effectively inspect parts with special internal structures; when part defects are detected, they can be fed back to the selective laser melting equipment in time, and continued generation of defects can be prevented by adjusting the machine parameters through feedback.
Drawings
FIG. 1 is a flowchart of a method for online detection of profile accuracy of a forming layer during selective laser melting according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the substrate calibration pattern used for forming-coordinate-system registration in the online detection method for profile accuracy of a forming layer during selective laser melting according to an embodiment of the present invention;
FIGS. 3(a) - (d) are diagrams illustrating an example of profile extraction of the method for online detection of profile accuracy of a forming layer during selective laser melting according to an embodiment of the present invention.
Fig. 4 is a profile accuracy detection picture provided by the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a specific flowchart of an embodiment of the present invention, which specifically includes the following steps:
Step 1: camera calibration and forming coordinate system registration. A pattern is machined on the substrate of the selective laser melting device as shown in FIG. 2; this is a typical calibration pattern. The substrate image is then captured with the two cameras. Calibration software is used to extract the pixel coordinates of the center points in the substrate image; the center points are then sorted, and three-dimensional reconstruction is performed on the sorted center-point array. Finally, the internal parameters A1 and external parameters R1, T1 of the first camera, the internal parameters A2 and external parameters R2, T2 of the second camera, and the relative position relationship between the camera coordinate system and the forming coordinate system are solved by calculation;
Step 2: slicing the model of the part to be processed to obtain a plurality of slice layers and generating an auxiliary image for each slice layer. That is, after camera calibration and coordinate-system registration are completed, the auxiliary images are generated from the slice images of the model of the part to be machined. The specific operation is as follows:
First, according to the characteristics of additive manufacturing, the part model is decomposed layer by layer into slice images. For each slice image, an image dilation operation generates the original image M0. The auxiliary images M1 and M2 of the first and second cameras are then calculated from the original image M0 via the projection relations:

s1·m̃1 = A1·[R1 | T1]·X̃,  s2·m̃2 = A2·[R2 | T2]·X̃

where X̃ is the homogeneous coordinate of a point of M0 in the forming coordinate system and m̃1, m̃2 are the homogeneous pixel coordinates of its projections. Each slice image generates corresponding auxiliary images M1 and M2; these auxiliary images provide a good initialization when the contour of the layer's forming area is extracted;
Step 3: forming a processing layer of the part to be processed by laser scanning on the substrate under the forming coordinate system according to the slice-layer image. That is, the powder scraping device delivers powder onto the forming substrate, and the high-energy laser begins to selectively melt the powder according to the slice image provided by the computer.
Step 4: acquiring the images of the processing layer on the substrate with the binocular camera, performing a first extraction combined with the auxiliary image of the corresponding slice layer to obtain the initial contour region of the part to be processed, performing a second contour extraction on the initial contour region, and performing three-dimensional reconstruction on the second-extracted image contours to obtain the actual image contour Cr under the forming coordinate system. The specific operation is as follows:
The two cameras collect images of the forming area on the substrate; using the auxiliary images, the scheme segments the images collected by the cameras and extracts the contour of the segmented images with an improved level set method. FIG. 3(a) shows the auxiliary image M1 generated from the original image; FIG. 3(b) shows the image P1 captured by the first camera; in FIG. 3(c), the initial region Ω_M1 is the thin contour line and the area it encloses, which is the result of segmenting the image P1 with the auxiliary image M1; FIG. 3(d) shows the contour of the extracted forming region, the outer contour being the contour C1 of the forming region extracted after about 20 iterations of the level set method. The specific operation is as follows:
First, the auxiliary image M1 is used to segment the image P1. The contours of the auxiliary images M1 and M2 are extracted respectively; the contour of M1 is projected onto the image P1 of the processing layer acquired by the first camera, and the contour of M2 is projected onto the image P2 of the processing layer acquired by the second camera. The image content of P1 outside the area enclosed by the projected contour is then removed to obtain the initial contour region Ω_M1, and the image content of P2 outside the area enclosed by the projected contour is removed to obtain the initial contour region Ω_M2. Through this image segmentation, the scheme reduces the extraction range of the forming-region contour to Ω_M1 and Ω_M2, which reduces the interference of false contours and noise and greatly improves the speed and precision of contour extraction.
Then Ω_M1 and Ω_M2 are set as the initial regions, and iterative computation is performed on them using an active contour model, namely the Mumford-Shah model (the piecewise-constant level-set M-S model), so that the initial regions gradually approach the forming region; the second-extracted image contour is obtained after about 20 iterations. Specifically, the gray values I1 of all pixels in the initial contour region Ω_M1 and the gray values I2 of all pixels in the initial contour region Ω_M2 are first extracted; the gray values I1 are then substituted into the energy functional of the Mumford-Shah model for iterative operation to obtain the second-extracted image contour C1, and the gray values I2 are substituted into the energy functional of the Mumford-Shah model for iterative operation to obtain the second-extracted image contour C2.
The energy functional of the Mumford-Shah model is expressed as:

E(u, C) = ∫_Ω (u − I)² dxdy + μ ∫_{Ω\C} |∇u|² dxdy + ν|C|

where I represents the gray value of the image captured by the binocular camera, Ω is the initial contour region (here Ω_M1 or Ω_M2), C is the evolving contour, u is a piecewise smooth function, and μ and ν are positive constants. The first term ∫_Ω (u − I)² dxdy is a fidelity term that forces u to approach the image I as closely as possible; the second term μ ∫_{Ω\C} |∇u|² dxdy is a smoothing term that ensures u evolves as smoothly as possible toward the second-extracted image contour of the forming region; the third term ν|C| is a length regularization term that keeps the evolving initial contour as close as possible to the second-extracted image contour.
Step 5: performing three-dimensional reconstruction on the second-extracted image contours to obtain the actual image contour Cr under the forming coordinate system. That is, the contour C1 of the forming region extracted from the first camera and the contour C2 of the forming region extracted from the second camera are three-dimensionally reconstructed to obtain the actual contour Cr under the forming coordinate system. The specific operation is as follows:
A contour point p1 is selected on the second-extracted image contour C1, and its corresponding point p2 on the second-extracted image contour C2 is obtained; the three-dimensional contour data PW are obtained by least-squares reconstruction; all three-dimensional contour data are combined to obtain the actual contour Cr under the forming coordinate system. The least-squares reconstruction is based on:

s1·p̃1 = A1·[R1 | T1]·P̃W,  s2·p̃2 = A2·[R2 | T2]·P̃W

where p̃1 and p̃2 are the homogeneous coordinates of p1 and p2 respectively, P̃W is the homogeneous coordinate of PW, s1 and s2 are scale factors, and A1, A2, R1, T1, R2, T2 have been determined previously by calibration.
Step 6: projecting the actual image contour Cr onto a plane parallel to the substrate to obtain a two-dimensional projection contour, comparing the two-dimensional projection contour with the contour of the corresponding slice layer of the part model to be processed, and analyzing the contour precision of the processing layer; if the requirement is met, proceeding to step 7, and if not, ending the processing;
Step 7: detecting whether the part to be processed is finished; if so, ending the processing; if not, returning to step 3 until the machining of the whole part is finished.
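The projection-and-compare check of steps 5-7 can be sketched as follows: drop the z component of the reconstructed contour, then measure the worst-case deviation of each projected point from the nearest point of the CAD slice contour. The 0.1 mm tolerance is an illustrative assumption, not a value from the patent:

```python
import numpy as np

# Project the reconstructed 3D contour onto the plane parallel to the
# substrate (drop z), then take, for each projected point, the distance to
# the nearest point of the CAD slice contour, and report the maximum.
def contour_deviation(contour3d, slice2d):
    proj = contour3d[:, :2]                       # projection onto z = const
    d = np.linalg.norm(proj[:, None, :] - slice2d[None, :, :], axis=2)
    return d.min(axis=1).max()                    # max of nearest-point dists

# toy CAD slice contour (square corners) and measured 3D contour points
slice2d = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
measured = np.array([[0.02, 0.01, 0.3], [10.05, 0.0, 0.3],
                     [10.0, 10.04, 0.3], [0.0, 9.98, 0.3]])
dev = contour_deviation(measured, slice2d)
ok = dev <= 0.1                                   # illustrative tolerance
```

Here the worst deviation is 0.05 mm, so the layer passes and processing would continue with the next layer.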
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. An online detection method for the contour accuracy of a forming layer in a selective laser melting process is characterized by comprising the following steps:
S1, collecting a substrate image by using a binocular camera, calculating the relative position relation between the camera coordinate system and the forming coordinate system, slicing the model of the part to be processed to obtain a plurality of slice layers, and generating an auxiliary image of each slice layer;
S2, laser scanning on the substrate to form a processing layer of the part to be processed;
S3, collecting the image of the processing layer on the substrate by using the binocular camera, and performing a first extraction combined with the auxiliary image of the slice layer corresponding to the processing layer to obtain the initial contour region of the part to be processed;
s4, carrying out second contour extraction on the initial contour region, carrying out three-dimensional reconstruction on the image contour extracted for the second time, and obtaining the actual contour C of the image under the forming coordinate systemr
S5 is used for determining the actual contour C of the imagerProjecting the two-dimensional projection profile on a plane parallel to the substrate to obtain a two-dimensional projection profile, comparing the two-dimensional projection profile with the profile of a corresponding slice layer of the part model to be processed, analyzing the profile precision of the processing layer, if the requirement is met, entering the step S6, and if the requirement is not met, finishing the processing;
s6, detecting whether the part to be processed is processed or not, if so, ending the process; if not, the process returns to step S2 until the entire part machining is finished.
2. The detection method according to claim 1, wherein the generation of the auxiliary images in step S1 comprises the following steps:
S21, performing an image dilation operation on the image of the slice layer to obtain an original image M0;
S22, calculating from the original image M0 an auxiliary image M1 for the first camera of the binocular camera and an auxiliary image M2 for the second camera, wherein the auxiliary images M1 and M2 are calculated by projecting each point X of the original image M0 into the two camera images:

s1 · m̃1 = A1 · [R1 T1] · X̃,  s2 · m̃2 = A2 · [R2 T2] · X̃

where m̃1, m̃2 and X̃ are homogeneous coordinates, s1 and s2 are scale factors, A1 and A2 are the intrinsic parameters of the first and second cameras, and R1, T1 and R2, T2 are the extrinsic parameters of the first and second cameras, respectively.
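The projection used to build the auxiliary images is the standard pinhole model s·m̃ = A(RX + T). A minimal sketch follows; the function name `project_points` and the example parameters are assumptions for illustration, not from the patent.

```python
import numpy as np

def project_points(points_w, A, R, T):
    """Project 3D points from the forming (world) coordinate system into a
    camera image via the pinhole model s * m~ = A @ (R @ X + T).

    points_w : (N, 3) array of world coordinates
    A        : 3x3 intrinsic matrix
    R, T     : 3x3 rotation matrix and 3-vector translation (extrinsics)
    Returns an (N, 2) array of pixel coordinates.
    """
    X = np.asarray(points_w, dtype=float)
    cam = X @ R.T + T                # transform into the camera frame
    uvw = cam @ A.T                  # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective division by depth
```

Applying this to the contour points of the slice layer (which lie in a plane parallel to the substrate) gives their pixel locations in each camera image.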
3. The detection method according to claim 2, wherein the extraction of the initial contour region in step S3 comprises the following steps:
S31, extracting the contours of the auxiliary image M1 and the auxiliary image M2, respectively;
S32, projecting the contour of the auxiliary image M1 onto the image P1 of the processed layer acquired by the first camera, and projecting the contour of the auxiliary image M2 onto the image P2 of the processed layer acquired by the second camera;
S33, removing from the image P1 the image content outside the region enclosed by the projected contour to obtain the initial contour region Ω1, and removing from the image P2 the image content outside the region enclosed by the projected contour to obtain the initial contour region Ω2.
4. The detection method according to claim 3, wherein the second contour extraction in step S4 comprises the following steps:
S41, extracting the gray values I1 of all pixels in the initial contour region Ω1 and the gray values I2 of all pixels in the initial contour region Ω2;
S42, substituting the gray values I1 into the energy functional of the Mumford-Shah model and performing an iterative operation to obtain the second-extracted image contour C1;
S43, substituting the gray values I2 into the energy functional of the Mumford-Shah model and performing an iterative operation to obtain the second-extracted image contour C2.
5. The detection method according to claim 4, wherein the energy functional of the Mumford-Shah model is:

E(u, C) = ∫Ω (u − I)² dx dy + μ ∫Ω\C |∇u|² dx dy + ν|C|

where I denotes the gray value, Ω is the initial contour region, C is the second-extracted image contour, u is a piecewise-smooth function, μ ∫Ω\C |∇u|² dx dy is the smoothing term, and ν|C| is the length regularization term.
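The Mumford-Shah energy is minimized iteratively. A minimal illustration is the piecewise-constant two-phase case (the Chan-Vese simplification), not the patent's exact piecewise-smooth functional; the function name and the mean-filter approximation of the length term ν|C| are assumptions.

```python
import numpy as np

def piecewise_constant_ms(img, init_mask, n_iter=10):
    """Two-phase piecewise-constant simplification of the Mumford-Shah
    energy (Chan-Vese style): alternately fit the mean gray value of the
    foreground and background, reassign each pixel to the closer mean
    (data-fidelity term), and approximate the length term nu*|C| with a
    3x3 mean filter that discourages ragged boundaries.
    """
    phi = init_mask.astype(float)
    for _ in range(n_iter):
        inside = phi > 0.5
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # data term: assign each pixel to the closer region mean
        phi = ((img - c1) ** 2 < (img - c2) ** 2).astype(float)
        # crude length regularization: 3x3 mean filter, then re-threshold
        pad = np.pad(phi, 1, mode="edge")
        h, w = phi.shape
        smooth = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
        phi = (smooth > 0.5).astype(float)
    return phi.astype(bool)
```

Seeding this with the initial contour region obtained in step S3 and taking the boundary of the returned mask gives a second, refined contour estimate.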
6. The detection method according to claim 4, wherein the three-dimensional reconstruction in step S4 comprises the following steps:
S51, selecting a contour point p1 on the second-extracted image contour C1 and obtaining its corresponding point p2 on the second-extracted image contour C2;
S52, obtaining three-dimensional contour data Pw by least-squares reconstruction:

s1 · p̃1 = A1 · [R1 T1] · P̃w,  s2 · p̃2 = A2 · [R2 T2] · P̃w

where p̃1 and p̃2 are the homogeneous coordinates of p1 and p2 respectively, P̃w is the homogeneous coordinate of Pw, and s1 and s2 are scale factors;
S53, repeating steps S51 and S52 until all three-dimensional contour data are acquired;
S54, combining all three-dimensional contour data to obtain the actual contour Cr in the forming coordinate system.
CN201811353936.7A 2018-11-14 2018-11-14 Online detection method for contour accuracy of forming layer in selective laser melting process Active CN109483887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811353936.7A CN109483887B (en) 2018-11-14 2018-11-14 Online detection method for contour accuracy of forming layer in selective laser melting process


Publications (2)

Publication Number Publication Date
CN109483887A CN109483887A (en) 2019-03-19
CN109483887B true CN109483887B (en) 2020-02-21

Family

ID=65695920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811353936.7A Active CN109483887B (en) 2018-11-14 2018-11-14 Online detection method for contour accuracy of forming layer in selective laser melting process

Country Status (1)

Country Link
CN (1) CN109483887B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435271B (en) * 2020-12-02 2023-06-20 电子科技大学中山学院 Firing contour segmentation method applied to laser spot quality measurement
CN114101707B (en) * 2021-11-22 2022-12-09 南昌大学 Laser additive manufacturing power control method, system, medium, and electronic device
CN114932289B (en) * 2022-06-04 2024-04-23 南京理工大学 Device and method for controlling dimension precision of material increase of large-sized component
CN116012330B (en) * 2022-12-28 2023-10-20 广州市易鸿智能装备有限公司 Pole piece defect detection method, device, equipment and computer storage medium
CN115841484B (en) * 2022-12-30 2023-04-25 武汉誉城千里建工有限公司 Steel structure welding quality detection system based on three-dimensional laser scanning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600690A (en) * 2016-12-30 2017-04-26 厦门理工学院 Complex building three-dimensional modeling method based on point cloud data
CN107727011A (en) * 2017-09-14 2018-02-23 华中科技大学 Selective laser melting manufacturing process midplane degree and profile tolerance On-line Measuring Method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180036964A1 (en) * 2016-08-08 2018-02-08 General Electric Company Method and system for inspection of additive manufactured parts
CN108381912B (en) * 2017-12-11 2020-05-05 中国科学院光电研究院 3D prints monitoring system based on laser-induced plasma spectrum




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant