WO2021228181A1 - 3D printing method and device - Google Patents

3D printing method and device

Info

Publication number
WO2021228181A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
nozzle
printing
frame
position information
Prior art date
Application number
PCT/CN2021/093520
Other languages
English (en)
French (fr)
Inventor
李俊
高银
谢银辉
唐康来
Original Assignee
中国科学院福建物质结构研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/CN2020/090093, published as WO2021226891A1
Application filed by 中国科学院福建物质结构研究所 (Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences)
Publication of WO2021228181A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B29 - WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C - SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 - Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30 - Auxiliary operations or equipment
    • B29C64/386 - Data acquisition or data processing for additive manufacturing
    • B29C64/393 - Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B33 - ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y - ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00 - Data acquisition or data processing for additive manufacturing
    • B33Y50/02 - Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images

Definitions

  • The present invention relates to the field of printing technology, and in particular to a 3D printing method and device.
  • 3D printing is used more and more widely. For example, diseases and traffic accidents can cause serious damage to human bones, costing many patients the ability to care for themselves and severely affecting them and their families. For bone injuries, especially the repair of large bone defects, autologous tissue transplantation, allogeneic tissue transplantation, and repair with substitute materials all have major drawbacks, such as the need for a second operation, limits on the amount of bone that can be harvested, the possibility of disease transmission, and low osteogenic activity. Artificial bone can replace traditional autologous or allogeneic bone and spare patients secondary trauma, and has therefore become a research hotspot in artificial bone scaffold materials and their preparation. Because 3D printing can control the pore size, porosity, connectivity, and specific surface area of a scaffold and also allows individual customization, it is increasingly applied to the preparation of artificial bone scaffold materials.
  • The purpose of the present invention is to provide a 3D printing method and device to effectively improve the accuracy of 3D printing.
  • To solve the above technical problem, the present invention provides the following technical solutions:
  • a 3D printing method, including:
  • after receiving an input printing model and starting 3D printing, using the detected position information of the robotic arm to start at least two of the N image acquisition devices that correspond to the position information for continuous shooting of the robotic arm, where N is a positive integer not less than 2;
  • performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment.
  • Optimizing the quality of each frame captured by each currently started image acquisition device with a preset pre-optimization algorithm includes:
  • converting the frame into an HSV image, removing pixels outside a preset first range, smoothing the result, and extracting contours from the smoothed image to obtain an image whose quality has been optimized by the first pre-optimization algorithm based on color segmentation.
  • The preset pre-optimization algorithms include a first pre-optimization algorithm based on color segmentation and a second pre-optimization algorithm based on fast multi-exposure fusion, with the first algorithm as the default;
  • the 3D printing method further includes:
  • updating the average brightness of the quality-optimized frames obtained so far and, when that average falls below a preset brightness threshold, turning off the first pre-optimization algorithm based on color segmentation for the current 3D printing run and turning on the second pre-optimization algorithm based on fast multi-exposure fusion;
  • correspondingly, optimizing the quality of each frame captured by each currently started image acquisition device with the second algorithm includes:
  • determining the frame's color weight map and, after converting the frame to grayscale, its local contrast weight and exposure weight map; combining these into a fusion weight; and fusing the frame based on the fusion weight to obtain an image whose quality has been optimized by the second pre-optimization algorithm based on fast multi-exposure fusion.
  • Determining the position information of the nozzle tip in the nozzle area of each frame includes:
  • segmenting out the nozzle-tip image with the K-means algorithm, obtaining its edge image with the Canny detection algorithm, and determining the nozzle tip in the edge image with the Hough line detection algorithm to obtain the tip's position information.
  • The accumulator threshold of the Hough line detection algorithm is set to 30, the threshold between two lines is set to 10, and the number of fitted lines is capped at 3;
  • correspondingly, determining the nozzle tip in each frame's nozzle-tip image with the Hough line detection algorithm includes:
  • when two Hough lines of the nozzle tip are found, taking their intersection as the coordinate point of the nozzle tip;
  • when more than two Hough lines of the nozzle tip are found, judging from the lowest point of the nozzle's outer contour and the orientation of the end of the robotic arm whether printing has ended.
  • Performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values at those moments includes judging, for each moment, whether the error between the two exceeds a preset error range, and compensating only when it does.
  • Compensating the error at a given moment includes compensating it by controlling the movement of the printing table.
  • The trained target recognition algorithm is a CNN-based target recognition algorithm.
  • The method may further include:
  • smoothing and optimizing each frame with the adaptive boundary constraint method and the least-squares method before the nozzle-tip position information is determined.
  • A 3D printing device includes:
  • a main body of the 3D printing device; and
  • a controller configured to execute a computer program to implement the steps of any one of the above 3D printing methods.
  • In the above solutions, 3D printing compensation is realized based on visual feedback.
  • Specifically, this application determines the position information of the nozzle tip in the nozzle area of each frame and, from the position information of the frames captured at the same moment,
  • determines the three-dimensional coordinates of the nozzle tip at that moment. 3D printing compensation can therefore be performed based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at those moments. Because this compensation is performed, the solution of the present application effectively improves the accuracy of 3D printing.
  • This application takes into account that, because of the motion of the robotic arm, a single image acquisition device may be occluded. Therefore, after 3D printing starts, the application uses the detected position information of the robotic arm to start at least two of the N image acquisition devices that correspond to that position information for continuous shooting of the arm, so the print nozzle is captured well and occlusion is unlikely. Further, before the nozzle area is recognized, the application optimizes the quality of each frame captured by each started image acquisition device with a preset pre-optimization algorithm, which helps ensure the accuracy of the subsequently recognized three-dimensional coordinates of the nozzle tip and thus further improves the accuracy of the 3D printing of the present application.
  • Figure 1 is a flow chart of an implementation of a 3D printing method of the present invention;
  • FIG. 2 is a schematic structural diagram of a 3D printing device in a specific embodiment of the present invention;
  • Figures 3a, 3b, 3c, 3d, and 3e are, in order, a schematic diagram of the tracked nozzle-area position, a K-means detection diagram, an edge image, a Hough line detection diagram, and a nozzle-tip positioning diagram in a specific embodiment of the present invention;
  • Figures 4a, 4b, and 4c are, in order, schematic diagrams of the different cases of obtaining the intersection point of Hough lines;
  • FIG. 5 is a schematic diagram of the compensation effect in a specific embodiment of the present invention;
  • Figures 6a and 6b are, respectively, a schematic diagram of a hybrid printing path and a schematic diagram of added control points in a specific embodiment of the present invention;
  • FIGS. 7a and 7b are, respectively, schematic diagrams of a free-form surface model and its point cloud in a specific embodiment of the present invention;
  • Figures 8a and 8b are, respectively, schematic diagrams of cross-section and overall point-cloud fitting in a specific embodiment of the present invention.
  • The core of the present invention is to provide a 3D printing method that effectively improves the accuracy of 3D printing.
  • FIG. 1 is a flowchart of an implementation of a 3D printing method of the present invention.
  • The 3D printing method may include the following steps:
  • Step S101: after receiving an input printing model and starting 3D printing, use the detected position information of the robotic arm to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the robotic arm, where N is a positive integer not less than 2.
  • Once a model to be 3D printed is input through a computer, 3D printing can be carried out under the control of the robotic arm.
  • This application needs to photograph the robotic arm to obtain the three-dimensional coordinates of the nozzle tip at the end of the arm, and it considers that a single image acquisition device may, as the arm moves, become unable to capture the tip of the print head. Therefore, after 3D printing starts, this application uses the detected position information of the robotic arm to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the arm.
  • The image acquisition devices corresponding to a given piece of position information can be set in advance, usually as those devices that can effectively capture the nozzle tip under the current position information of the robotic arm.
  • Different position information of the robotic arm therefore activates different image acquisition devices.
  • In one embodiment, 4 image acquisition devices form 2 binocular systems.
  • Several binocular systems can be added dynamically as needed.
  • A binocular system has the advantages of high efficiency, suitable accuracy, simple structure, and low cost, and is very suitable for online, non-contact product inspection and quality control on the manufacturing floor.
  • Because image acquisition is completed in an instant, a parallel binocular system is a more effective measurement method for moving objects.
  • Using two binocular systems can usually ensure that the nozzle tip is detectable.
  • The set of image acquisition devices to start can be updated periodically.
  • Other update triggers are possible; for example, in the implementation below, the set of devices to start is updated when the Hough line detection algorithm finds more than two Hough lines of the nozzle tip and printing has not ended.
  • There can be further triggers for updating the set of devices to start, set according to actual needs.
  • The image acquisition devices can usually be industrial cameras.
  • The specific type of the robotic arm can also be set as required.
  • In Fig. 2, it is a six-axis robotic arm, with 4 industrial cameras arranged around it.
  • The printing platform is set in the middle of the 4 industrial cameras.
  • Step S102: optimize, with a preset pre-optimization algorithm, the quality of each frame captured by each currently started image acquisition device.
  • Image quality is optimized with a preset pre-optimization algorithm so that the subsequently recognized three-dimensional coordinates of the nozzle tip are more accurate, which also helps further improve the accuracy of the 3D printing of this application.
  • Step S102 can specifically include:
  • Step 1: for any frame whose quality is to be optimized, convert the frame into an HSV image with a color conversion function;
  • Step 2: for any pixel in the HSV image converted from the frame, remove the pixel when it does not fall within the preset first range, and smooth the resulting image;
  • Step 3: extract contours from the smoothed image to obtain an image whose quality has been optimized by the first pre-optimization algorithm based on color segmentation.
  • Here the image is pre-optimized with the first pre-optimization algorithm based on color segmentation.
  • The input color image is converted to the HSV (Hue, Saturation, Value) color space with a color conversion function to obtain the HSV image.
  • The first range is formed from the respective preset ranges of H, S, and V.
  • The preset ranges of H, S, and V are as shown in Table 1.
  • The color of the print head can be red, purple, green, cyan, or blue; the respective preset ranges of H, S, and V are set based on these 5 colors and together form the preset first range. That is, for any pixel, when its H, S, and V values fall within the ranges of Table 1, the pixel is kept; otherwise, the pixel is removed.
  • The resulting image is then smoothed; for example, median filtering can be chosen to remove single-point noise.
  • The contours of the smoothed image are extracted, yielding the image quality-optimized by the first pre-optimization algorithm based on color segmentation.
  • Extracting contours from the smoothed image means drawing the circumscribed rectangle of each independent object and removing targets inside irrelevant rectangles by aspect ratio and area.
  • After quality optimization by the first pre-optimization algorithm based on color segmentation, only the printed spray remains in the image.
  • The first pre-optimization algorithm based on color segmentation is simple and convenient to run, which helps reduce the execution time of the solution.
  • The preset pre-optimization algorithms include the first pre-optimization algorithm based on color segmentation and the second pre-optimization algorithm based on fast multi-exposure fusion, with the first as the default; that is, every 3D printing run starts with the first pre-optimization algorithm by default.
  • The 3D printing method further includes:
  • turning off the first pre-optimization algorithm based on color segmentation for the current 3D printing run and turning on the second pre-optimization algorithm based on fast multi-exposure fusion;
  • the brightness of each quality-optimized frame is computed continuously and the brightness average is updated; if the updated average falls below the preset brightness threshold, the obtained images are too dark, so in this implementation the second pre-optimization algorithm based on fast multi-exposure fusion is turned on.
  • Optimizing the quality of each captured frame with the second pre-optimization algorithm based on fast multi-exposure fusion may specifically include the following steps:
  • Step 1: for any frame whose quality is to be optimized, determine the frame's color weight map and, after converting the frame into a grayscale image, obtain the frame's local contrast weight and exposure weight map;
  • Step 2: multiply the exposure weight map by the color weight map and normalize, multiply the normalized result by the local contrast weight, and filter to obtain the frame's fusion weight;
  • Step 3: fuse the frame based on the fusion weight to obtain an image whose quality has been optimized by the second pre-optimization algorithm based on fast multi-exposure fusion.
  • The input image can first be converted into a grayscale image.
  • Adjacent frames can also be given different degrees of initial correction by gamma correction and high- and low-pass filtering; the maximum brightness value of each pixel across these images can then be taken as the local contrast weight.
  • A discriminant method can be used to judge the brightness of the grayscale image.
  • For example, with a brightness threshold of 30, the interval [30, 255-30] is considered a reasonable brightness interval and set to 1, with the rest set to 0, giving the exposure weight map.
  • Histogram equalization is applied to the input image and then median filtering to obtain the initial color weight map; dilation and erosion operations then give the final color weight map;
  • the exposure weight map and the color weight map are multiplied, and the result is normalized;
  • the normalized result is multiplied by the local contrast weight to obtain the initial fusion weight, which is then filtered with a recursive filtering method to obtain the final fusion weight;
  • finally, the input frames are fused according to the final fusion weight, completing quality optimization by the second pre-optimization algorithm based on fast multi-exposure fusion.
  • Step S103: for the quality-optimized frames captured by any one image acquisition device, recognize the nozzle area of the first frame with the trained target recognition algorithm, and track the nozzle area in subsequent frames with the target tracking algorithm.
  • The target recognition algorithm needs to be able to recognize the nozzle area of an image.
  • The target recognition algorithm can specifically be a target recognition algorithm based on CNN (Convolutional Neural Networks).
  • The target recognition algorithm needs to be trained in advance. Taking the CNN-based algorithm as an example: several relatively complex printing models can be input through the computer in advance, for example at least three, choosing models with varied surfaces and of many types, ideally covering as many of the surface forms occurring during printing as possible. After the input, the printer is run empty, i.e., without air supply and without adding printing material. The cameras set up around the printer separately capture video of the moving print nozzle, saved as individual frames, until printing ends. The nozzle area in each frame is then labeled as a training sample. Finally, training with the constructed CNN training network yields the final result, i.e., the trained target recognition algorithm.
  • The trained target recognition algorithm can recognize the nozzle area of the first frame, after which the target tracking algorithm tracks the nozzle area in subsequent frames. This is more efficient than having the target recognition algorithm recognize the nozzle area in every frame.
  • The specific type of target tracking algorithm can also be selected as needed; for example, in one specific scenario it can be a target tracking algorithm based on KCF (Kernel Correlation Filter).
  • The flow of the method is: for the next frame, it first extracts HOG features from multiple regions around the selected ROI, i.e., around the selected nozzle area, and then solves with a circulant matrix to obtain the ROI selected in the next frame.
  • The method first proposes an efficient alternative for finding the solution of an objective function defined on a weighted L2 norm, decomposing the objective function along each spatial dimension and solving the matrix with a fast one-dimensional solver; it is then extended to the more general case by solving an objective function defined on a weighted Lr norm (0 < r < 2), or by using aggregated data terms that cannot be realized in existing EP filters.
  • In frame I_t, samples can be taken near the current position P_t to train a regressor that can compute the response of a small sampled window. Then, in frame I_{t+1}, samples are taken near the previous position P_t and the regressor judges each sample's response. Finally, the sample with the strongest response is taken as the current-frame position P_{t+1}, realizing tracking and positioning of the nozzle area.
  • Step S104: determine the position information of the nozzle tip in the nozzle area of each frame, and determine the three-dimensional coordinates of the nozzle tip at a given moment from the position information of the frames captured at that moment.
  • Once a frame's nozzle area is determined, the position information of the nozzle tip within it can be determined, and the position information of the frames captured at the same moment then yields
  • the three-dimensional coordinates of the nozzle tip at that moment. For example, in one scenario, the parallax principle of binocular vision can be used to determine the three-dimensional coordinate point of the nozzle tip.
  • The method may further include:
  • smoothing and optimizing each frame with the adaptive boundary constraint method and the least-squares method before step S104.
  • The adaptive boundary constraint method and the least-squares method smooth and optimize each frame, effectively keeping object edges intact while smoothing the remaining non-edge areas.
  • The adaptive boundary constraint can be expressed as a function (given only as an equation image in the original) in which t_i(x) denotes the output image, I_c(x) denotes the input image, A_i denotes the mean of the largest 10% of values of a given color channel of the image, and the two remaining symbols are the maximum and minimum pixel values of that channel's image.
  • Determining the position information of the nozzle tip in the nozzle area of each frame, as described in step S104, may specifically include:
  • Step 1: segment the nozzle-tip image out of the nozzle area of each frame with the K-means algorithm;
  • Step 2: obtain the edge image of each frame's nozzle-tip image with the Canny detection algorithm;
  • Step 3: determine the nozzle tip in the edge image of each frame's nozzle-tip image with the Hough line detection algorithm, and obtain the position information of the nozzle tip.
  • The K-means algorithm can be used for classification.
  • The selected print-head area can be divided into 3 classes to distinguish the nozzle tip, the printing plate surface, and the printing material.
  • The print-head class is taken as the second class of the K-means classification; that is, this application extracts only the second class and sets the remaining classes to white, which shields noise interference well.
  • The Canny detection algorithm can then obtain the edge of the print head quite effectively, i.e., extract a fairly complete edge image; the edge points in that image serve as the data points for the Hough line detection algorithm, which then yields the position information of the nozzle tip.
  • FIGS. 3a to 3e are, in order, a schematic diagram of the tracked nozzle-area position, a K-means detection diagram, an edge image, a Hough line detection diagram, and a nozzle-tip positioning diagram in a specific embodiment.
  • Setting the accumulator threshold of the Hough line detection algorithm to 30, the threshold between two lines to 10, and the number of fitted lines to at most 3 helps determine the specific location of the nozzle tip more accurately.
  • Step 3 above may specifically include:
  • when the Hough line detection algorithm finds 2 Hough lines of the nozzle tip, taking their intersection as the coordinate point of the nozzle tip;
  • when the Hough line detection algorithm finds more than 2 Hough lines of the nozzle tip, judging from the lowest point of the nozzle's outer contour and the orientation of the end of the robotic arm whether printing has ended;
  • when more than two Hough lines of the nozzle tip are found, i.e., when there are three, it is first judged whether printing has ended. There are several ways to judge this; for example, in one scenario, the position of the lowest point of the nozzle's outer contour can be determined and compared with the direction of the end of the robotic arm. If the comparison shows that the end of the arm points upward, printing has ended, so the nozzle endpoint can be set as an invalid point, i.e., the frame can be treated as an invalid image.
  • The set of image acquisition devices to start may need to be updated.
  • The set of devices to start can be updated periodically.
  • The set of devices to start can also be updated when the three-dimensional coordinates of the nozzle tip cannot be detected. It is understandable that after the update, the trained target recognition algorithm must again recognize the nozzle area of the first frame.
  • Step S105: perform 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment.
  • Step S105 may specifically include:
  • compensating the error at a given moment by controlling the movement of the printing table, which helps preserve the continuity of printing; controlling the movement of the printing table is also relatively convenient.
  • In this way, 3D printing compensation is realized based on visual feedback.
  • Specifically, this application determines the position information of the nozzle tip in the nozzle area of each frame and, from the position information of the frames captured at the same moment,
  • determines the three-dimensional coordinates of the nozzle tip at that moment. 3D printing compensation can therefore be performed based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at those moments. Because this compensation is performed, the solution of the present application effectively improves the accuracy of 3D printing.
  • This application takes into account that, because of the motion of the robotic arm, a single image acquisition device may be occluded. Therefore, after 3D printing starts, the application uses the detected position information of the robotic arm to start at least two of the N image acquisition devices that correspond to that position information for continuous shooting of the arm, so the print nozzle is captured well and occlusion is unlikely. Further, before the nozzle area is recognized, the application optimizes the quality of each frame captured by each started image acquisition device with a preset pre-optimization algorithm, which helps ensure the accuracy of the subsequently recognized three-dimensional coordinates of the nozzle tip and thus further improves the accuracy of the 3D printing of the present application.
  • In one specific scenario, the robotic arm is specifically a six-axis robotic arm,
  • and the printing platform is specifically a four-axis linkage printing platform.
  • The motion of the six-axis robotic arm enables spray printing on complex, fine object surfaces; the print-nozzle tip is tracked and positioned by a multi-camera system, and, combined with printing compensation by the four-axis linkage platform, high-precision 3D printing on the surface of a bioprosthesis can be achieved.
  • The exterior of the 3D printing system can use aluminum alloy brackets, with relatively lightweight PC compression panels for the walls.
  • The six-axis robotic arm has six spatial degrees of freedom and high motion flexibility and can position precisely in complex curved-surface spaces.
  • The print head is installed at the end of the six-axis robotic arm.
  • The six-axis robotic arm controls the print head's three-dimensional patterned printing on the bioprosthesis surface.
  • The discharge of the print nozzle is regulated by an electronic pressure regulator to ensure uniformity of discharge.
  • The printing platform is a four-axis linkage platform with four degrees of freedom of motion, including linear motion in the three directions X, Y, and Z and rotational motion about the Z axis, all of which can run simultaneously.
  • The four-axis linkage platform consists of three linear modules and a high-precision turntable. Through three-dimensional motion in space and rotation about the Z axis, it cooperates with the six-axis arm's control of the print head's motion and positioning to adjust the printing position and realize patterned printing of complex curved surfaces.
  • Preprocessing of the print model can follow two different flows.
  • For additive (stacked) 3D printing, the model data flow is to slice the model and plan a per-slice path, finally forming the complete printing path;
  • for free-form-surface coating 3D printing, the flow is to first find the path points on the surface, then triangulate the path points to find each point's normal vector, compute the attitude parameters controlling the manipulator from the normal vectors, and finally combine the path-point positions to form the control file of the free-form-surface spraying path.
  • When this embodiment performs compensation, the nozzle-tip position is detected by binocular vision; after the position error of the path point is computed, compensation is performed by moving the printing platform, overcoming the problems described above.
  • The compensation process can be: set the spraying path points about 1 mm apart in the XY plane, and send a signal to the vision system when the end of the robotic arm reaches a path point. After the vision system obtains the three-dimensional coordinates of the nozzle tip, it compares them with the preset position, i.e., with the coordinate input value of the nozzle tip, and uses the error between the two as the compensation value.
  • The printing platform can then move back and forth according to the compensation value, i.e., move by half the compensation value and then return to its original position.
  • The error compensation effect is shown in Figure 5.
  • The leftmost image of Figure 5 shows the uncompensated printing effect.
  • After the compensation value is obtained, this application can move the printing platform to the right by half the compensation value and then return it, i.e., move the platform to the left by half the compensation value.
  • The rightmost image of Figure 5 shows the printing effect after compensation.
  • The robotic arm can be controlled to move along a hybrid path.
  • The hybrid path shown in Figure 6a improves edge accuracy compared with the zigzag path, but printing material accumulates at edge inflection points. Therefore, further, as shown in Figure 6b, control points can be added on the edge path; raising the running speed of the turning path after a control point and reducing the material extrusion pressure can effectively reduce material accumulation at the inflection points.
  • Controlling the position and attitude of the end of the robotic arm is the core key technology of 3D printing.
  • A line laser scanner can be used to acquire the point cloud data of the surface, and the path points controlling the arm's printing can be generated according to the required printing-point spacing.
  • The points are then triangulated, the normal vector of each triangle vertex is computed, the arm's attitude control parameters are computed from the normal vectors, and the arm control vectors are generated by combining the position parameters.
  • The surface model of Fig. 7a is used to illustrate the data processing of free-form-surface spraying.
  • The point cloud scanned by the line laser sensor is shown in Figure 7b. The scanned point cloud shows faults along the height of the surface, which strongly affect surface-spraying accuracy, so the surface point cloud must be fitted according to the error situation.
  • The scanning interval of the sensor's X-axis point cloud is set to 0.3 mm, and the Y-axis scanning interval to 1 mm.
  • Two-dimensional point fitting can be used instead of three-dimensional surface reconstruction: fit the X-axis cross-sections of the point cloud model first, then the Y-axis cross-sections.
  • The fitting method is least squares. After each group of points is fitted, they are merged into a new three-dimensional model. When the model surface is complex, the fitting function of each point set can differ, which ensures higher accuracy of the final surface model. One point set is taken as a graphical example.
  • The second-order function fit is closest to the original model; as shown in Figure 8a, the original points, the second-order fit, and the fourth-order fit can be seen.
  • The original point cloud model and the fitted point cloud model are shown in Figure 8b; fitting eliminates the staircase-effect error.
  • The normal vector of each point in the point cloud model must be computed as the attitude control parameter for when the robotic arm sprays that point.
  • Triangulation is used to reconstruct the surface, and each point's normal vector is obtained by computing vectors to adjacent points and taking cross products. The results are shown in Figure 9.
  • The attitude control parameters of the UR3 manipulator must then be computed from the normal vector.
  • The computation is to first compute the corresponding Euler angles (roll, pitch, yaw) from the spatial normal vector, and then convert them into the rotation vector Rx, Ry, Rz that controls the arm's attitude. Combined with the control positions x, y, z of the path points, the arm's 6-dimensional control vector (x, y, z, Rx, Ry, Rz) can be obtained.
  • The 6-dimensional control vectors of all path points of the free-form surface can thus be obtained, realizing printing control.
  • An embodiment of the present invention also provides a 3D printing device, which can be cross-referenced with the above.
  • The 3D printing device may include:
  • a main body of the 3D printing device; and
  • a controller configured to execute a computer program to implement the steps of the 3D printing method described in any of the above embodiments.
  • The main body of the 3D printing device refers to the parts of the 3D printing device other than the controller; its specific composition can be set and adjusted according to the 3D printing device used in the actual application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Materials Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Manufacturing & Machinery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)

Abstract

A 3D printing method and device, comprising: after 3D printing is started, using the position information of the robotic arm to start at least 2 image acquisition devices for continuous shooting of the robotic arm; optimizing the quality of the captured frames; recognizing the nozzle area of the first frame with a trained target recognition algorithm, and tracking the nozzle area in subsequent frames with a target tracking algorithm; determining the position information of the nozzle tip in the nozzle area of each frame, and determining the three-dimensional coordinates of the nozzle tip at a given moment from the position information of the frames captured at that moment; and performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment. The solution of the present application effectively improves the accuracy of 3D printing.

Description

3D printing method and device
This application claims priority to Chinese patent application No. 202010403841.2, filed with the Chinese Patent Office on May 13, 2020 and entitled "3D printing device and method based on multi-axis linkage control and machine-vision feedback measurement", and to international patent application No. PCT/CN2020/090093, filed with the International Bureau on May 13, 2020 with the same title, the entire contents of both of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of printing technology, and in particular to a 3D printing method and device.
Background
With the development of technology, 3D printing is used more and more widely. For example, diseases and traffic accidents can cause serious damage to human bones; many patients lose the ability to care for themselves, which severely affects them and their families. For bone injuries, especially the repair of large bone defects, autologous tissue transplantation, allogeneic tissue transplantation, and repair with substitute materials all have major drawbacks, such as the need for a second operation, limits on the amount of bone that can be harvested, the possibility of disease transmission, and low osteogenic activity. Artificial bone can replace traditional autologous or allogeneic bone and spare patients secondary trauma, and has therefore become a research hotspot in artificial bone scaffold materials and their preparation. Because 3D printing can control the pore size, porosity, connectivity, and specific surface area of a scaffold and also allows individual customization, it is increasingly applied to the preparation of artificial bone scaffold materials.
However, when current 3D printing equipment is used to prepare artificial bone scaffold materials, or devices in some other fields, its accuracy is limited; some solutions apply simple corrections, but the accuracy remains limited, which is detrimental to the stability and safety of the printed devices.
In summary, how to effectively improve the accuracy of 3D printing is a technical problem that those skilled in the art urgently need to solve.
Summary of the Invention
The purpose of the present invention is to provide a 3D printing method and device to effectively improve the accuracy of 3D printing.
To solve the above technical problem, the present invention provides the following technical solutions:
A 3D printing method, including:
after receiving an input printing model and starting 3D printing, using the detected position information of the robotic arm to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the robotic arm, where N is a positive integer not less than 2;
optimizing, with a preset pre-optimization algorithm, the quality of each frame captured by each currently started image acquisition device;
for the quality-optimized frames captured by any one image acquisition device, recognizing the nozzle area of the first frame with a trained target recognition algorithm, and tracking the nozzle area in subsequent frames with a target tracking algorithm;
determining the position information of the nozzle tip in the nozzle area of each frame, and determining the three-dimensional coordinates of the nozzle tip at a given moment from the position information of the frames captured at that moment;
performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment.
Preferably, optimizing the quality of each frame captured by each currently started image acquisition device with the preset pre-optimization algorithm includes:
for any frame whose quality is to be optimized, converting the frame into an HSV image with a color conversion function;
for any pixel in the HSV image converted from the frame, removing the pixel when it does not fall within a preset first range, and smoothing the resulting image;
extracting contours from the smoothed image to obtain an image whose quality has been optimized by a first pre-optimization algorithm based on color segmentation.
Preferably, the preset pre-optimization algorithms include the first pre-optimization algorithm based on color segmentation and a second pre-optimization algorithm based on fast multi-exposure fusion, with the first algorithm as the default;
the 3D printing method further includes:
updating the average brightness of the quality-optimized frames obtained so far in the current 3D printing run;
when the average brightness is judged to be below a preset brightness threshold, turning off the first pre-optimization algorithm based on color segmentation for the current run and turning on the second pre-optimization algorithm based on fast multi-exposure fusion;
correspondingly, optimizing the quality of each frame captured by each currently started image acquisition device with the second pre-optimization algorithm based on fast multi-exposure fusion includes:
for any frame whose quality is to be optimized, determining the frame's color weight map, and, after converting the frame into a grayscale image, obtaining the frame's local contrast weight and exposure weight map;
multiplying the exposure weight map by the color weight map and normalizing, multiplying the normalized result by the local contrast weight, and filtering to obtain the frame's fusion weight;
fusing the frame based on the fusion weight to obtain an image whose quality has been optimized by the second pre-optimization algorithm based on fast multi-exposure fusion.
Preferably, determining the position information of the nozzle tip in the nozzle area of each frame includes:
segmenting the nozzle-tip image out of the nozzle area of each frame with the K-means algorithm;
obtaining the edge image of each frame's nozzle-tip image with the Canny detection algorithm;
determining the nozzle tip in the edge image of each frame's nozzle-tip image with the Hough line detection algorithm, and obtaining the position information of the nozzle tip.
Preferably, the accumulator threshold of the Hough line detection algorithm is set to 30, the threshold between two lines is set to 10, and the number of fitted lines is set to at most 3;
correspondingly, determining the nozzle tip in each frame's nozzle-tip image with the Hough line detection algorithm includes:
when the Hough line detection algorithm finds 2 Hough lines of the nozzle tip, taking the intersection of the 2 lines as the coordinate point of the nozzle tip;
when the Hough line detection algorithm finds more than 2 Hough lines of the nozzle tip, judging from the lowest point of the nozzle's outer contour and the orientation of the end of the robotic arm whether printing has ended;
if so, treating the frame as an invalid image;
if not, of the 2 lines on the same side, selecting the one closer to the line on the other side, and taking its intersection with the line on the other side as the coordinate point of the nozzle tip.
Preferably, performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment includes:
for the determined three-dimensional coordinates of the nozzle tip at any moment, judging whether the error between those coordinates and the coordinate input value of the nozzle tip at that moment exceeds a preset error range;
if so, compensating the error at that moment in the 3D printing;
if not, ignoring the error at that moment.
Preferably, compensating the error at that moment in the 3D printing includes:
compensating the error at that moment by controlling the movement of the printing table.
Preferably, the trained target recognition algorithm is a CNN-based target recognition algorithm.
Preferably, the method further includes:
before determining the position information of the nozzle tip in the nozzle area of each frame, smoothing and optimizing each frame with the adaptive boundary constraint method and the least-squares method.
A 3D printing device, including:
a 3D printing device main body; and
a controller configured to execute a computer program to implement the steps of any of the above 3D printing methods.
Applying the technical solutions provided by the embodiments of the present invention, 3D printing compensation is realized based on visual feedback. Specifically, the present application determines the position information of the nozzle tip in the nozzle area of each frame and, from the position information of the frames captured at the same moment, determines the three-dimensional coordinates of the nozzle tip at that moment; compensation can then be performed based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment. Because this compensation is performed, the solution of the present application effectively improves the accuracy of 3D printing. Moreover, the present application takes into account that, because of the motion of the robotic arm, a single image acquisition device may be occluded; therefore, after 3D printing starts, the detected position information of the robotic arm is used to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the arm, ensuring that the print nozzle is captured well and occlusion is unlikely. Further, before the nozzle area is recognized, the present application optimizes the quality of each frame captured by each started image acquisition device with a preset pre-optimization algorithm, which helps ensure the accuracy of the subsequently recognized three-dimensional coordinates of the nozzle tip and thus further improves the accuracy of the 3D printing of the present application.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a flow chart of an implementation of a 3D printing method of the present invention;
Figure 2 is a schematic structural diagram of a 3D printing device in a specific embodiment of the present invention;
Figures 3a, 3b, 3c, 3d, and 3e are, in order, a schematic diagram of the tracked nozzle-area position, a K-means detection diagram, an edge image, a Hough line detection diagram, and a nozzle-tip positioning diagram in a specific embodiment of the present invention;
Figures 4a, 4b, and 4c are, in order, schematic diagrams of the different cases of obtaining the intersection point of Hough lines;
Figure 5 is a schematic diagram of the compensation effect in a specific embodiment of the present invention;
Figures 6a and 6b are, respectively, a schematic diagram of a hybrid printing path and a schematic diagram of added control points in a specific embodiment of the present invention;
Figures 7a and 7b are, respectively, schematic diagrams of a free-form surface model and its point cloud in a specific embodiment of the present invention;
Figures 8a and 8b are, respectively, schematic diagrams of cross-section and overall point-cloud fitting in a specific embodiment of the present invention.
Detailed Description
The core of the present invention is to provide a 3D printing method that effectively improves the accuracy of 3D printing.
To help those skilled in the art better understand the solution of the present invention, the present invention is described in further detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the scope of protection of the present invention.
Please refer to Figure 1, a flow chart of an implementation of a 3D printing method of the present invention. The 3D printing method may include the following steps:
Step S101: after receiving an input printing model and starting 3D printing, use the detected position information of the robotic arm to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the robotic arm, where N is a positive integer not less than 2.
After the printer is turned on and a model to be 3D printed is input through a computer, 3D printing can proceed and the robotic arm can be controlled.
The present application needs to photograph the robotic arm to obtain the three-dimensional coordinates of the nozzle tip at the end of the arm. It takes into account that, with a single image acquisition device, the device may be unable to capture the nozzle tip at some angles as the arm moves. Therefore, after 3D printing starts, the detected position information of the robotic arm is used to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the arm.
It can be understood that the image acquisition devices corresponding to a given piece of position information can be set in advance, usually as the devices that can effectively capture the nozzle tip under the current position information of the robotic arm; different arm positions therefore activate different devices. For example, in the scenario of Figure 2, N = 4, and each time the 2 of the 4 image acquisition devices facing the tilt direction of the robotic arm 10 are started for continuous shooting. Of course, more devices can be provided in other scenarios, but in general N = 4, with the 2 devices facing the arm's tilt direction started each time, is usually enough to capture the nozzle tip well and thus determine its three-dimensional coordinates accurately. In Figure 2, the six-axis robotic arm, print nozzle, industrial cameras, and printing platform are labeled 10, 20, 30, and 40, respectively.
It should also be noted that in this embodiment the 4 image acquisition devices form 2 binocular systems; in other embodiments, further binocular systems can be added dynamically as needed. A binocular system has the advantages of high efficiency, suitable accuracy, simple structure, and low cost, and is very well suited to online, non-contact product inspection and quality control on the manufacturing floor. When measuring a moving object, a parallel binocular system is a more effective measurement method because image acquisition is completed in an instant. Since the robotic arm can occlude the view during motion, using two binocular systems usually guarantees that the nozzle tip can be detected.
In addition, in practice, as the position information of the robotic arm keeps changing, the set of image acquisition devices to start can be updated periodically. Other update triggers are also possible; for example, in an implementation described later, the update can be performed when the Hough line detection algorithm finds more than 2 Hough lines of the nozzle tip and printing has not ended. Of course, in other scenarios there can be further triggers, set according to actual needs.
The image acquisition devices can usually be industrial cameras, and the specific type of robotic arm can also be chosen as required; for example, in Figure 2 it is a six-axis robotic arm, with 4 industrial cameras arranged around it and the printing platform set in the middle of the 4 cameras.
Step S102: optimize, with a preset pre-optimization algorithm, the quality of each frame captured by each currently started image acquisition device.
In the solution of the present application, image quality is optimized with a preset pre-optimization algorithm before the nozzle area is recognized, so that the subsequently recognized three-dimensional coordinates of the nozzle tip are more accurate, which further improves the accuracy of the 3D printing of the present application.
The specific type of pre-optimization algorithm can be set according to actual needs. For example, in a specific embodiment of the present invention, step S102 may specifically include:
Step 1: for any frame whose quality is to be optimized, convert the frame into an HSV image with a color conversion function;
Step 2: for any pixel in the HSV image converted from the frame, remove the pixel when it does not fall within the preset first range, and smooth the resulting image;
Step 3: extract contours from the smoothed image to obtain an image whose quality has been optimized by the first pre-optimization algorithm based on color segmentation.
In this embodiment, the image is pre-optimized with the first pre-optimization algorithm based on color segmentation. First, the input color image is converted to the HSV (Hue, Saturation, Value) color space with a color conversion function to obtain the HSV image. Then, each pixel of the HSV image is checked against the preset first range. Note that the first range is formed from the respective preset ranges of H, S, and V.
For example, in one specific scenario the preset ranges of H, S, and V are as shown in Table 1. That scenario considers that the nozzle color may be one of five colors: red, purple, green, cyan, and blue; the preset ranges of H, S, and V are set based on these 5 colors and together form the preset first range. That is, for any pixel, when its H, S, and V values fall within the ranges of Table 1, the pixel is kept; otherwise it is removed.
Table 1:
(The H, S, V ranges of Table 1 appear only as an image, Figure PCTCN2021093520-appb-000001, in the original document.)
Afterwards, the resulting image is smoothed; for example, median filtering can be chosen to remove single-point noise. Finally, contours are extracted from the smoothed image, yielding an image whose quality has been optimized by the first pre-optimization algorithm based on color segmentation. Extracting contours from the smoothed image means drawing the circumscribed rectangle of each independent object and removing targets inside irrelevant rectangles by aspect ratio and area.
In this embodiment, image quality is optimized by the first pre-optimization algorithm based on color segmentation, and only the printed spray remains in the optimized image. Moreover, this algorithm is simple and convenient to run, which helps reduce the execution time of the solution.
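As a concrete illustration of this color-segmentation pre-optimization, the following Python/OpenCV sketch converts a frame to HSV, keeps only the pixels inside preset ranges, median-filters the result, and filters contours by aspect ratio and area. The HSV range and the aspect-ratio/area thresholds shown are hypothetical placeholders, since Table 1 is available only as an image in the original.

```python
import cv2
import numpy as np

def color_segment(frame_bgr, hsv_ranges):
    """First pre-optimization: keep pixels inside the preset HSV ranges, smooth, extract contours."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = np.zeros(hsv.shape[:2], np.uint8)
    for lo, hi in hsv_ranges:                        # one (lower, upper) pair per nozzle color
        mask |= cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    kept = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    kept = cv2.medianBlur(kept, 5)                   # median filtering removes single-point noise
    gray = cv2.cvtColor(kept, cv2.COLOR_BGR2GRAY)
    contours, _ = cv2.findContours((gray > 0).astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:                               # circumscribed rectangle of each object
        x, y, w, h = cv2.boundingRect(c)
        if 0.2 < w / max(h, 1) < 5.0 and w * h > 100:  # illustrative aspect-ratio / area filter
            boxes.append((x, y, w, h))
    return kept, boxes

# hypothetical HSV range for a red nozzle (the real ranges are in Table 1 of the original)
ranges = [((0, 80, 80), (10, 255, 255))]
```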
Further, in a specific embodiment of the present invention, it is considered that with color-segmentation pre-optimization the resulting image may in some scenarios have insufficient brightness, which is detrimental to accurately determining the three-dimensional coordinates of the nozzle tip afterwards. Therefore, in this embodiment, the preset pre-optimization algorithms include the first pre-optimization algorithm based on color segmentation and the second pre-optimization algorithm based on fast multi-exposure fusion, with the first as the default; that is, every 3D printing run starts with the first pre-optimization algorithm by default.
In this embodiment, the 3D printing method further includes:
updating the average brightness of the quality-optimized frames obtained so far in the current 3D printing run;
when the average brightness is judged to be below the preset brightness threshold, turning off the first pre-optimization algorithm based on color segmentation for the current run and turning on the second pre-optimization algorithm based on fast multi-exposure fusion.
In this embodiment, after 3D printing starts, the brightness of each quality-optimized frame is computed continuously and the brightness average is updated. If the updated average falls below the preset brightness threshold, the obtained images are too dark, so the second pre-optimization algorithm based on fast multi-exposure fusion is turned on.
Optimizing the quality of each frame captured by each currently started image acquisition device with the second pre-optimization algorithm based on fast multi-exposure fusion may specifically include the following steps:
Step 1: for any frame whose quality is to be optimized, determine the frame's color weight map, and, after converting the frame into a grayscale image, obtain the frame's local contrast weight and exposure weight map;
Step 2: multiply the exposure weight map by the color weight map and normalize, multiply the normalized result by the local contrast weight, and filter to obtain the frame's fusion weight;
Step 3: fuse the frame based on the fusion weight to obtain an image whose quality has been optimized by the second pre-optimization algorithm based on fast multi-exposure fusion.
Specifically, the input image can first be converted into a grayscale image. In addition, adjacent frames can be given different degrees of initial correction by gamma correction, together with high- and low-pass filtering; the maximum brightness value of each pixel across these images can then be taken as the local contrast weight.
For the exposure weight map, a discriminant can be applied to the grayscale image: for example, with a brightness threshold of 30, the interval [30, 255-30] is considered a reasonable brightness interval and is set to 1, and the rest is set to 0, giving the exposure weight map.
Histogram equalization is applied to the input image and then median filtering to obtain the initial color weight map; dilation and erosion operations then give the final color weight map;
afterwards, the exposure weight map and the color weight map are multiplied and the result is normalized; the normalized result is then multiplied by the local contrast weight to obtain the initial fusion weight, and recursive filtering of the initial fusion weight gives the final fusion weight;
finally, the input frames are fused according to the final fusion weight, completing image quality optimization by the second pre-optimization algorithm based on fast multi-exposure fusion.
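A minimal sketch of assembling the per-frame fusion weight described above, assuming the brightness threshold of 30 from the text; the gamma values used for the contrast variants are illustrative, and Gaussian smoothing stands in here for the recursive filter named in the text.

```python
import cv2
import numpy as np

def fusion_weight(frame_bgr, low=30):
    """Second pre-optimization: fusion weight from exposure, color and local-contrast cues."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # exposure weight: 1 inside the reasonable brightness interval [low, 255 - low], else 0
    exposure = ((gray >= low) & (gray <= 255 - low)).astype(np.float32)
    # color weight: histogram equalization, median filtering, then dilation and erosion
    color = cv2.medianBlur(cv2.equalizeHist(gray), 5).astype(np.float32) / 255.0
    color = cv2.erode(cv2.dilate(color, None), None)
    # local contrast weight: per-pixel maximum over gamma-corrected variants of the frame
    variants = [np.clip(255.0 * (gray / 255.0) ** g, 0, 255) for g in (0.5, 1.0, 2.0)]
    contrast = np.max(np.stack(variants), axis=0).astype(np.float32) / 255.0
    w = exposure * color
    w /= w.max() + 1e-6                              # normalization
    w *= contrast                                    # initial fusion weight
    return cv2.GaussianBlur(w, (0, 0), 3)            # stand-in for the recursive filtering step
```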
Step S103: for the quality-optimized frames captured by any one image acquisition device, recognize the nozzle area of the first frame with the trained target recognition algorithm, and track the nozzle area in subsequent frames with the target tracking algorithm.
The target recognition algorithm needs to be able to recognize the nozzle area in an image. Given the wide use of convolutional neural networks, it can specifically be a target recognition algorithm based on CNN (Convolutional Neural Networks).
The target recognition algorithm needs to be trained in advance. Taking the CNN-based algorithm as an example: several relatively complex printing models can be input through the computer beforehand, for example at least three, choosing models with varied surfaces and of many types, ideally covering as many of the surface forms occurring during printing as possible. After the input, the printer is run empty, i.e., without air supply and without adding printing material. The cameras set up around the printer separately capture video of the moving print nozzle, saved as individual frames, until printing ends. The nozzle area in each frame is then labeled as a training sample. Finally, training with the constructed CNN training network yields the final result, i.e., the trained target recognition algorithm.
In the solution of the present application, for the quality-optimized frames captured by any one image acquisition device, the trained target recognition algorithm can recognize the nozzle area of the first frame, after which the target tracking algorithm tracks the nozzle area in subsequent frames. This is more efficient than having the target recognition algorithm recognize the nozzle area in every frame.
The specific type of target tracking algorithm can also be chosen as needed; for example, in one specific scenario it can be a target tracking algorithm based on KCF (Kernel Correlation Filter). The flow of this method is: for the next frame, it first extracts HOG features from multiple regions around the selected ROI, i.e., around the selected nozzle area, and then solves with a circulant matrix to obtain the ROI selected in the next frame. The method first proposes an efficient alternative for finding the solution of an objective function defined on a weighted L2 norm, decomposing the objective function along each spatial dimension and solving the matrix with a fast one-dimensional solver; it is then extended to the more general case by solving an objective function defined on a weighted Lr norm (0 < r < 2), or by using aggregated data terms that cannot be realized in existing EP filters.
In practice, in frame I_t, samples can be taken near the current position P_t to train a regressor that can compute the response of a small sampled window. Then, in frame I_{t+1}, samples are taken near the previous position P_t and the regressor judges each sample's response. Finally, the sample with the strongest response is taken as the position P_{t+1} in the current frame, realizing tracking and positioning of the nozzle area.
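A usage sketch of KCF tracking with OpenCV (the tracker ships with opencv-contrib-python, and its location differs between OpenCV versions, which the sketch accounts for). The video path and initial bounding box are hypothetical; in the pipeline described here, the box would come from the CNN detector on the first frame.

```python
import cv2

cap = cv2.VideoCapture("print_head.avi")     # hypothetical recording of the nozzle
ok, frame = cap.read()
nozzle_bbox = (100, 80, 60, 60)              # (x, y, w, h) from the CNN detector, first frame

# depending on the OpenCV build, KCF lives in cv2 or cv2.legacy
create_kcf = getattr(cv2, "TrackerKCF_create", None) or cv2.legacy.TrackerKCF_create
tracker = create_kcf()
tracker.init(frame, nozzle_bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)      # HOG features + circulant-matrix regression
    if not found:
        pass                                 # re-run the CNN detector to re-acquire the nozzle
```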
Step S104: determine the position information of the nozzle tip in the nozzle area of each frame, and determine the three-dimensional coordinates of the nozzle tip at a given moment from the position information of the frames captured at that moment.
Once the nozzle area of each frame has been determined, the position information of the nozzle tip in that area can be determined; the position information of the frames captured at the same moment then yields the three-dimensional coordinates of the nozzle tip at that moment. For example, in one scenario, the parallax principle of binocular vision can be used to determine the three-dimensional coordinate point of the nozzle tip.
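A sketch of the binocular-parallax step, assuming the two started cameras have been stereo-calibrated: given their 3x4 projection matrices and the nozzle-tip pixel coordinates detected in both views at the same moment, triangulation yields the tip's 3D coordinate.

```python
import cv2
import numpy as np

def nozzle_tip_3d(P1, P2, uv1, uv2):
    """Triangulate the nozzle-tip coordinate from one synchronized binocular pair.

    P1, P2: 3x4 projection matrices from stereo calibration
    uv1, uv2: (u, v) pixel coordinates of the tip in the left and right images
    """
    pts1 = np.asarray(uv1, np.float32).reshape(2, 1)
    pts2 = np.asarray(uv2, np.float32).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
    return (X[:3] / X[3]).ravel()                   # (x, y, z) of the nozzle tip
```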
Further, a specific embodiment of the present invention may also include:
before step S104 is performed, smoothing and optimizing each frame with the adaptive boundary constraint method and the least-squares method.
In this embodiment, before the position information of the nozzle tip is determined, each frame is smoothed and optimized with the adaptive boundary constraint method and the least-squares method, which effectively keeps object edges intact while smoothing the remaining non-edge areas.
The adaptive boundary constraint can be expressed as a function (given only as an equation image, Figure PCTCN2021093520-appb-000002, in the original), where t_i(x) denotes the output image, I_c(x) denotes the input image, A_i denotes the mean of the largest 10% of values of a given color channel of the image, and the two remaining symbols (also equation images in the original) are the maximum and minimum pixel values of that channel's image. After optimization with the adaptive boundary constraint method, least-squares filtering is applied.
In a specific embodiment of the present invention, determining the position information of the nozzle tip in the nozzle area of each frame, as described in step S104, may specifically include:
Step 1: segment the nozzle-tip image out of the nozzle area of each frame with the K-means algorithm;
Step 2: obtain the edge image of each frame's nozzle-tip image with the Canny detection algorithm;
Step 3: determine the nozzle tip in the edge image of each frame's nozzle-tip image with the Hough line detection algorithm, and obtain the position information of the nozzle tip.
Specifically, the K-means algorithm can be used for classification. In practice, the selected nozzle area can be divided into 3 classes to distinguish the nozzle tip, the printing plate surface, and the printing material. The print-nozzle class is taken as the second class of the K-means classification; that is, the present application extracts only the second class and sets the remaining classes to white, which shields noise interference well.
Afterwards, the Canny detection algorithm can obtain the edge of the print-nozzle tip quite effectively, i.e., extract a fairly complete edge image; the edge points in that image serve as the data points for Hough line detection, which then yields the position information of the nozzle tip.
See Figures 3a to 3e for, in order, a schematic diagram of the tracked nozzle-area position, a K-means detection diagram, an edge image, a Hough line detection diagram, and a nozzle-tip positioning diagram in a specific embodiment.
In a specific embodiment of the present invention, considering that the edges are not very smooth and many lines are detected, experiments and theoretical analysis show that with the accumulator threshold set to 30 and the threshold between two lines set to 10, the lines on both sides of the nozzle can be extracted fairly accurately.
Therefore, in a specific embodiment of the present invention, the accumulator threshold of the Hough line detection algorithm is set to 30, the threshold between two lines is set to 10, and the number of fitted lines is capped at 3, which helps determine the specific location of the nozzle tip more accurately.
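A sketch of the K-means/Canny/Hough chain on one tracked nozzle ROI. The mapping of the stated parameters onto OpenCV arguments is an assumption (the accumulator threshold of 30 maps directly, while the 10-pixel line-separation test and the cap of 3 lines are applied to the returned candidates), as is which of the 3 clusters is the nozzle class.

```python
import cv2
import numpy as np

def nozzle_tip_lines(roi_bgr):
    """K-means into 3 classes, keep the nozzle class, Canny edges, then Hough lines (at most 3)."""
    data = roi_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(data, 3, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    seg = np.full_like(roi_bgr, 255)                 # paint non-nozzle classes white
    keep = labels.ravel() == 1                       # assumed: cluster 1 is the nozzle class
    seg.reshape(-1, 3)[keep] = roi_bgr.reshape(-1, 3)[keep]
    edges = cv2.Canny(cv2.cvtColor(seg, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 30,   # accumulator threshold 30
                            minLineLength=15, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines[:3]]
```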
In this embodiment, the above step 3 may specifically include:
when the Hough line detection algorithm finds 2 Hough lines of the nozzle tip, taking the intersection of the 2 lines as the coordinate point of the nozzle tip;
when the Hough line detection algorithm finds more than 2 Hough lines of the nozzle tip, judging from the lowest point of the nozzle's outer contour and the orientation of the end of the robotic arm whether printing has ended;
if so, treating the frame as an invalid image;
if not, of the 2 lines on the same side, selecting the one closer to the line on the other side, and taking its intersection with the line on the other side as the coordinate point of the nozzle tip.
In this embodiment, if the Hough line detection algorithm finds 2 Hough lines of the nozzle tip, their intersection is taken directly as the coordinate point of the nozzle tip, as shown in Figure 4a.
When more than 2 Hough lines of the nozzle tip are found, i.e., when there are 3, it is first judged whether printing has ended. There are several ways to judge this; for example, in one scenario, the position of the lowest point of the nozzle's outer contour can be determined and compared with the direction of the end of the robotic arm. If the comparison shows that the end of the arm points upward, printing has ended, so the nozzle endpoint can be set as an invalid point, i.e., the frame can be treated as an invalid image.
If the comparison shows that the end of the arm points downward, the arm is still working, i.e., printing has not ended; then, of the 2 lines on the same side, the one closer to the line on the other side is selected, and its intersection with that line is taken as the coordinate point of the nozzle tip. Specifically, the judgment can be based on the slopes of the lines: of the 2 lines on the same side, take the one with the smaller absolute slope, and use its intersection with the single line on the other side as the coordinate point of the nozzle tip, as shown in Figure 4b. If the slopes of the 2 lines on the same side both tend to infinity, as in Figure 4c, take the intersection of the inner line with the line on the other side as the coordinate point of the nozzle tip.
In addition, other kinds of abnormal cases occur in practice; for example, if all the lines found by the Hough line detection algorithm are on the same side, the average of the actually printed value and the position of the lowest point of the print nozzle's outer contour can be taken as the coordinate point of the nozzle tip.
It should also be noted that, in practice, if the determined coordinate point of the nozzle tip lies above or outside the determined Hough lines, the printing process is approaching a stop regardless of whether the printer is working; detection can be stopped immediately and the coordinate point deleted.
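A sketch of the intersection logic as just described: with two lines, take their intersection; with three, split them by which side of the ROI they fall on, keep the line on the two-line side with the smaller absolute slope, and intersect it with the opposite line. Lines are (x1, y1, x2, y2) tuples; the all-on-one-side and near-vertical special cases are only noted in comments.

```python
import numpy as np

def intersect(l1, l2):
    """Intersection of two lines given by segment endpoints (x1, y1, x2, y2)."""
    (x1, y1, x2, y2), (x3, y3, x4, y4) = l1, l2
    A = np.array([[x2 - x1, x3 - x4], [y2 - y1, y3 - y4]], float)
    b = np.array([x3 - x1, y3 - y1], float)
    t, _ = np.linalg.solve(A, b)                 # raises LinAlgError for parallel lines
    return x1 + t * (x2 - x1), y1 + t * (y2 - y1)

def abs_slope(l):
    dx = l[2] - l[0]
    return abs((l[3] - l[1]) / dx) if dx else float("inf")

def tip_from_lines(lines, roi_mid_x):
    if len(lines) == 2:
        return intersect(lines[0], lines[1])
    left = [l for l in lines if (l[0] + l[2]) / 2 < roi_mid_x]
    right = [l for l in lines if (l[0] + l[2]) / 2 >= roi_mid_x]
    if not left or not right:
        return None   # all on one side: average the printed value with the contour's lowest point
    pair, single = (left, right[0]) if len(left) == 2 else (right, left[0])
    chosen = min(pair, key=abs_slope)            # smaller |slope|: the line closer to the other side
    return intersect(chosen, single)             # for two near-vertical lines, take the inner one
```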
In addition, as described above, as the position information of the robotic arm keeps changing, some of the currently started image acquisition devices may become unable to capture the nozzle; that is, the set of devices to start may need updating. There are several possible triggers: for example, as described earlier, updating periodically, or updating when the three-dimensional coordinates of the nozzle tip cannot be detected. It can be understood that after the set of devices to start is updated, the trained target recognition algorithm must again recognize the nozzle area of the first frame.
Step S105: perform 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment.
The preceding steps described determining the three-dimensional coordinates of the nozzle tip at each moment; combined with the coordinate input values of the nozzle tip at each moment, 3D printing compensation can be performed.
The specific compensation scheme can be set as appropriate. For example, in a specific embodiment of the present invention, step S105 may specifically include:
for the determined three-dimensional coordinates of the nozzle tip at any moment, judging whether the error between those coordinates and the coordinate input value of the nozzle tip at that moment exceeds the preset error range;
if so, compensating the error at that moment in the 3D printing;
if not, ignoring the error at that moment.
This embodiment considers that if the error is small, no compensation is needed, which helps preserve printing efficiency and continuity; compensation is performed only when the error between the three-dimensional coordinates of the nozzle tip and the coordinate input value at that moment exceeds the preset error range, i.e., when the error is too large.
Moreover, in one specific scenario, the compensation can be performed by controlling the movement of the printing table, which helps preserve printing continuity; controlling the table's movement is also relatively convenient.
Applying the technical solutions provided by the embodiments of the present invention, 3D printing compensation is realized based on visual feedback. Specifically, the present application determines the position information of the nozzle tip in the nozzle area of each frame and, from the position information of the frames captured at the same moment, determines the three-dimensional coordinates of the nozzle tip at that moment; compensation can then be performed based on the determined three-dimensional coordinates of the nozzle tip at each moment and the coordinate input values of the nozzle tip at each moment. Because this compensation is performed, the solution of the present application effectively improves the accuracy of 3D printing. Moreover, the present application takes into account that, because of the motion of the robotic arm, a single image acquisition device may be occluded; therefore, after 3D printing starts, the detected position information of the robotic arm is used to start at least 2 of the N image acquisition devices that correspond to the position information for continuous shooting of the arm, ensuring that the print nozzle is captured well and occlusion is unlikely. Further, before the nozzle area is recognized, the present application optimizes the quality of each frame captured by each started image acquisition device with a preset pre-optimization algorithm, which helps ensure the accuracy of the subsequently recognized three-dimensional coordinates of the nozzle tip and thus further improves the accuracy of the 3D printing of the present application.
In one specific scenario, the robotic arm is specifically a six-axis robotic arm and the printing platform is specifically a four-axis linkage printing platform. The motion of the six-axis arm enables spray printing on complex, fine object surfaces; a multi-camera system tracks and positions the print-nozzle tip, and printing compensation with the four-axis linkage platform enables high-precision 3D printing on the surface of a bioprosthesis. The exterior of the 3D printing system can use an aluminum alloy frame, with relatively lightweight PC compression panels for the walls.
The six-axis robotic arm has six spatial degrees of freedom and high motion flexibility and can position precisely in complex curved-surface spaces. The print nozzle is installed at the end of the six-axis arm, which controls the nozzle's three-dimensional patterned printing on the bioprosthesis surface. The nozzle's discharge is regulated by an electronic pressure-regulating valve to ensure uniformity of discharge.
The printing platform is a four-axis linkage platform with four degrees of freedom of motion, including linear motion in the three directions X, Y, and Z and rotational motion about the Z axis, all of which can run simultaneously. The four-axis linkage platform consists of three linear modules and a high-precision turntable; through three-dimensional motion in space and rotation about the Z axis, it cooperates with the six-axis arm's control of the nozzle's motion and positioning to adjust the printing position and realize patterned printing of complex curved surfaces.
Depending on the 3D printing requirements, print-model preprocessing can follow two different flows. For additive (stacked) 3D printing, the model data flow is to slice the model and then plan a per-slice path, finally forming the complete printing path; for free-form-surface coating 3D printing, the flow is to first find the path points on the surface, then triangulate the path points and find each point's normal vector, compute the attitude parameters controlling the robotic arm from the normal vectors, and finally combine the path-point positions to form the control file of the free-form-surface spraying path.
For in-plane additive printing, the path points planned from the printing model can first be transformed into the world coordinates of the UR3, with the nozzle attitude fixed and the speed and acceleration set; the UR3 is then controlled with straight-line moves to realize planar printing.
In actual printing tests, to guarantee continuity and travel speed, identical points on the same straight line are generally simplified away. However, it was found during printing that the UR3 arm jitters when traveling fast, so the actual path of a straight-line move is not a straight line but an irregular, roughly sinusoidal curve, giving low printing accuracy. The arm itself controls its travel path by position and cannot feedback-correct its position before reaching the specified point, so the error is hard to eliminate. If an error is found on reaching the specified position, another command must be sent to move the arm back to the originally specified position; such a control flow would make the printing process discontinuous, severely affecting printing speed and the uniformity of the printed surface.
Therefore, under the premise of keeping printing continuous, this embodiment compensates by detecting the nozzle-tip position with binocular vision, computing the position error of the path point, and then moving the printing platform, overcoming the above problems.
In a specific scenario, the compensation process can be: set spraying path points about 1 mm apart in the XY plane; when the end of the robotic arm reaches a path point, send a signal to the vision system; after the vision system obtains the three-dimensional coordinates of the nozzle tip, compare them with the preset position, i.e., with the coordinate input value of the nozzle tip, and use the error between the two as the compensation value. The printing platform can then move back and forth according to the compensation value, i.e., move by half the compensation value and then return to its original position. The error compensation effect is shown in Figure 5: the leftmost image shows the uncompensated printing effect; after the compensation value is obtained, the present application can move the printing platform to the right by half the compensation value and then return it, i.e., move the platform to the left by half the compensation value; the rightmost image of Figure 5 shows the printing effect after compensation.
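A minimal sketch of this compensation rule under stated assumptions: the tolerance value and the table.move_relative API are hypothetical stand-ins for the real platform controller, and the sign convention (shifting opposite to the measured error) is one plausible reading of "move by half the compensation value, then return".

```python
def compensate(measured_xyz, target_xyz, table, tol=0.05):
    """Visual-feedback compensation for one path point.

    measured_xyz: nozzle-tip coordinate triangulated by the vision system
    target_xyz:   the coordinate input value (preset position) at this moment
    table:        platform controller (hypothetical move_relative(dx, dy, dz) API)
    tol:          preset error range, assumed here to be 0.05 mm
    """
    error = [m - t for m, t in zip(measured_xyz, target_xyz)]
    if max(abs(e) for e in error) <= tol:
        return                                          # small errors are ignored
    half = [e / 2.0 for e in error]
    table.move_relative(-half[0], -half[1], -half[2])   # move by half the compensation value
    table.move_relative(+half[0], +half[1], +half[2])   # then return to the original position
```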
In addition, printing experiments show that lowering the arm's running speed, i.e., lowering the printing speed, also greatly reduces arm jitter and thus improves printing accuracy. At the same time, another key factor affecting printing accuracy is the material extrusion speed, which is adjusted by controlling the air pressure and must be coordinated with the printing speed.
Samples printed directly with a zigzag path have poor planar uniformity, with bulges especially at corners. To optimize the planar uniformity of printing, in a specific embodiment of the present invention the robotic arm can be controlled to move along a hybrid path. The hybrid path shown in Figure 6a improves edge accuracy compared with the zigzag path, but printing material accumulates at edge inflection points. Therefore, further, as shown in Figure 6b, control points can be added on the edge path; raising the running speed of the turning path after a control point and reducing the material extrusion pressure can effectively reduce material accumulation at the inflection points.
Controlling the position and attitude of the end of the robotic arm is the core key technology of 3D printing. For an arbitrary free-form surface, a line laser scanner can first acquire the point cloud data of the surface; the path points controlling the arm's printing are generated by fitting according to the required printing-point spacing; the points are then triangulated, the normal vector of each triangle vertex is computed, the arm's attitude control parameters are computed from the normal vectors, and the arm control vectors are generated by combining the position parameters.
The surface model of Figure 7a is used to illustrate the data processing of free-form-surface spraying. The point cloud scanned by the line laser sensor is shown in Figure 7b. The scanned point cloud shows faults (steps) along the height of the surface, which strongly affect spraying accuracy, so the surface point cloud must be fitted according to the error situation.
Considering printing spacing and accuracy, the sensor's X-axis point cloud scanning interval is set to 0.3 mm and the Y-axis interval to 1 mm. To simplify the reconstruction algorithm, two-dimensional point fitting can be used instead of three-dimensional surface reconstruction: first fit the X-axis cross-sections of the point cloud model, then the Y-axis cross-sections, with least squares as the fitting method. After each group of points is fitted, they are merged into a new three-dimensional model. When the model surface is complex, different point sets can use different fitting functions, which guarantees higher accuracy of the final surface model. Taking one point set as a graphical example, the second-order function fit is closest to the original model; as shown in Figure 8a, the original points, the second-order fit, and the fourth-order fit can be seen. The original and fitted point cloud models are shown in Figure 8b; fitting eliminates the staircase-effect error.
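A sketch of the cross-section fitting, assuming the cloud has already been grouped into X-axis cross-sections: each section's (y, z) samples are fitted with a least-squares polynomial (second order here, the fit the text finds closest to the original model) and the cloud is rebuilt from the fitted values.

```python
import numpy as np

def fit_cross_sections(sections, order=2):
    """Least-squares fit of each cross-section, then merge into a new point cloud.

    sections: dict mapping an x value to an (N, 2) array of (y, z) samples in that section
    """
    fitted = []
    for x, yz in sections.items():
        coeffs = np.polyfit(yz[:, 0], yz[:, 1], order)   # least-squares polynomial z = f(y)
        z_fit = np.polyval(coeffs, yz[:, 0])
        fitted.append(np.column_stack([np.full(len(yz), x), yz[:, 0], z_fit]))
    return np.vstack(fitted)                             # merged, step-free model
```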
Next, the normal vector of each point in the point cloud model must be computed as the attitude control parameter for when the arm sprays that point. The surface is reconstructed by triangulation, and each point's normal vector is obtained by computing vectors to neighboring points and taking cross products. The results are shown in Figure 9.
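A sketch of the per-vertex normal computation after triangulation: every triangle contributes the cross product of two of its edges, and the accumulated vectors are normalized. Consistent triangle winding is assumed so the normals all point to the same side of the surface.

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Per-vertex normals from a triangulated surface (vertices: (V, 3); triangles: (T, 3) indices)."""
    normals = np.zeros_like(vertices, dtype=float)
    for i, j, k in triangles:
        n = np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])  # face normal
        normals[[i, j, k]] += n                  # accumulate at each vertex of the triangle
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.clip(lengths, 1e-12, None)
```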
After the normal vector of each point is obtained, the attitude control parameters of the UR3 arm must be computed from it. The process is to first compute the corresponding Euler angles (roll, pitch, yaw) from the spatial normal vector, and then convert them into the rotation vector Rx, Ry, Rz that controls the arm's attitude. Combined with the control positions x, y, z of the path points, the arm's 6-dimensional control vector (x, y, z, Rx, Ry, Rz) is obtained.
The spatial normal vector [1 2 3] is used as an example of the attitude-parameter computation, as shown in Figure 10. When the attitude parameters are [0 0 0], the UR3 end attitude is the vector [0 0 1]. When the Euler angles are described in an XYZ fixed-angle frame, the roll angle of the vector [1 2 3] is the angle 0.588 (in radians) between the vector [0 2 3] and the vector [0 0 1], with negative direction; its pitch angle is the angle 0.322 between the vector [1 0 3] and the vector [0 0 1], with positive direction; and its yaw angle is 0.
Given the Euler angles γ, β, α, the rotation matrix is (reconstructed here in its standard XYZ fixed-angle form, R = Rz(α)·Ry(β)·Rx(γ); the original gives the formulas only as equation images):

R =
[ cosα·cosβ    cosα·sinβ·sinγ - sinα·cosγ    cosα·sinβ·cosγ + sinα·sinγ ]
[ sinα·cosβ    sinα·sinβ·sinγ + cosα·cosγ    sinα·sinβ·cosγ - cosα·sinγ ]
[ -sinβ        cosβ·sinγ                     cosβ·cosγ                  ]

From the rotation matrix, the angle θ and the axis components k_x, k_y, k_z are computed (the standard axis-angle extraction, with r_ij denoting the entries of R):

θ = arccos((r_11 + r_22 + r_33 - 1) / 2)
k_x = (r_32 - r_23) / (2·sinθ)
k_y = (r_13 - r_31) / (2·sinθ)
k_z = (r_21 - r_12) / (2·sinθ)

The rotation vector is then: [Rx Ry Rz]^T = [k_x·θ  k_y·θ  k_z·θ]^T.
For the spatial normal vector [1 2 3], with γ = 0.588, β = 0.2705, α = 0:
[Rx Ry Rz]^T = [0.584  0.2626  -0.0795]^T
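The computation above can be checked with a short Python sketch that builds the XYZ fixed-angle rotation matrix and extracts the rotation vector; running it with γ = 0.588, β = 0.2705, α = 0 reproduces the [0.584, 0.2626, -0.0795] of the worked example.

```python
import numpy as np

def rotation_matrix(gamma, beta, alpha):
    """R = Rz(alpha) @ Ry(beta) @ Rx(gamma) for XYZ fixed-angle (roll-pitch-yaw) Euler angles."""
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rotation_vector(R):
    """Axis-angle extraction: theta from the trace, the axis from the skew-symmetric part."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    k = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * k

print(rotation_vector(rotation_matrix(0.588, 0.2705, 0.0)))
# -> approximately [ 0.584   0.2626  -0.0795]
```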
Following the above computation, the 6-dimensional control vectors of all path points of the free-form surface can be obtained, realizing printing control.
Corresponding to the above embodiments of the 3D printing method, an embodiment of the present invention further provides a 3D printing device, which can be cross-referenced with the above. The 3D printing device may include:
a 3D printing device main body; and
a controller configured to execute a computer program to implement the steps of the 3D printing method described in any of the above embodiments. The main body of the 3D printing device refers to the parts of the device other than the controller; its specific composition can be set and adjusted according to the 3D printing device used in the actual application.
还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Specific examples have been used herein to explain the principles and implementations of the present invention; the above descriptions of the embodiments are only intended to help understand the technical solution and core idea of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

  1. A 3D printing method, characterized by comprising:
    after receiving an input print model and starting 3D printing, activating, according to detected position information of a robotic arm, at least 2 image acquisition devices corresponding to the position information among N image acquisition devices to continuously photograph the robotic arm, wherein N is a positive integer not less than 2;
    optimizing, by a preset pre-optimization algorithm, the quality of each frame of image acquired by each currently activated image acquisition device;
    for the quality-optimized frames acquired by any one image acquisition device, identifying the nozzle region of the first frame with a trained target-recognition algorithm, and tracking the nozzle region in subsequent frames with a target-tracking algorithm;
    determining position information of the nozzle tip in the nozzle region of each frame, and determining, from the position information of the frames acquired at the same moment, the three-dimensional coordinates of the nozzle tip at that moment; and
    performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the input coordinate values of the nozzle tip at each moment.
  2. The 3D printing method according to claim 1, wherein optimizing, by the preset pre-optimization algorithm, the quality of each frame of image acquired by each currently activated image acquisition device comprises:
    for any frame to be quality-optimized, converting the frame into an HSV image by a color-conversion function;
    for any pixel of the HSV image converted from the frame, removing the pixel when it does not fall within a preset first range, and smoothing the resulting image; and
    extracting contours from the smoothed image, to obtain an image quality-optimized by a first pre-optimization algorithm based on color segmentation.
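A minimal OpenCV sketch of the color-segmentation pre-optimization described in claim 2; the HSV range below is an arbitrary placeholder for the "preset first range", and the smoothing and contour settings are illustrative choices rather than the patent's values.

```python
import cv2
import numpy as np

def color_segment(frame_bgr, lower=(0, 60, 60), upper=(30, 255, 255)):
    """HSV conversion -> in-range mask -> smoothing -> contour extraction."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv,
                       np.array(lower, np.uint8),
                       np.array(upper, np.uint8))   # drop out-of-range pixels
    mask = cv2.GaussianBlur(mask, (5, 5), 0)        # smoothing
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours
```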
  3. The 3D printing method according to claim 2, wherein the preset pre-optimization algorithms comprise the first pre-optimization algorithm based on color segmentation and a second pre-optimization algorithm based on fast multi-exposure fusion, the first pre-optimization algorithm based on color segmentation being the default pre-optimization algorithm;
    the 3D printing method further comprising:
    updating the average brightness of the quality-optimized frames already obtained in the current 3D printing process; and
    when the average brightness is determined to be below a preset brightness threshold, disabling the first pre-optimization algorithm based on color segmentation for the current 3D printing process, and enabling the second pre-optimization algorithm based on fast multi-exposure fusion;
    correspondingly, optimizing, by the second pre-optimization algorithm based on fast multi-exposure fusion, the quality of each frame of image acquired by each currently activated image acquisition device comprises:
    for any frame to be quality-optimized, determining the color weight map of the frame, and, after converting the frame into a grayscale image, obtaining the local-contrast weight and the exposure weight map of the frame;
    multiplying the exposure weight map by the color weight map and normalizing, multiplying the normalized result by the local-contrast weight, and filtering, to obtain the fusion weight of the frame; and
    fusing the frame to be quality-optimized based on the fusion weight, to obtain an image quality-optimized by the second pre-optimization algorithm based on fast multi-exposure fusion.
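A minimal sketch of the weight combination in claim 3's second pre-optimization algorithm; the individual weight definitions (a saturation-style color weight, a Gaussian exposure weight, a Laplacian local contrast) are common choices assumed here, not the patent's exact formulas.

```python
import cv2
import numpy as np

def fusion_weight(frame_bgr, sigma=0.2):
    """Exposure weight x color weight, normalized, times local contrast,
    then filtered, as the claim's combination order describes."""
    img = frame_bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    color_w = img.std(axis=2)                           # color weight map
    exposure_w = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))
    contrast_w = np.abs(cv2.Laplacian(gray, cv2.CV_32F))  # local contrast
    w = exposure_w * color_w
    w = w / (w.max() + 1e-8)                            # normalization
    w = w * contrast_w
    return cv2.GaussianBlur(w, (7, 7), 0)               # filtering
```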
  4. The 3D printing method according to claim 1, wherein determining the position information of the nozzle tip in the nozzle region of each frame comprises:
    segmenting the nozzle-tip image from the nozzle region of each frame by a K-means algorithm;
    obtaining the edge image of the nozzle-tip image of each frame by a Canny detection algorithm; and
    determining the nozzle tip in the edge image of the nozzle-tip image of each frame by a Hough line-detection algorithm, and obtaining the position information of the nozzle tip.
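A minimal OpenCV sketch of the Canny-plus-Hough step in claim 4; the Canny thresholds are assumed values, the Hough accumulator threshold of 30 follows claim 5, and the two-line intersection follows claim 5's first case.

```python
import cv2
import numpy as np

def nozzle_tip_point(tip_gray):
    """Canny edges -> Hough lines (accumulator threshold 30) -> the
    intersection of the first two lines as the nozzle-tip point.
    tip_gray is an 8-bit grayscale crop of the segmented tip region."""
    edges = cv2.Canny(tip_gray, 50, 150)            # Canny thresholds assumed
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 30)
    if lines is None or len(lines) < 2:
        return None
    (r1, t1), (r2, t2) = lines[0][0], lines[1][0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:                # near-parallel: no unique tip
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))   # line intersection
    return float(x), float(y)
```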
  5. The 3D printing method according to claim 4, wherein the accumulator threshold of the Hough line-detection algorithm is set to 30, the threshold between two lines is set to 10, and the number of fitted lines is set to at most 3;
    correspondingly, determining the nozzle tip in the nozzle-tip image of each frame by the Hough line-detection algorithm comprises:
    when 2 Hough lines of the nozzle tip are determined by the Hough line-detection algorithm, taking the intersection of the 2 Hough lines as the coordinate point of the nozzle tip;
    when more than 2 Hough lines of the nozzle tip are determined by the Hough line-detection algorithm, judging, from the lowest point of the nozzle's outer contour and the orientation of the robotic-arm end, whether printing has finished;
    if so, treating the frame as an invalid image; and
    if not, selecting, of the 2 lines on the same side, the one closer to the line on the other side, and taking its intersection with the line on the other side as the coordinate point of the nozzle tip.
  6. The 3D printing method according to claim 1, wherein performing 3D printing compensation based on the determined three-dimensional coordinates of the nozzle tip at each moment and the input coordinate values of the nozzle tip at each moment comprises:
    for the determined three-dimensional coordinates of the nozzle tip at any moment, judging whether the error between the three-dimensional coordinates of the nozzle tip at that moment and the input coordinate values of the nozzle tip at that moment exceeds a preset error range;
    if so, performing 3D printing compensation for the error at that moment; and
    if not, ignoring the error at that moment.
  7. The 3D printing method according to claim 6, wherein performing 3D printing compensation for the error at that moment comprises:
    performing 3D printing compensation for the error at that moment by controlling movement of the printing table.
  8. The 3D printing method according to claim 1, wherein the trained target-recognition algorithm is a CNN-based target-recognition algorithm.
  9. The 3D printing method according to any one of claims 1 to 8, further comprising:
    before determining the position information of the nozzle tip in the nozzle region of each frame, smoothing and optimizing each frame by an adaptive boundary-limiting method and the least-squares method.
  10. A 3D printing apparatus, characterized by comprising:
    a 3D printing apparatus body; and
    a controller configured to execute a computer program to implement the steps of the 3D printing method according to any one of claims 1 to 9.
PCT/CN2021/093520 2020-05-13 2021-05-13 3D printing method and device WO2021228181A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CNPCT/CN2020/090093 2020-05-13
PCT/CN2020/090093 WO2021226891A1 (zh) 2020-05-13 2020-05-13 3D printing device and method based on multi-axis linkage control and machine-vision feedback measurement
CN202010403841 2020-05-13
CN202010403841.2 2020-05-13

Publications (1)

Publication Number Publication Date
WO2021228181A1 true WO2021228181A1 (zh) 2021-11-18

Family

ID=78525350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093520 WO2021228181A1 (zh) 2020-05-13 2021-05-13 一种3d打印方法和装置

Country Status (2)

Country Link
CN (1) CN113674299A (zh)
WO (1) WO2021228181A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114474732A * 2022-01-28 2022-05-13 Shanghai Union Technology Co., Ltd. Data processing method and system, 3D printing method, device, and storage medium
CN114454177A * 2022-03-15 2022-05-10 Zhejiang University of Technology Robot end-position compensation method based on binocular stereo vision
CN115097785B * 2022-06-29 2023-05-09 Xidian University Position-sampled trigger method for five-axis linked curved-surface inkjet printing
CN116405607B * 2023-06-06 2023-08-29 Shenzhen Jiexinhua Technology Co., Ltd. Intelligent audio-image interaction method for 3D printers
CN117400539B * 2023-12-15 2024-03-01 Beijing Normal University 3D printing control system dedicated to information-technology education


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228598B * 2016-07-25 2018-11-13 Beijing University of Technology Model-adaptive illumination homogenization method for surface-exposure 3D printing
CN109130167A * 2018-07-11 2019-01-04 Quanzhou Institute of Equipment Manufacturing Correlation-filter-based 3D-printing nozzle-tip tracking method
CN110414403A * 2019-07-22 2019-11-05 Guangdong University of Technology Machine-vision-based 3D printing process monitoring method

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074017A * 2009-11-23 2011-05-25 Beijing University of Technology Method and device for detecting and tracking the center point of a barbell
CN102063712A * 2010-11-04 2011-05-18 Beijing Institute of Technology Multi-exposure image fusion method based on sub-band structure
CN105818376A * 2015-01-23 2016-08-03 Xerox Corporation System and method for identifying and controlling z-axis printhead position in a three-dimensional object printer
CN104943176A * 2015-06-23 2015-09-30 Nanjing University of Information Science and Technology 3D printer based on image-recognition technology and printing method thereof
CN106264796A * 2016-10-19 2017-01-04 Quanzhou Institute of Equipment Manufacturing 3D printing system based on multi-axis linkage control and machine-vision measurement
US20180297114A1 (en) * 2017-04-14 2018-10-18 Desktop Metal, Inc. Printed object correction via computer vision
WO2018199884A1 (en) * 2017-04-24 2018-11-01 Hewlett-Packard Development Company, L.P. Determining print orders
US20200061912A1 (en) * 2017-07-27 2020-02-27 Xerox Corporation Method for alignment of a multi-nozzle extruder in three-dimensional object printers
CN109080144A * 2018-07-10 2018-12-25 Quanzhou Institute of Equipment Manufacturing Real-time 3D-printing nozzle-tip tracking and positioning method based on center-point judgment
CN109177175A * 2018-07-10 2019-01-11 Quanzhou Institute of Equipment Manufacturing Real-time 3D-printing nozzle-tip tracking and positioning method
CN109080145A * 2018-07-11 2018-12-25 Quanzhou Institute of Equipment Manufacturing Vision-based nozzle-tip feedback control method and system for 3D printing
CN109080146A * 2018-07-28 2018-12-25 Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences Classification-based real-time extraction method for the 3D-printing nozzle-tip contour

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114778158A * 2022-04-13 2022-07-22 Qingdao Boruike 3D Manufacturing Co., Ltd. Self-inspection system and method for a 3D printing device
CN114778158B * 2022-04-13 2023-03-31 Qingdao Boruike Additive Manufacturing Co., Ltd. Self-inspection system and method for a 3D printing device
CN115122331A * 2022-07-04 2022-09-30 CISDI Engineering Co., Ltd. Workpiece grasping method and device
CN116454583A * 2023-06-20 2023-07-18 Yangzhou Yinan Technology Co., Ltd. Production control method for base-station filter housings
CN116454583B * 2023-06-20 2023-09-05 Yangzhou Yinan Technology Co., Ltd. Production control method for base-station filter housings

Also Published As

Publication number Publication date
CN113674299A (zh) 2021-11-19

Similar Documents

Publication Publication Date Title
WO2021228181A1 (zh) 3D printing method and device
WO2021226891A1 (zh) 3D printing device and method based on multi-axis linkage control and machine-vision feedback measurement
CN107767423B Robotic-arm target positioning and grasping method based on binocular vision
CN107798330A Weld-seam image feature-information extraction method
CN111604598A Tool-setting method for a robotic-arm feed-type laser etching system
CN107239748A Robot target recognition and positioning method based on checkerboard calibration
CN114219842B Visual recognition, ranging and positioning method for automated port container handling
CN110930368B Real-time welding-image feature extraction method for thin-plate lap welds
CN103544714A Visual tracking system and method based on a high-speed image sensor
CN112947526B Autonomous landing method and system for an unmanned aerial vehicle
CN110796700A Multi-object grasping-region localization method based on a convolutional neural network
CN113103215A Motion-control method for robot-vision on-the-fly image capture
CN111709365A Automatic human motion-pose detection method based on a convolutional neural network
CN108161930A Vision-based robot positioning system and method
CN113867373A UAV landing method and device, landing pad, and electronic equipment
CN108628267B Separated, distributed control method for an object-space scanning imaging system
CN104268551B Steering-angle control method based on visual feature points
CN109886173B Vision-based profile-face pose estimation method and emotion-aware autonomous service robot
CN115147488A Workpiece pose-estimation method and grasping system based on dense prediction
CN103949054A Infrared light-gun positioning method and system
CN113554713A Visual positioning and inspection method for hole-making on aircraft skin by a mobile robot
Fucen et al. The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle
CN112347830A Factory epidemic-prevention management method and system
CN116476074A Remote robotic-arm operation system and human-machine interaction method based on mixed reality
CN107423766B End-motion pose detection method for a hybrid series-parallel automotive electrophoretic-coating conveyor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803755

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21803755

Country of ref document: EP

Kind code of ref document: A1