CN112743851A - Photocuring 3D printing method, 3D printer, computer device and medium - Google Patents


Info

Publication number
CN112743851A
CN112743851A (application CN202011596912.1A)
Authority
CN
China
Prior art keywords
gray
line segment
photocuring
pixel points
printing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011596912.1A
Other languages
Chinese (zh)
Inventor
刘辉林
唐京科
陈春
敖丹军
刘洪�
贺淼
Current Assignee
Shenzhen Chuangxiang 3D Technology Co Ltd
Original Assignee
Shenzhen Chuangxiang 3D Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Chuangxiang 3D Technology Co Ltd filed Critical Shenzhen Chuangxiang 3D Technology Co Ltd
Priority to CN202011596912.1A priority Critical patent/CN112743851A/en
Publication of CN112743851A publication Critical patent/CN112743851A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
        • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
            • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
                • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
                    • B29C64/10 Processes of additive manufacturing
                        • B29C64/106 Processes of additive manufacturing using only liquids or viscous materials, e.g. depositing a continuous bead of viscous material
                            • B29C64/124 Processes of additive manufacturing using layers of liquid which are selectively solidified
                    • B29C64/20 Apparatus for additive manufacturing; Details thereof or accessories therefor
                    • B29C64/30 Auxiliary operations or equipment
                        • B29C64/386 Data acquisition or data processing for additive manufacturing
                            • B29C64/393 Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
        • B33 ADDITIVE MANUFACTURING TECHNOLOGY
            • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
                • B33Y30/00 Apparatus for additive manufacturing; Details thereof or accessories therefor
                • B33Y50/00 Data acquisition or data processing for additive manufacturing
                    • B33Y50/02 Data acquisition or data processing for additive manufacturing for controlling or regulating additive manufacturing processes
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T7/00 Image analysis
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
                    • G06T7/60 Analysis of geometric attributes
                        • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
                    • G06T7/90 Determination of colour characteristics

Abstract

The invention provides a photocuring 3D printing method, a 3D printer, a computer device, and a computer-readable storage medium. The photocuring 3D printing method comprises the following steps: performing layering processing on a three-dimensional model to obtain a plurality of cross-sectional layers and acquiring the contour information of each cross-sectional layer; selecting, from a cross-sectional layer, a line segment formed by two adjacent edge contour points, determining the pixels the line segment passes through, and calculating the ratio of the area of the inner region into which the line segment divides a pixel to the area of the pixel; determining the gray value of the inner region according to the area ratio; and, when a gray map is generated based on the contour information of the cross-sectional layer, writing the gray value of the inner region into the gray map. Edge anti-aliasing is thus achieved by processing the edge contour information rather than by post-processing the gray map, which improves 3D printing accuracy.

Description

Photocuring 3D printing method, 3D printer, computer device and medium
Technical Field
The invention relates to the field of Three-dimensional (3D) printing, in particular to a photocuring 3D printing method, a 3D printer, a computer device and a computer readable storage medium.
Background
3D printing, also known as rapid prototyping or additive manufacturing, builds a three-dimensional object layer by layer from a digital model file using bondable materials such as special waxes, powdered metals, or plastics. In existing photocuring 3D printing, the 3D model is first sliced into layers, each layer is converted into a gray map whose pixels carry different gray values, and each layer image is then exposed to light; different gray values cure different amounts of photosensitive resin. Because a model edge generally spans several pixels in the image and occupies a different fraction of each, the edge pixels are exposed with different gray values so that different amounts of resin cure, giving precise edge control.
The conventional way to handle edge-pixel gray values is to apply edge anti-aliasing to the image in software, i.e., purely visual anti-aliasing. The slicing software generates the gray map needed for printing from the contour of each layer; the edges already carry some error when the image is generated, and the subsequent anti-aliasing operation introduces a second error on top of it, so the edges of the printed model still deviate considerably. Solving this problem is a concern for those skilled in the art.
Disclosure of Invention
In view of the foregoing, the present invention provides a photocuring 3D printing method, a 3D printer, a computer device, and a computer-readable storage medium, which perform edge processing on contour data before generating a grayscale map, so as to achieve edge anti-aliasing and improve 3D printing accuracy.
An embodiment of the application provides a photocuring 3D printing method, including: performing layering processing on a three-dimensional model to obtain a plurality of cross-sectional layers and acquiring the contour information of each cross-sectional layer; selecting, from a cross-sectional layer, a line segment formed by two adjacent edge contour points, determining the pixels the line segment passes through, and calculating the ratio of the area of the inner region into which the line segment divides a pixel to the area of the pixel; determining the gray value of the inner region according to the area ratio; and, when a gray map is generated based on the contour information of the cross-sectional layer, writing the gray value of the inner region into the gray map.
In some embodiments, the photocuring 3D printing method further comprises: and modeling the object to be 3D printed to obtain the three-dimensional model.
In some embodiments, the step of determining the pixel point through which the line segment passes includes: acquiring coordinates of two edge contour points according to the contour information of the cross section layer; calculating the slope of the line segment based on the coordinates of the two edge contour points; and determining pixel points through which the line segment passes according to the calculated slope and the coordinates of the two edge contour points.
In some embodiments, the step of calculating the area ratio of the inner region formed by dividing the pixel by the line segment to the pixel includes: acquiring all line segments passing through the pixel; and calculating the ratio of the area of the inner region jointly formed by all of those line segments dividing the pixel to the area of the pixel.
In some embodiments, the gray value of the inner region is calculated by the formula 255 × (S1/S2), where S1 is the area of the inner region and S2 is the area of the pixel.
In some embodiments, the step of modifying the gray value of the inner region to the gray map comprises: modifying the gray value of the sub-pixel region corresponding to the inner region on the gray map to the gray value of the inner region; or modifying the gray value of the whole pixel corresponding to the inner region on the gray map to the gray value of the inner region.
In some embodiments, the step of calculating the area ratio of the inner region formed by dividing the pixel by the line segment to the pixel comprises: connecting the line segments formed by adjacent edge contour points end to end to form a closed contour figure.
An embodiment of the application provides a 3D printer, and the 3D printer may execute the steps of the photocuring 3D printing method.
An embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores a plurality of computer programs, and the processor is configured to control a 3D printer to execute the steps of the photocuring 3D printing method when executing the computer programs stored in the memory.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, controls a 3D printer to perform the steps of the above-mentioned photocuring 3D printing method.
Unlike the existing approach of applying edge anti-aliasing to an already generated gray map, the photocuring 3D printing method, 3D printer, computer device, and computer-readable storage medium described above compute edge gray values from the contour information obtained by slicing the model before the gray map is generated, and write the computed edge gray values into the subsequently generated gray map. Edge anti-aliasing is thus achieved by processing the edge contour information rather than by post-processing the gray map, which improves 3D printing accuracy.
Drawings
Fig. 1 is a flowchart illustrating steps of a photocuring 3D printing method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a cross-sectional layer obtained by performing a layering process on a 3D model according to an embodiment of the present invention.
Fig. 3 is a functional block diagram of a photocuring 3D printing apparatus according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of a 3D printer according to an embodiment of the invention.
FIG. 5 is a diagram of a computer device according to an embodiment of the present invention.
Description of the main elements
Photocuring 3D printing device 10
Memory 20
Processor 30
First computer program 42
Second computer program 44
Layer module 101
Computing Module 102
Determination module 103
Modification module 104
3D Printer 100
Computer device 200
Main controller 1001
Extruder module 1002
Motor 1003
Manipulation display module 1004
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings. In addition, the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; the described embodiments are merely some, rather than all, of the embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The application provides a photocuring 3D printing method, which comprises the following steps: performing layering processing on a three-dimensional model to obtain a plurality of cross-sectional layers and acquiring the contour information of each cross-sectional layer; selecting, from a cross-sectional layer, a line segment formed by two adjacent edge contour points, determining the pixels the line segment passes through, and calculating the ratio of the area of the inner region into which the line segment divides a pixel to the area of the pixel; determining the gray value of the inner region according to the area ratio; and, when a gray map is generated based on the contour information of the cross-sectional layer, writing the gray value of the inner region into the gray map.
In the photocuring 3D printing method, edge processing is performed on the contour data before the gray map is generated, so edge anti-aliasing is achieved and 3D printing accuracy can be improved.
The photocuring 3D printing method can be applied to a 3D printer or to a computer device. The computer device may be any device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The computer device may be a desktop computer, a notebook computer, a server, an industrial computer, or other computing equipment, and may interact with a user through a keyboard, mouse, remote control, touch panel, voice-control device, or the like.
Fig. 1 is a flowchart illustrating steps of a preferred embodiment of a photo-curing 3D printing method according to the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
Referring to fig. 1, the photocuring 3D printing method may specifically include the following steps.
Step S11, performing layering processing on the 3D model to obtain a plurality of cross-sectional layers, and acquiring the contour information of each cross-sectional layer.
In some embodiments, the 3D model may be layered using preset slicing software, obtaining a plurality of cross-sectional layers, and obtaining contour information of each cross-sectional layer.
In some embodiments, information of an object to be 3D printed may be acquired and a three-dimensional model corresponding to the object to be 3D printed may be built. The object to be 3D printed may be specified according to actual requirements, and is not limited herein.
Step S12, selecting a line segment formed by two adjacent edge contour points from a section layer, determining pixel points through which the line segment passes, and calculating the area ratio of an inner area formed by dividing the pixel points by the line segment to the pixel points.
In some embodiments, a cross-sectional layer may be selected arbitrarily from the plurality of cross-sectional layers, and a line segment formed by two adjacent edge contour points may be selected arbitrarily from that layer. The pixels the line segment passes through are then determined: for example, the slope of the line segment may be calculated from the coordinates of the two edge contour points, and the pixels it passes through may be determined from that slope together with the coordinates of the two edge contour points.
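As an illustrative sketch (not the patent's own code), the pixels a contour segment passes through can be enumerated by sampling points along the segment, assuming unit-square pixels indexed by their lower-left corner; the function name and sampling density are our assumptions:

```python
from math import floor

def pixels_crossed(p0, p1):
    """Return the set of unit-grid pixel cells (ix, iy) that the segment
    from p0 to p1 passes through, found by dense sampling along the segment."""
    (x1, y1), (x2, y2) = p0, p1
    cells = set()
    # Oversample relative to the segment's extent so no crossed cell is skipped.
    n = max(1, int(4 * max(abs(x2 - x1), abs(y2 - y1))))
    for i in range(n + 1):
        t = i / n
        x = x1 + t * (x2 - x1)
        y = y1 + t * (y2 - y1)
        cells.add((floor(x), floor(y)))
    return cells
```

For a horizontal segment from (0.5, 0.5) to (2.5, 0.5), this yields the three cells (0, 0), (1, 0), and (2, 0). An exact traversal (a DDA/Amanatides-Woo grid walk) could replace the sampling, but the sketch suffices to illustrate the step.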
In some embodiments, when a line segment passes through a pixel point, the pixel point may be divided into an inner region and an outer region, the inner region being a region located within the profile of the cross-sectional layer, and the outer region being a region located outside the profile of the cross-sectional layer. After determining which pixels the line segment passes through, the area ratio of the pixels occupied by the inner area formed by dividing the pixels by the line segment can be calculated.
Fig. 2 is a schematic outline diagram of a cross-sectional layer S1 obtained by layering a 3D model. The edge contour points of the cross-sectional layer S1 include points A, B, C, D, E, and F, and the line segments formed by adjacent edge contour points are connected end to end to form a closed contour figure. Assume that a line segment L1, formed by two adjacent edge contour points A and B, is selected from the cross-sectional layer S1, and that L1 passes through pixels P1, P2, and P3. The coordinates of the edge contour points A-F can be obtained from the contour information of the cross-sectional layer S1, and the coordinates of each pixel are known; that is, the coordinates of the corner points M, O, N, Z of pixel P2 are known, for example M = (a1, b1) and N = (a2, b2), so that Z = (a2, b1). Assume the coordinates of point A are (x1, y1) and those of point B are (x2, y2), and denote the intersections of the line segment L1 with pixel P2 as R = (x, b1) and S = (a2, y); the line segment L1 then divides pixel P2 into an inner region RZS and an outer region RMONS. The equation of the line through segment L1 is:
(x - x1)/(x2 - x1) = (y - y1)/(y2 - y1) … i;
the equation of the straight line ZN is:
x = a2 … ii;
and the equation of the straight line MZ is:
y = b1 … iii.
Solving equations i and ii simultaneously gives the value of y, and solving equations i and iii simultaneously gives the value of x, from which the coordinates of the intersection points R and S are obtained.
Once the coordinates of R and S have been calculated, the area S11 of the inner region RZS can be computed from the coordinates of R, S, and the corner point Z. Likewise, the area S12 of the quadrilateral MONZ can be computed from the coordinates of the corner points M, O, N, Z; S12 is the area of pixel P2. The area ratio of the inner region RZS, formed by the line segment L1 dividing pixel P2, to pixel P2 is therefore S11/S12.
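A minimal sketch of the intersection step, under the assumption that pixel edges are axis-aligned (one vertical edge x = const, one horizontal edge y = const); the helper names and the example coordinates are illustrative, not from the patent:

```python
def intersect_horizontal(x1, y1, x2, y2, yc):
    """Intersection of the line through (x1, y1)-(x2, y2) with the
    horizontal line y = yc (assumes the line is not itself horizontal)."""
    t = (yc - y1) / (y2 - y1)
    return (x1 + t * (x2 - x1), yc)

def intersect_vertical(x1, y1, x2, y2, xc):
    """Intersection of the same line with the vertical line x = xc
    (assumes the line is not itself vertical)."""
    t = (xc - x1) / (x2 - x1)
    return (xc, y1 + t * (y2 - y1))

# Hypothetical contour segment from A=(0.25, 1.5) to B=(1.5, 0.25) clipping
# the corner of a unit pixel whose top edge lies on y = 1 and right edge on x = 1.
R = intersect_horizontal(0.25, 1.5, 1.5, 0.25, 1.0)  # approx. (0.75, 1.0)
S = intersect_vertical(0.25, 1.5, 1.5, 0.25, 1.0)    # approx. (1.0, 0.75)
```

These two helpers correspond to solving equation pair i/iii for R and pair i/ii for S.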
In some embodiments, if two or more line segments pass through a pixel, all of those line segments may be obtained, and the ratio of the area of the inner region they jointly divide out of the pixel to the area of the pixel is calculated.
For example, for pixel P3, the coordinates of its corner points N, Z, V, U are known. The two adjacent edge contour points B and C form a line segment L2; both L1 and L2 pass through pixel P3, L1 intersecting pixel P3 at S and B and L2 intersecting pixel P3 at B and W. Together, L1 and L2 divide pixel P3 into an inner region ZSBWV and two outer regions SNB and BUW. Using a similar calculation, the coordinates of the intersection points S and W can be obtained, and the area S13 of the inner region ZSBWV can then be computed from the coordinates of S, B, W and of the corner points Z and V. Likewise, the area S14 of the quadrilateral NZVU can be computed from the coordinates of the corner points N, Z, V, U; S14 is the area of pixel P3. The area ratio of the inner region ZSBWV, formed by the line segments L1 and L2 dividing pixel P3, to pixel P3 is therefore S13/S14.
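Once the ordered vertices of an inner polygon are known (a triangle such as RZS, or the five-sided inner region of pixel P3), its area follows from the shoelace formula; a sketch with hypothetical coordinates:

```python
def polygon_area(pts):
    """Shoelace formula for a simple polygon given as ordered (x, y) vertices."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        xa, ya = pts[i]
        xb, yb = pts[(i + 1) % n]
        s += xa * yb - xb * ya
    return abs(s) / 2.0

# Illustrative corner triangle analogous to RZS inside a unit pixel:
# R=(0.75, 1), corner Z=(1, 1), S=(1, 0.75); pixel area is 1.
ratio = polygon_area([(0.75, 1.0), (1.0, 1.0), (1.0, 0.75)]) / 1.0
```

The same function handles the multi-segment case: feed it the vertices Z, S, B, W, V in order and divide by the pixel area S14 to obtain the ratio S13/S14.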
Step S13, determining the gray value of the inner region according to the calculated area ratio.
In an embodiment, when the area ratio of the inner region to the pixel has been calculated, the gray value of the inner region is obtained by the formula 255 × (S1/S2), where S1 is the area of the inner region and S2 is the area of the pixel.
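The formula maps an area ratio in [0, 1] to an 8-bit gray level; a one-line sketch (the rounding choice is our assumption, as the patent does not specify quantization):

```python
def inner_gray(s1, s2):
    """Gray value of an inner region: 255 * (S1 / S2), quantized to 0-255."""
    return int(round(255 * (s1 / s2)))
```

For example, an inner region covering 1/32 of a pixel gives inner_gray(0.03125, 1.0) == 8.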
For example, the gray value of the inner region RZS is calculated to be 255 × (S11/S12), and the gray value of the inner region ZSBWV to be 255 × (S13/S14).
Step S14, when generating a gray map based on the contour information of the cross-sectional layer, modifying the gray value of the inner region to the gray map.
In an embodiment, when the gray map is generated based on the contour information of the cross-sectional layer S1, the gray values of the inner regions can be written into the gray map, which performs the edge anti-aliasing operation on the generated gray map. Light is then projected onto each layer's gray map, and different gray values cure different amounts of photosensitive resin, producing precise edge control and improving the 3D printing accuracy.
In an embodiment, the gray value of the sub-pixel region corresponding to the inner region on the gray map may be modified to the gray value of the inner region. For example, the gray value of the region corresponding to the inner region RZS is modified to 255 × (S11/S12); that is, only that partial region of pixel P2 on the gray map takes the gray value 255 × (S11/S12).
In an embodiment, the gray value of the whole pixel corresponding to the inner region on the gray map may instead be modified to the gray value of the inner region. For example, the gray value of the pixel containing the inner region RZS is modified to 255 × (S11/S12); that is, the gray value of the entire pixel P2 on the gray map becomes 255 × (S11/S12).
It is understood that, for the pixels crossed by the line segments formed by the other edge contour points of the cross-sectional layer S1, the gray value of the inner region can be written to the gray map in the same manner. The other cross-sectional layers of the 3D model are processed in the same way as the cross-sectional layer S1, completing the edge anti-aliasing operation and achieving precise edge control of the 3D model.
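The whole-pixel variant of step S14 can be sketched as follows, using a plain 2D list for the gray map; the function and argument names are illustrative assumptions:

```python
def apply_edge_grays(gray_map, edge_grays):
    """Write computed edge gray values into the gray map in place.
    gray_map: 2D list of ints (row-major); edge_grays: {(row, col): gray}.
    Corresponds to modifying the gray value of the whole pixel."""
    for (r, c), g in edge_grays.items():
        gray_map[r][c] = g
    return gray_map

# A 3x3 fully cured (white) layer image with one edge pixel anti-aliased to gray 8.
layer = [[255] * 3 for _ in range(3)]
apply_edge_grays(layer, {(0, 2): 8})
```

The sub-pixel variant would instead rasterize the inner region's footprint at a finer resolution before writing, which is omitted here.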
Unlike the existing approach of applying edge anti-aliasing to an already generated gray map, the photocuring 3D printing method computes edge gray values from the contour information obtained by slicing the model before the gray map is generated, and writes the computed edge gray values into the subsequently generated gray map. Edge anti-aliasing is thus achieved by processing the edge contour information rather than by post-processing the gray map, which improves the 3D printing accuracy.
Fig. 3 is a functional block diagram of a photocuring 3D printing apparatus according to a preferred embodiment of the invention.
Referring to fig. 3, the photocuring 3D printing apparatus 10 may include a layering module 101, a calculation module 102, a determination module 103, and a modification module 104.
The layering module 101 is configured to perform layering processing on the 3D model to obtain a plurality of cross-section layers, and obtain profile information of each cross-section layer.
In some embodiments, the layering module 101 may perform layering processing on the 3D model based on preset slicing software to obtain a plurality of cross-sectional layers, and obtain contour information of each cross-sectional layer.
In some embodiments, information of an object to be 3D printed may be acquired and a three-dimensional model corresponding to the object to be 3D printed may be built. The object to be 3D printed may be specified according to actual requirements, and is not limited herein.
The calculating module 102 is configured to select a line segment formed by two adjacent edge contour points from a cross-sectional layer, determine pixel points through which the line segment passes, and calculate an area ratio of an inner region formed by dividing the pixel points by the line segment to the pixel points.
In some embodiments, the calculation module 102 may select a cross-sectional layer arbitrarily from the plurality of cross-sectional layers and select, arbitrarily from that layer, a line segment formed by two adjacent edge contour points. It then determines the pixels the line segment passes through: for example, it may calculate the slope of the line segment from the coordinates of the two edge contour points, and determine the pixels the segment passes through from that slope together with the coordinates of the two edge contour points.
In some embodiments, when a line segment passes through a pixel point, the pixel point may be divided into an inner region and an outer region, the inner region being a region located within the profile of the cross-sectional layer, and the outer region being a region located outside the profile of the cross-sectional layer. After determining which pixels the line segment passes through, the area ratio of the pixels occupied by the inner area formed by dividing the pixels by the line segment can be calculated.
Fig. 2 is a schematic outline diagram of a cross-sectional layer S1 obtained by layering a 3D model. The edge contour points of the cross-sectional layer S1 include points A, B, C, D, E, and F, and the line segments formed by adjacent edge contour points are connected end to end to form a closed contour figure. Assume that a line segment L1, formed by two adjacent edge contour points A and B, is selected from the cross-sectional layer S1, and that L1 passes through pixels P1, P2, and P3. The coordinates of the edge contour points A-F can be obtained from the contour information of the cross-sectional layer S1, and the coordinates of each pixel are known; that is, the coordinates of the corner points M, O, N, Z of pixel P2 are known, for example M = (a1, b1) and N = (a2, b2), so that Z = (a2, b1). Assume the coordinates of point A are (x1, y1) and those of point B are (x2, y2), and denote the intersections of the line segment L1 with pixel P2 as R = (x, b1) and S = (a2, y); the line segment L1 then divides pixel P2 into an inner region RZS and an outer region RMONS. The equation of the line through segment L1 is:
(x - x1)/(x2 - x1) = (y - y1)/(y2 - y1) … i;
the equation of the straight line ZN is:
x = a2 … ii;
and the equation of the straight line MZ is:
y = b1 … iii.
Solving equations i and ii simultaneously gives the value of y, and solving equations i and iii simultaneously gives the value of x, from which the coordinates of the intersection points R and S are obtained.
Once the coordinates of R and S have been calculated, the area S11 of the inner region RZS can be computed from the coordinates of R, S, and the corner point Z. Likewise, the area S12 of the quadrilateral MONZ can be computed from the coordinates of the corner points M, O, N, Z; S12 is the area of pixel P2. The area ratio of the inner region RZS, formed by the line segment L1 dividing pixel P2, to pixel P2 is therefore S11/S12.
In some embodiments, if two or more line segments pass through a pixel, the calculation module 102 may obtain all of those line segments and calculate the ratio of the area of the inner region they jointly divide out of the pixel to the area of the pixel.
For example, for pixel P3, the coordinates of its corner points N, Z, V, U are known. The two adjacent edge contour points B and C form a line segment L2; both L1 and L2 pass through pixel P3, L1 intersecting pixel P3 at S and B and L2 intersecting pixel P3 at B and W. Together, L1 and L2 divide pixel P3 into an inner region ZSBWV and two outer regions SNB and BUW. Using a similar calculation, the coordinates of the intersection points S and W can be obtained, and the area S13 of the inner region ZSBWV can then be computed from the coordinates of S, B, W and of the corner points Z and V. Likewise, the area S14 of the quadrilateral NZVU can be computed from the coordinates of the corner points N, Z, V, U; S14 is the area of pixel P3. The area ratio of the inner region ZSBWV, formed by the line segments L1 and L2 dividing pixel P3, to pixel P3 is therefore S13/S14.
The determining module 103 is configured to determine a gray value of the inner region according to the calculated area ratio.
In an embodiment, when the area ratio of the inner region to the pixel point is obtained through calculation, the determining module 103 obtains the gray value of the inner region through the following formula: 255 × (S1/S2), where S1 is the area of the inner region and S2 is the area of the pixel point.
For example, the gray value of the inner region RZS of the pixel point P2 can be calculated as 255 × (S11/S12), and the gray value of the inner region of the pixel point P3 as 255 × (S13/S14).
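As a minimal numeric check of this formula, with illustrative values rather than values from the patent:

```python
def inner_gray(s1, s2):
    """Gray value of the inner region: 255 * (S1 / S2)."""
    return 255 * (s1 / s2)

half = inner_gray(0.5, 1.0)   # an inner region covering half of its pixel: 127.5
full = inner_gray(1.0, 1.0)   # a fully covered pixel: 255.0
```

The result is quantized to an integer gray level before being written into the 8-bit gray-scale map.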
The modification module 104 is configured to modify the gray value of the inner region to a gray map when generating the gray map based on the profile information of the cross-sectional layer.
In an embodiment, when the gray-scale map is generated based on the profile information of the cross-section layer S1, the modification module 104 may write the gray value of the inner region into the gray-scale map, thereby performing the edge anti-aliasing operation on the generated map. Light is then projected through each layer's gray-scale map; different gray values expose different amounts of photosensitive resin, so different amounts of resin are cured. This gives precise edge control and improves 3D printing precision.
In an embodiment, the modification module 104 may modify the gray value of the pixel-point region corresponding to the inner region on the gray map to the gray value of the inner region. For example, the gray value of the region corresponding to the inner region RZS on the gray map is modified to 255 × (S11/S12); that is, only the gray value of that partial region of the pixel point P2 on the gray map is modified.
In an embodiment, the modification module 104 may also modify the gray value of the whole pixel point corresponding to the inner region on the gray map to the gray value of the inner region. For example, the gray value of the pixel point corresponding to the inner region RZS on the gray map is modified to 255 × (S11/S12); that is, the gray value of the pixel point P2 on the gray map is modified to 255 × (S11/S12).
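This per-pixel write-back can be sketched with a small raster of nested lists standing in for the real gray-scale map; the map size, coordinates, and coverage value are illustrative assumptions:

```python
def write_inner_gray(gray_map, row, col, inner_area, pixel_area):
    """Overwrite one pixel of the gray-scale map with 255 * (S1 / S2),
    quantized and clamped to the 0-255 range of an 8-bit gray level."""
    level = int(round(255 * inner_area / pixel_area))
    gray_map[row][col] = max(0, min(255, level))
    return gray_map

# A blank 4x4 gray-scale map (a stand-in for the layer's raster)
gray = [[0] * 4 for _ in range(4)]
write_inner_gray(gray, 1, 2, 0.25, 1.0)  # an edge pixel one quarter covered by the inner region
```

Interior pixels of the cross-section would keep the full gray value, while edge pixels receive the coverage-weighted value computed above.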
It is understood that, for the other pixel points crossed by line segments formed by the remaining edge contour points of the cross-section layer S1, the gray value of the inner region can be written into the gray map in the manner described above. The other cross-section layers of the 3D model can be processed in the same way as the cross-section layer S1 to complete the edge anti-aliasing operation and achieve precise edge control of the 3D model.
Unlike existing approaches that perform edge anti-aliasing on an already generated gray-scale image, the photocuring 3D printing apparatus calculates edge gray values from the profile information obtained by slicing the model before the gray-scale image is generated, and writes the calculated edge gray values into the subsequently generated gray-scale image. In other words, edge anti-aliasing is achieved by processing the edge profile information rather than by post-processing the gray-scale image, which improves 3D printing precision.
FIG. 4 is a schematic diagram of a 3D printer according to a preferred embodiment of the present invention.
The 3D printer 100 includes a main controller 1001, an extruder module 1002, a motor 1003, and a manipulation display module 1004.
The main controller 1001 may drive the motor 1003 according to a program, drive the manipulation display module 1004 to display information, perform data communication, and so on. The extruder module 1002 may include an extruder, a heating rod, and the like, enabling heating and extrusion of the consumable. The manipulation display module 1004 may include buttons, a touch display, etc., allowing a user to input control commands and displaying the usage and printing status of the 3D printer 100.
The first computer program 42 may be stored in a Flash memory of the main controller 1001, and the main controller 1001 may implement the steps in the above-described embodiment of the photo-curing 3D printing method, such as the steps S11 to S14 shown in fig. 1, when the first computer program 42 is executed by the main controller 1001. Alternatively, the main controller 1001 may implement the functions of the modules in the above-described embodiment of the photo-curing 3D printing apparatus, such as the modules 101 to 104 in fig. 3, when executing the first computer program 42.
Illustratively, the first computer program 42 may be partitioned into one or more modules/units that are stored in the Flash memory of the host controller 1001 and executed by the host controller 1001 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the first computer program 42 in the 3D printer 100. For example, the first computer program 42 may be divided into a hierarchy module 101, a calculation module 102, a determination module 103, and a modification module 104 in FIG. 3.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the 3D printer 100 and does not limit it; the 3D printer 100 may include more or fewer components than shown, combine certain components, or have different components. For example, the 3D printer 100 may further include a communication module and the like.
FIG. 5 is a diagram of a computer device according to a preferred embodiment of the present invention.
The computer device 200 comprises a memory 20, a processor 30, and a second computer program 44 stored in the memory 20 and executable on the processor 30. The processor 30, when executing the second computer program 44, implements the steps of controlling the 3D printer 100 to perform the above-described photocuring 3D printing method embodiments, such as steps S11-S14 shown in fig. 1. Alternatively, the processor 30, when executing the second computer program 44, implements the functions of controlling the 3D printer 100 to execute the modules in the above-described embodiment of the photocuring 3D printing apparatus, such as the modules 101 to 104 in fig. 3.
Illustratively, the second computer program 44 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the second computer program 44 in the computer device 200. For example, the second computer program 44 may likewise be divided into a hierarchy module 101, a calculation module 102, a determination module 103, and a modification module 104 in FIG. 3.
The computer device 200 may be a desktop computer, a notebook, a palm computer, an industrial computer, a tablet computer, a server, or other computing devices. Those skilled in the art will appreciate that the depicted schematic diagram is merely an example of computer apparatus 200 and does not constitute a limitation of computer apparatus 200 and may include more or fewer components than those shown, or some components may be combined, or different components, e.g., computer apparatus 200 may also include input-output devices, network access devices, buses, etc.
The Processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor or the like.
The memory 20 may be used to store the second computer program 44 and/or the modules/units, and the processor 30 implements the various functions of the computer device 200 by running or executing the computer programs and/or modules/units stored in the memory 20 and calling data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the computer device 200 (such as audio data), and the like. In addition, the memory 20 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The modules/units integrated by the computer device 200, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, may implement the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
In the embodiments provided in the present invention, it should be understood that the disclosed computer apparatus and method can be implemented in other ways. For example, the above-described embodiments of the computer apparatus are merely illustrative, and for example, the division of the units is only one logical function division, and there may be other divisions when the actual implementation is performed.
In addition, functional units in the embodiments of the present invention may be integrated into the same processing unit, or each unit may exist alone physically, or two or more units are integrated into the same unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The units or computer means recited in the claims may also be implemented by the same unit or computer means, either in software or in hardware. The terms first, second, etc. are used to denote names and do not imply any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (10)

1. A photocuring 3D printing method, comprising:
carrying out layering processing on the three-dimensional model to obtain a plurality of section layers, and acquiring the outline information of each section layer;
selecting a line segment formed by two adjacent edge contour points from the section layer, determining pixel points through which the line segment passes, and calculating the area ratio of an inner area formed by dividing the pixel points by the line segment to the pixel points;
determining the gray value of the inner area according to the area ratio;
when a gray map is generated based on the profile information of the cross-section layer, the gray value of the inner region is modified to the gray map.
2. The photocuring 3D printing method of claim 1, further comprising:
and modeling the object to be 3D printed to obtain the three-dimensional model.
3. The photocuring 3D printing method of claim 1 or 2, wherein the step of determining the pixel points through which the line segment passes comprises:
acquiring coordinates of two edge contour points according to the contour information of the cross section layer;
calculating the slope of the line segment based on the coordinates of the two edge contour points;
and determining pixel points through which the line segment passes according to the slope and the coordinates of the two edge contour points.
4. The photocuring 3D printing method according to claim 1 or 2, wherein the step of calculating an area ratio of an inner area formed by dividing the pixel points by the line segment to the pixel points includes:
acquiring all line segments passing through the pixel points;
and calculating the area ratio of an inner area formed by dividing the pixel points together by all the line segments to the pixel points.
5. The photocuring 3D printing method according to claim 1 or 2, wherein the gray value of the inner area is calculated by the following formula: 255 × (S1/S2), wherein S1 is the area of the inner area, and S2 is the area of the pixel points.
6. The photocuring 3D printing method of claim 1 or 2, wherein the step of modifying the grayscale value of the inner region to the grayscale map comprises:
modifying the gray value of the pixel region corresponding to the inner region on the gray map into the gray value of the inner region; or
And modifying the gray value of the pixel point corresponding to the inner area on the gray map into the gray value of the inner area.
7. The photocuring 3D printing method according to claim 1, wherein the step of calculating an area ratio of an inner area formed by dividing the pixel points by the line segment to the pixel points is preceded by:
and connecting the line segments formed by two adjacent edge contour points end to form a closed contour graph.
8. A3D printer, characterized in that the 3D printer performs the steps of the photocuring 3D printing method according to any one of claims 1 to 7.
9. A computer device comprising a processor and a memory, the memory having stored thereon a number of computer programs, wherein the processor is configured to control a 3D printer to perform the steps of the photocuring 3D printing method as claimed in any one of claims 1 to 7 when executing the computer programs stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, controls a 3D printer to carry out the steps of the photocuring 3D printing method according to any one of claims 1 to 7.
CN202011596912.1A 2020-12-28 2020-12-28 Photocuring 3D printing method, 3D printer, computer device and medium Pending CN112743851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011596912.1A CN112743851A (en) 2020-12-28 2020-12-28 Photocuring 3D printing method, 3D printer, computer device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011596912.1A CN112743851A (en) 2020-12-28 2020-12-28 Photocuring 3D printing method, 3D printer, computer device and medium

Publications (1)

Publication Number Publication Date
CN112743851A true CN112743851A (en) 2021-05-04

Family

ID=75646797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011596912.1A Pending CN112743851A (en) 2020-12-28 2020-12-28 Photocuring 3D printing method, 3D printer, computer device and medium

Country Status (1)

Country Link
CN (1) CN112743851A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112008980A (en) * 2020-02-24 2020-12-01 清锋(北京)科技有限公司 3D printing model processing method and system


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409222A (en) * 2021-06-30 2021-09-17 深圳市纵维立方科技有限公司 Image processing method, printing-related apparatus, and readable storage medium
CN114379095A (en) * 2021-12-07 2022-04-22 宁波智造数字科技有限公司 Method for correcting n-butanol phenomenon in photocuring 3D printing
CN114379095B (en) * 2021-12-07 2024-03-05 宁波智造数字科技有限公司 Method for correcting Tyndall phenomenon in photo-curing 3D printing
CN114407364A (en) * 2021-12-31 2022-04-29 深圳市纵维立方科技有限公司 Three-dimensional model slicing method, printing system and electronic equipment
CN114407364B (en) * 2021-12-31 2023-10-24 深圳市纵维立方科技有限公司 Slicing method, printing system and electronic equipment of three-dimensional model
CN115187469A (en) * 2022-06-02 2022-10-14 深圳市纵维立方科技有限公司 Image processing method and device in 3D printing, storage medium and terminal
CN116061563A (en) * 2023-03-07 2023-05-05 苏州希盟科技股份有限公司 Ink-jet printing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112743851A (en) Photocuring 3D printing method, 3D printer, computer device and medium
CN107958460B (en) Instance-level semantic segmentation system
US20230081400A1 (en) Enhanced three dimensional printing of vertical edges
US20070208464A1 (en) System and method of interactively compiling a database for an in-vehicle display device
US20180285535A1 (en) Digital Image Processing including Refinement Layer, Search Context Data, or DRM
US10737437B2 (en) Method of compensating for inhibitor permeable film deformation in the manufacture of three-dimensional objects
US20180286023A1 (en) Digital Image Processing through use of an Image Repository
CN107608957A (en) Text modification method, apparatus and its equipment based on voice messaging
CN107206676A (en) Utilize the structure of three-dimensional halftone process
US11693873B2 (en) Systems and methods for using entity/relationship model data to enhance user interface engine
Parisi et al. Soft thought (in architecture and choreography)
EP3298587A1 (en) Multiscale 3d texture synthesis
CN111191161B (en) Page display method, storage medium, electronic device and system
CN112172155A (en) Edge softening method and device for 3D printing, storage medium and 3D printer
CN115631282A (en) Method and system for drawing point cloud three-dimensional continuous Bessel curve and storage medium
US11370165B2 (en) Method for improving resolution in LCD screen based 3D printers
CN114693532A (en) Image correction method and related equipment
US9911229B2 (en) Transmission and configuration of three dimensional digital content
CN114417617A (en) Nested word model generation method and device, electronic equipment and readable storage medium
JP7428303B1 (en) Characteristic prediction device, characteristic prediction method and program
CN111310433B (en) Lock line binding makeup method, readable storage medium and computer equipment
JP4474727B2 (en) Product promotion image creation method, promotion image display method, promotion image creation device, and program recording medium
CN110612193A (en) Correlating a print coverage matrix with an object attribute matrix
CN116206089A (en) Control method, device and equipment for model making
CN114442967A (en) 3D model printing display method, 3D printer, computer device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210504