CN113393567A - 3D printing method and device, computer equipment and storage medium - Google Patents

Info

Publication number
CN113393567A
Authority
CN
China
Prior art keywords
projection
pixel point
surface pixel
distance
projection screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110604070.8A
Other languages
Chinese (zh)
Other versions
CN113393567B (en)
Inventor
敖丹军
刘辉林
刘洪�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Chuangxiang 3D Technology Co Ltd
Original Assignee
Shenzhen Chuangxiang 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Chuangxiang 3D Technology Co Ltd filed Critical Shenzhen Chuangxiang 3D Technology Co Ltd
Priority to CN202110604070.8A priority Critical patent/CN113393567B/en
Publication of CN113393567A publication Critical patent/CN113393567A/en
Application granted granted Critical
Publication of CN113393567B publication Critical patent/CN113393567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30 Auxiliary operations or equipment
    • B29C64/386 Data acquisition or data processing for additive manufacturing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00 Data acquisition or data processing for additive manufacturing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2008 Assembling, disassembling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)

Abstract

The application relates to a 3D printing method and apparatus, a computer device and a storage medium. The method is applied to a 3D printing system and comprises the following steps: a processing device in the 3D printing system slices a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model; the processing device corrects each of the plurality of slice images according to a preset mapping relation to obtain a plurality of corrected slice images, the mapping relation representing the correspondence between an original projection position of a projection beam of a projection device before the beam passes through a projection screen and an offset projection position after the beam passes through the projection screen; and the processing device sends the corrected slice images to the projection device, so that the projection beam of the projection device sequentially projects the corrected slice images and the object to be printed is formed on the surface of the projection screen. The method makes the printed object the same as, or closer to, the intended object in size, thereby improving the printing precision.

Description

3D printing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of 3D printing technologies, and in particular, to a 3D printing method and apparatus, a computer device, and a storage medium.
Background
Digital Light Processing (DLP) is an imaging technique in which a video signal is processed digitally and the resulting image is then projected as light.
DLP printing comprises the following steps: a 3D model of the object to be printed is generated, the 3D model is sliced to obtain a plurality of slice images of the 3D model, the slice images are projected in sequence onto a projection screen by a projection device, and photosensitive resin on the other side of the projection screen is cured and molded layer by layer, finally yielding the object to be printed.
Because the projection screen has a certain thickness and the projection light source of the projection device is not a parallel light source, an error arises when a slice image projected onto one side surface of the projection screen reaches the other side surface (the photosensitive resin side). The slice image formed on the photosensitive resin side of the projection screen is therefore larger than the intended slice image, and as the layers accumulate, the printed object becomes larger than the intended object, which reduces the printing precision.
Disclosure of Invention
In view of the above, it is necessary to provide a 3D printing method, apparatus, computer device and storage medium to address the above technical problems.
A 3D printing method is applied to a 3D printing system, the 3D printing system comprises a processing device, a projection device and a projection screen, and the method is characterized by comprising the following steps:
the method comprises the steps that a processing device conducts slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model;
the processing equipment corrects each slice image in the slice images according to a preset mapping relation to obtain a plurality of corrected slice images; the mapping relation is used for representing the corresponding relation between an original projection position of a projection beam of the projection equipment before passing through the projection screen and an offset projection position after passing through the projection screen;
the processing device sends the corrected slice images to the projection device so that the projection beams of the projection device sequentially project the corrected slice images and form an object to be printed on the surface of the projection screen.
In one embodiment, the modifying each of the plurality of slice images according to a preset mapping relationship to obtain a plurality of modified slice images includes:
determining the actual position of a contour pixel point corresponding to the contour of each slice image;
taking the actual position of the contour pixel point as an offset projection position, and determining an original projection position corresponding to the contour pixel point according to the mapping relation;
and determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
In one embodiment, the original projection position is the coordinate of the first surface pixel point, and the offset projection position is the coordinate of the second surface pixel point;
the first surface pixel points are pixel points corresponding to a projection area formed by the projection light beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to a projection area formed by the same projection light beams on the second surface of the projection screen; the first surface is close to a projection light source emitting a projection light beam, and the second surface is far away from the projection light source.
In one embodiment, the process of determining a mapping relationship includes:
obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen;
and generating a mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
In one embodiment, obtaining the coordinates of the first surface pixel point corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen, and the thickness of the projection screen includes:
determining the distance between the second surface pixel point and the coordinate origin; wherein, the origin of coordinates is the surface central point of the projection screen;
determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen and the thickness of the projection screen;
and obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the origin of coordinates and the projection distance.
In one embodiment, determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen, and the thickness of the projection screen includes:
multiplying the distance between the second surface pixel point and the coordinate origin and the thickness of the projection screen to obtain a first product;
and performing division operation processing on the first product and the distance between the projection light source and the projection screen to obtain the projection distances of the first surface pixel points and the second surface pixel points on the second surface.
In one embodiment, the obtaining of the coordinates of the first surface pixel point corresponding to the second surface pixel point according to the coordinates of the second surface pixel point, the distance between the second surface pixel point and the origin of coordinates, and the projection distance includes:
subtracting the first quotient from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point; the first quotient value is the quotient of the product of the x value and the projection distance of the second surface pixel point and the distance between the second surface pixel point and the coordinate origin;
subtracting the second quotient from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point; and the second quotient value is the quotient of the product of the y value of the second surface pixel point and the projection distance and the distance between the second surface pixel point and the coordinate origin.
A 3D printing device is applied to a 3D printing system, the 3D printing system comprises a processing device, a projection device and a projection screen, and the device comprises:
the model slicing module is used for carrying out slicing processing on the 3D model of the object to be printed through the processing equipment to obtain a plurality of slice images of the 3D model;
the contour correction module is used for correcting the contour of each slice image in the plurality of slice images according to a preset mapping relation through processing equipment to obtain a plurality of corrected slice images; the mapping relation is used for representing the mapping relation between an original projection position of a projection beam of the projection equipment before the projection beam passes through the projection screen and an offset projection position of the projection beam after the projection beam passes through the projection screen;
and the projection molding module is used for sending the corrected slice images to the projection equipment through the processing equipment so that the projection beams of the projection equipment sequentially project the corrected slice images and form the object to be printed on the surface of the projection screen.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
the method comprises the steps that a processing device conducts slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model;
the processing equipment corrects each slice image in the slice images according to a preset mapping relation to obtain a plurality of corrected slice images; the mapping relation is used for representing the corresponding relation between an original projection position of a projection beam of the projection equipment before passing through the projection screen and an offset projection position after passing through the projection screen;
the processing device sends the corrected slice images to the projection device so that the projection beams of the projection device sequentially project the corrected slice images and form an object to be printed on the surface of the projection screen.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
the method comprises the steps that a processing device conducts slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model;
the processing equipment corrects each slice image in the slice images according to a preset mapping relation to obtain a plurality of corrected slice images; the mapping relation is used for representing the corresponding relation between an original projection position of a projection beam of the projection equipment before passing through the projection screen and an offset projection position after passing through the projection screen;
the processing device sends the corrected slice images to the projection device so that the projection beams of the projection device sequentially project the corrected slice images and form an object to be printed on the surface of the projection screen.
According to the 3D printing method and device, the computer device and the storage medium, the processing device in the 3D printing system slices the 3D model of the object to be printed to obtain a plurality of slice images of the 3D model. The processing device corrects each of the plurality of slice images according to the preset mapping relation, which represents the correspondence between the original projection position of the projection beam of the projection device before it passes through the projection screen and the offset projection position after it passes through the projection screen, to obtain a plurality of corrected slice images. The processing device then sends the corrected slice images to the projection device, so that the projection beam of the projection device sequentially projects the corrected slice images and the object to be printed is formed on the surface of the projection screen. Because the mapping relation between the original projection position of the projection beam before it passes through the projection screen and the offset projection position after it passes through the projection screen is determined in advance, each slice image of the 3D model can be corrected according to the mapping relation before the projection device projects it onto the projection screen. The corrected slice image is offset as it passes through the projection screen, so that the slice image before correction is formed on the photosensitive-resin-side surface of the projection screen, and the size of the model in the slice image before correction is the same as that of the 3D model of the object to be printed. The object generated from the slice images before correction formed on the photosensitive-resin-side surface of the projection screen is therefore the same as, or closer to, the intended object, which improves the printing precision.
Drawings
FIG. 1 is a diagram of an application environment of a 3D printing method in one embodiment;
FIG. 2 is a schematic flow chart of a 3D printing method in one embodiment;
FIG. 3 is a schematic flow chart of obtaining a corrected slice image in one embodiment;
FIG. 4 is a schematic diagram of a process for determining a mapping relationship in another embodiment;
FIG. 5 is a schematic diagram of an embodiment of a projection light source configured to form first surface pixels and second surface pixels on a projection screen;
FIG. 6 is a schematic flow chart illustrating the process of determining coordinates of first surface pixels corresponding to second surface pixels in one embodiment;
FIG. 7 is a schematic diagram of an interface of a second surface of a projection screen in one embodiment;
FIG. 8 is a flowchart illustrating the process of determining the projection distances of the first surface pixels and the second surface pixels on the second surface according to one embodiment;
FIG. 9 is a schematic flow chart illustrating the process of determining coordinates of first surface pixels corresponding to second surface pixels in another embodiment;
FIG. 10 is a block diagram showing a configuration of a 3D printing apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The 3D printing system 100, as shown in fig. 1, includes a processing device 101, a projection device 102, and a projection screen 103. When 3D printing is performed with the 3D printing system 100 shown in fig. 1, the processing device 101 generates a 3D model of the object to be printed and slices the 3D model to obtain a plurality of slice images of the 3D model. The processing device 101 sends the obtained slice images to the projection device 102, the projection device 102 projects the slice images onto the projection screen 103 in sequence using an internal projection light source, and the photosensitive resin on the other side of the projection screen 103 is cured and molded layer by layer to obtain the object to be printed. However, since the projection light source in the projection device 102 is generally a point light source and the projection screen 103 has a certain thickness, a projection beam emitted by the projection light source is shifted after passing through the projection screen 103, and the shift becomes more severe the farther the beam lands from the center of the projection screen 103. As a result, the slice image formed on the photosensitive resin side of the projection screen 103 is larger than the intended slice image, and as the layers accumulate, the printed object is also larger than the intended object, thereby reducing the printing accuracy.
The 3D printing method provided by the application can be applied to a 3D printing system shown in FIG. 1. The processing device 101 slices a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model; the processing device 101 corrects each slice image of the plurality of slice images according to a preset mapping relationship to obtain a plurality of corrected slice images; the mapping relationship is used to represent a mapping relationship between an original projection position of the projection beam of the projection device 102 before passing through the projection screen 103 and an offset projection position after passing through the projection screen 103. The processing device 101 sends the corrected slice image to the projection device 102 so that the projection beam of the projection device 102 sequentially projects the corrected slice image and forms an object to be printed on the surface of the projection screen. The processing device 101 may be, but is not limited to, various personal computers, laptops, smartphones, tablets, and portable wearable devices, among others.
In one embodiment, as shown in fig. 2, a 3D printing method is provided, which is described by taking the method as an example applied to the 3D printing system in fig. 1, and includes the following steps:
s210, the processing equipment performs slicing processing on the 3D model of the object to be printed to obtain a plurality of slice images of the 3D model.
The slice images of the 3D model are cross-sectional images of the 3D model. For example, a slice image of a spherical 3D model is a circular image; a slice image of a cylindrical 3D model is either a circular image (when sliced along the horizontal axis) or a rectangular image (when sliced along the vertical axis).
Optionally, the processing device forms a 3D model of the object to be printed based on 3D modeling software, and slices the 3D model using model slicing software to obtain a plurality of slice images of the 3D model. The 3D modeling software and the model slicing software may be existing model processing software for the purpose of modeling and slicing, and are not limited specifically herein.
Alternatively, the processing device may take one end of the 3D model as a starting point and slice the 3D model toward the other end with a preset thickness to obtain a plurality of slice images of the 3D model. The central point of the 3D model may also be used as the starting point, with slicing performed toward both ends of the 3D model at the preset thickness to obtain the plurality of slice images. Alternatively, the processing device may slice along the longitudinal axis of the 3D model or along the lateral axis of the 3D model.
Specifically, in this embodiment, the processing device takes one end of the 3D model as the starting point, for example the bottom of the 3D model, and slices toward the other end, for example the top of the 3D model, with a preset thickness along the horizontal axis of the 3D model, so as to obtain a plurality of slice images of the 3D model.
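As a minimal illustrative sketch (not part of the patent text), slicing at a preset thickness from one end of the model toward the other can be pictured as computing a sequence of cutting heights; the function name and parameters below are assumptions.

```python
# Minimal sketch (assumed helper, not from the patent): the z heights at which a model
# spanning [z_min, z_max] is cut when slicing from the bottom toward the top with a
# preset layer thickness.
def slice_heights(z_min, z_max, layer_thickness):
    n_layers = int((z_max - z_min) / layer_thickness) + 1
    return [z_min + i * layer_thickness for i in range(n_layers)]


# Example: a 10 mm tall model sliced at a 0.05 mm preset thickness.
print(slice_heights(0.0, 10.0, 0.05)[:3])  # [0.0, 0.05, 0.1]
```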
S220, the processing equipment corrects each slice image in the slice images according to a preset mapping relation to obtain a plurality of corrected slice images.
The mapping relation is used for representing the mapping relation between an original projection position of a projection light beam of the projection equipment before passing through the projection screen and an offset projection position after passing through the projection screen.
Optionally, before the 3D printing method provided by the present application is performed, the preset mapping relation may be determined by using the geometric relationship among the projection light source, the surface center point of the projection screen, the original projection position of a projection beam emitted by the projection light source before it passes through the projection screen, and the offset projection position after it passes through the projection screen. The preset mapping relation may also be determined by actual measurement.
Optionally, taking the central point of the slice image as the coordinate origin, the processing device obtains the actual coordinates of the pixel points of each of the plurality of slice images and takes these actual coordinates as offset projection positions in the mapping relation, so as to determine, according to the mapping relation, the original projection position corresponding to the actual coordinates of each pixel point of each slice image; the pixel points at the original projection positions then form the corrected slice image for each slice image.
And S230, sending the corrected slice images to the projection equipment by the processing equipment, enabling the projection beams of the projection equipment to sequentially project the corrected slice images, and forming the object to be printed on the surface of the projection screen.
Specifically, the processing device sends the obtained corrected slice images to the projection device. The projection device emits projection beams through the projection light source to project the corrected slice images onto the projection screen in sequence. After the projected corrected slice image undergoes the pixel point offset through the projection screen, the slice image before correction is formed on the photosensitive resin side of the projection screen, and the photosensitive resin on that side is then photocured layer by layer, so that the object to be printed is formed on the surface of the photosensitive resin side of the projection screen.
In this embodiment, the processing device in the 3D printing system slices the 3D model of the object to be printed to obtain a plurality of slice images of the 3D model. The processing device corrects each of the plurality of slice images according to the preset mapping relation, which represents the correspondence between the original projection position of the projection beam of the projection device before it passes through the projection screen and the offset projection position after it passes through the projection screen, to obtain a plurality of corrected slice images. The processing device then sends the corrected slice images to the projection device, so that the projection beam of the projection device sequentially projects the corrected slice images and the object to be printed is formed on the surface of the projection screen. Because the mapping relation between the original projection position of the projection beam before it passes through the projection screen and the offset projection position after it passes through the projection screen is determined in advance, each slice image of the 3D model can be corrected according to the mapping relation before the projection device projects it onto the projection screen. The corrected slice image is offset as it passes through the projection screen, so that the slice image before correction is formed on the photosensitive-resin-side surface of the projection screen, and the size of the model in the slice image before correction is the same as that of the 3D model of the object to be printed. The object generated from the slice images before correction formed on the photosensitive-resin-side surface of the projection screen is therefore the same as, or closer to, the intended object, which improves the printing precision.
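To make the flow of S210 to S230 concrete, the following is a minimal sketch (not the patent's implementation) of applying a precomputed mapping relation to every slice image before it is sent to the projection device; the data representation (slice images as sets of lit pixel coordinates) and the function name are assumptions.

```python
# Minimal sketch (assumed representation): each slice image is a set of (x, y) pixel
# coordinates that should be lit, and the mapping relation is a dict from an offset
# projection position on the second surface to the original projection position on the
# first surface.
def correct_slice_images(slice_images, mapping):
    corrected = []
    for image in slice_images:
        # Replace every pixel's actual (offset) position with its original position.
        corrected.append({mapping[pixel] for pixel in image if pixel in mapping})
    return corrected
```

The corrected images would then be handed to the projection device in order, as described in S230.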
In one embodiment, to improve the 3D printing efficiency, as shown in fig. 3, the step S220 includes:
and S310, determining the actual position of the contour pixel point corresponding to the contour of each slice image.
The contour of a slice image refers to its outermost ring of pixels, not including the middle region. If the slice image is a circular image, the contour of the slice image refers to the circle at the outermost periphery of the circular image; if the slice image is a rectangular image, the contour refers to the rectangle at the outermost periphery of the rectangular image.
Specifically, the processing device takes the center point of the slice image as the origin of coordinates and extracts the actual coordinates of the contour pixel points corresponding to the contour of each slice image as the actual positions of the contour pixel points. For example, if the slice image is a circular image, the processing device extracts the actual coordinates of the pixel points on the circle at the outermost periphery of the circular image as the actual positions of those pixel points.
And S320, taking the actual position of the contour pixel point as an offset projection position, and determining the original projection position corresponding to the contour pixel point according to the mapping relation.
S330, determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
Specifically, the processing device uses the obtained actual position of the contour pixel point as an offset projection position in the mapping relationship, so as to determine an original projection position corresponding to the actual position of each contour pixel point corresponding to each slice image according to the mapping relationship, form a corrected image contour by the pixel point at the original projection position corresponding to the contour pixel point in each slice image, and further generate a corrected slice image corresponding to the image contour according to the corrected image contour.
In this embodiment, the processing device determines the actual position of the contour pixel points corresponding to the contour of each slice image, uses the actual position of each contour pixel point as an offset projection position, determines the original projection position corresponding to the contour pixel point according to the mapping relation, determines the corrected image contour according to the original projection positions corresponding to the contour pixel points, and then generates the corrected slice image corresponding to the image contour according to the corrected image contour. In this correction process, only the actual positions of the contour pixel points corresponding to the contour of the slice image are corrected, rather than all the pixel points in the whole slice image; since the size of the formed model is determined by the image contour, this correction mode greatly reduces the amount of pixel point data that must be corrected, and the improved correction efficiency improves the efficiency of the whole 3D printing process.
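A minimal sketch of the contour-only correction described in S310 to S330 (helper names are assumptions): only the contour pixel points are looked up in the mapping relation, and the corrected slice image is then rebuilt from the corrected contour.

```python
# Minimal sketch (assumed helpers): correct only the contour pixel points of a slice image.
def correct_contour(contour_pixels, mapping):
    """contour_pixels: actual positions of the contour pixel points, treated as offset
    projection positions; mapping: offset position -> original projection position."""
    return [mapping[pixel] for pixel in contour_pixels]

# The corrected slice image corresponding to the image contour would then be generated
# by filling the interior of the corrected contour (e.g. with a polygon fill), which is
# omitted here.
```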
In one embodiment, as shown in FIG. 1, the projection screen 103 includes a first surface and a second surface disposed opposite to each other, wherein the first surface is close to a projection light source emitting a projection light beam, and the second surface is far from the projection light source and close to the photosensitive resin. The original projection position is the coordinate of the first surface pixel point, and the offset projection position is the coordinate of the second surface pixel point. The first surface pixel points are pixel points corresponding to projection areas formed by projection beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to projection areas formed by the same projection beams on the second surface of the projection screen.
As shown in fig. 4, in order to further improve the efficiency of 3D printing, the process of determining the mapping relationship includes:
s410, obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen.
Optionally, for each second surface pixel point, the projection distance between the first surface pixel point and the corresponding second surface pixel point on the same surface of the projection screen is obtained by using the similarity of the triangles formed by the projection light source, the surface center point of the projection screen, and the first and second surface pixel points produced by the same projection beam on the projection screen. The coordinates of the first surface pixel point corresponding to each second surface pixel point are then determined from the geometric relationship between the first surface pixel point and the corresponding second surface pixel point on the same surface of the projection screen.
For example, as shown in fig. 5, the projection distance is obtained by using the similarity between the first triangle formed by the surface center point O2 of the second surface of the projection screen 103, the projection light source P and the second surface pixel point F2 and the second triangle formed by the surface center point O1 of the first surface of the projection screen 103, the projection light source P and the first surface pixel point F1. Or, the projection distance is obtained by using the similarity between the first triangle constructed by the surface center point O2 of the second surface of the projection screen 103, the projection light source P and the second surface pixel point F2 and the third triangle constructed by the first surface pixel point F1, the corresponding second surface pixel point F2 and the projection pixel point F1' of the first surface pixel point F1 on the second surface.
And S420, generating a mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
Specifically, after determining the coordinates of each first surface pixel point and the coordinates of the corresponding second surface pixel point, the processing device generates a mapping relationship including all the second surface pixel points on the projection screen according to the correspondence between the coordinates of each first surface pixel point and the coordinates of the corresponding second surface pixel point.
In this embodiment, the processing device obtains the coordinates of the first surface pixel point corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen, and the thickness of the projection screen, and then generates the mapping relation from the correspondence between the coordinates of each first surface pixel point and the coordinates of the corresponding second surface pixel point. Because the processing procedure is determined in advance, the mapping relation can be generated from only three parameters: the coordinates of the second surface pixel points, the distance between the projection light source and the projection screen, and the thickness of the projection screen. The processing efficiency is therefore high, which further improves the efficiency of 3D printing.
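A minimal sketch of S410 to S420 (not the patent's code): the mapping relation is generated as a lookup from each second surface pixel coordinate to the coordinate of its corresponding first surface pixel. The coordinate computation itself is passed in as a function, since it is detailed in the embodiments that follow (a sketch of it appears after S920 below); all names are assumptions.

```python
# Minimal sketch (assumed names): build the mapping relation from the coordinates of the
# second surface pixel points, the distance p between the projection light source and
# the projection screen, and the thickness t of the projection screen.
def build_mapping(second_surface_pixels, first_surface_coordinates, p, t):
    """first_surface_coordinates(x, y, p, t) -> (xp, yp) is the geometric computation
    described in the following embodiments."""
    return {(x, y): first_surface_coordinates(x, y, p, t)
            for (x, y) in second_surface_pixels}
```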
In one embodiment, as shown in fig. 6, the above S410 includes:
and S610, determining the distance between the second surface pixel point and the coordinate origin.
Wherein, the origin of coordinates is the surface central point of the projection screen.
Specifically, for each second surface pixel point on the projection screen, the processing device obtains a distance between each second surface pixel point and a surface center point of the projection screen. Taking the second surface pixel F2(x, y) in fig. 5 as an example, the processing device obtains the distance l between the second surface pixel F2(x, y) and the surface center point O2 corresponding to the second surface of the projection screen 103 as l = √(x² + y²).
s620, determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen and the thickness of the projection screen.
Optionally, as shown in fig. 5, the processing device obtains the projection distance by using the similarity between a first triangle constructed by the surface center point O2 of the second surface of the projection screen, the projection light source P and the second surface pixel point F2, and a third triangle constructed by the first surface pixel point F1, the corresponding second surface pixel point F2 and the projection point F1' of the first surface pixel point F1 on the second surface. For example, using the equal ratios of corresponding sides of the similar triangles, the ratio of the distance p between the projection light source P and the projection screen to the distance l between the second surface pixel point F2 and the coordinate origin O2 equals the ratio of the thickness t of the projection screen to the projection distance s of the first surface pixel point F1 and the second surface pixel point F2 on the second surface, that is, p/l = t/s, so the projection distance s can be obtained from p, l and t.
S630, obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the coordinate origin and the projection distance.
Optionally, as shown in fig. 7, on the second surface of the projection screen, a coordinate system is constructed with the surface center point O2 of the second surface as the origin of coordinates, the horizontal direction as the x axis and the vertical direction as the y axis. The processing device obtains the coordinates of the first surface pixel point F1 corresponding to the second surface pixel point F2 by using the similarity between a fourth triangle constructed by the second surface pixel point F2, the surface center point O2 of the second surface and the foot T1 at which the straight line l1 through F2 perpendicular to the x axis meets the x axis, and a fifth triangle constructed by the second surface pixel point F2, the projection point F1' of the first surface pixel point F1 on the second surface and the foot T2 at which the straight line l2 through F1' parallel to the x axis meets l1. Ignoring the z-axis direction, the coordinates (xp, yp) of the first surface pixel point F1 are the same as the coordinates of the projection point F1'.
For example, using the equal ratios of corresponding sides of the similar triangles, the ratio of the distance x between the surface center point O2 and the foot T1 to the distance l between the second surface pixel point F2 and the surface center point O2 equals the ratio of the distance x - xp between the projection point F1' and the foot T2 to the projection distance s of the first surface pixel point F1 and the second surface pixel point F2 on the second surface, that is, x/l = (x - xp)/s, so the abscissa xp of the first surface pixel point F1 corresponding to the second surface pixel point F2 can be obtained from x, l and s. Likewise, the ratio of the distance y between the second surface pixel point F2 and the foot T1 to the distance l between the second surface pixel point F2 and the surface center point O2 equals the ratio of the distance y - yp between the second surface pixel point F2 and the foot T2 to the projection distance s, that is, y/l = (y - yp)/s, so the ordinate yp of the first surface pixel point F1 corresponding to the second surface pixel point F2 can be obtained from y, l and s.
In an alternative embodiment, as shown in fig. 8, the process of determining the projection distance s includes:
and S810, multiplying the distance between the second surface pixel point and the coordinate origin and the thickness of the projection screen to obtain a first product.
Specifically, the processing device multiplies the distance l between the second surface pixel point F2 and the origin of coordinates and the thickness t of the projection screen to obtain a first product, i.e., l × t.
S820, the first product and the distance between the projection light source and the projection screen are subjected to division operation processing, and the projection distance of the first surface pixel point and the second surface pixel point on the second surface is obtained.
Specifically, the processing device divides the first product l × t by the distance p between the projection light source P and the projection screen to obtain the projection distance s of the first surface pixel point and the second surface pixel point on the second surface, that is, s = l × t / p.
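As a minimal sketch of S810 to S820 (assuming the notation l, t and p from the description), the projection distance is a single multiply and divide:

```python
# Minimal sketch: projection distance s of the first and second surface pixel points on
# the second surface, from the distance l to the coordinate origin, the screen thickness
# t, and the distance p between the projection light source and the projection screen.
def projection_distance(l, t, p):
    return (l * t) / p  # first product l*t, then divided by p
```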
In an optional embodiment, as shown in fig. 9, the coordinates of the second surface pixel point include an x value and a y value, and the coordinates of the corresponding first surface pixel point also include an x value and a y value, and determining the coordinates of the first surface pixel point F1 corresponding to the second surface pixel point F2 includes:
s910, subtracting the first quotient from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point.
The first quotient is the quotient of the product of the x value of the second surface pixel point and the projection distance, divided by the distance between the second surface pixel point and the coordinate origin, namely (x × s)/l.
Specifically, the abscissa xp of the first surface pixel point F1 corresponding to the second surface pixel point F2 is xp = x - (x × s)/l.
S920, subtracting the second quotient from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point.
The second quotient is the quotient of the product of the y value of the second surface pixel point and the projection distance, divided by the distance between the second surface pixel point and the coordinate origin, namely (y × s)/l.
Specifically, the ordinate yp of the first surface pixel point F1 corresponding to the second surface pixel point F2 is yp = y - (y × s)/l.
In this embodiment, the processing device uses the geometric relationship among the projection light source, the surface center point of the projection screen, the first surface pixel point formed by a projection beam on the first surface of the projection screen, and the corresponding second surface pixel point formed by the same projection beam on the second surface of the projection screen to construct similar triangles, and calculates the coordinates of the first surface pixel point corresponding to each second surface pixel point from the similarity of those triangles.
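Putting the embodiments of fig. 6, 8 and 9 together, the following minimal sketch (assumed function name) computes the first surface pixel coordinates (xp, yp) for a second surface pixel point (x, y), given the distance p between the projection light source and the projection screen and the screen thickness t.

```python
import math

# Minimal sketch combining S610-S630, S810-S820 and S910-S920 (names are assumptions).
def first_surface_coordinates(x, y, p, t):
    l = math.hypot(x, y)        # S610: distance from the second surface pixel to the origin
    if l == 0.0:
        return (x, y)           # a pixel at the surface center point is not offset
    s = l * t / p               # S620/S820: projection distance s = l * t / p
    xp = x - (x * s) / l        # S910: x value of the first surface pixel point
    yp = y - (y * s) / l        # S920: y value of the first surface pixel point
    return (xp, yp)


# Example under assumed values: screen thickness 2 mm, light source 100 mm from the
# screen, second surface pixel at (30, 40) mm; the radial offset is l*t/p = 1 mm.
print(first_surface_coordinates(30.0, 40.0, 100.0, 2.0))  # (29.4, 39.2)
```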
It should be understood that although the various steps in the flow charts of fig. 2-9 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-9 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided a 3D printing apparatus applied to a 3D printing system, the 3D printing system including a processing device, a projection device, and a projection screen, the apparatus including: a model slicing module 1001, a contour correction module 1002, and a projection modeling module 1003, wherein:
the model slicing module 1001 is configured to perform slicing processing on a 3D model of an object to be printed through a processing device to obtain a plurality of slice images of the 3D model;
the contour correction module 1002 is configured to perform correction processing on a contour of each of the multiple slice images according to a preset mapping relationship by using a processing device, so as to obtain multiple corrected slice images; the mapping relation is used for representing the mapping relation between an original projection position of a projection beam of the projection equipment before the projection beam passes through the projection screen and an offset projection position of the projection beam after the projection beam passes through the projection screen;
the projection molding module 1003 is used for sending the corrected slice image to the projection device through the processing device, so that the projection beam of the projection device sequentially projects the corrected slice image and forms the object to be printed on the surface of the projection screen.
In one embodiment, the contour correction module 1002 is specifically configured to:
determining the actual position of a contour pixel point corresponding to the contour of each slice image; taking the actual position of the contour pixel point as an offset projection position, and determining an original projection position corresponding to the contour pixel point according to the mapping relation; and determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
In one embodiment, the original projection position is the coordinate of the first surface pixel point, and the offset projection position is the coordinate of the second surface pixel point; the first surface pixel points are pixel points corresponding to a projection area formed by the projection light beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to a projection area formed by the same projection light beams on the second surface of the projection screen; the first surface is close to a projection light source emitting a projection light beam, and the second surface is far away from the projection light source.
In one embodiment, the contour modification module 1002 is specifically configured to:
obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen; and generating a mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
In one embodiment, the contour modification module 1002 is specifically configured to:
determining the distance between the second surface pixel point and the coordinate origin; wherein, the origin of coordinates is the surface central point of the projection screen; determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen and the thickness of the projection screen; and obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the origin of coordinates and the projection distance.
In one embodiment, the contour modification module 1002 is specifically configured to:
multiplying the distance between the second surface pixel point and the coordinate origin and the thickness of the projection screen to obtain a first product; and performing division operation processing on the first product and the distance between the projection light source and the projection screen to obtain the projection distances of the first surface pixel points and the second surface pixel points on the second surface.
In one embodiment, the coordinates of the second surface pixel point include an x value and a y value, and the corresponding coordinates of the first surface pixel point also include an x value and a y value, and the contour correction module 1002 is specifically configured to:
subtracting the first quotient from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point; the first quotient value is the quotient of the product of the x value and the projection distance of the second surface pixel point and the distance between the second surface pixel point and the coordinate origin; subtracting the second quotient from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point; and the second quotient value is the quotient of the product of the y value of the second surface pixel point and the projection distance and the distance between the second surface pixel point and the coordinate origin.
For specific limitations of the 3D printing apparatus, reference may be made to the above limitations of the 3D printing method, which are not described herein again. The respective modules in the 3D printing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a 3D printing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
the method comprises the steps that a processing device conducts slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model; the processing equipment corrects each slice image in the slice images according to a preset mapping relation to obtain a plurality of corrected slice images; the mapping relation is used for representing the corresponding relation between an original projection position of a projection beam of the projection equipment before passing through the projection screen and an offset projection position after passing through the projection screen; the processing device sends the corrected slice images to the projection device so that the projection beams of the projection device sequentially project the corrected slice images and form an object to be printed on the surface of the projection screen.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the actual position of a contour pixel point corresponding to the contour of each slice image; taking the actual position of the contour pixel point as an offset projection position, and determining an original projection position corresponding to the contour pixel point according to the mapping relation; and determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
In one embodiment, the original projection position is the coordinate of the first surface pixel point, and the offset projection position is the coordinate of the second surface pixel point; the first surface pixel points are pixel points corresponding to a projection area formed by the projection light beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to a projection area formed by the same projection light beams on the second surface of the projection screen; the first surface is close to a projection light source emitting a projection light beam, and the second surface is far away from the projection light source.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen; and generating a mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
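As an illustration of how such a mapping could be generated, the sketch below iterates over the second-surface pixel grid and computes the corresponding first-surface coordinates; it uses the closed form x1 = x2 * (1 - t / D), which follows algebraically from the step-by-step computation detailed in the following paragraphs. The name build_mapping and the grid-centred coordinate convention are assumptions, not taken from the application.

```python
from typing import Dict, Tuple

Coord = Tuple[int, int]

def build_mapping(width: int, height: int,
                  light_to_screen: float, screen_thickness: float) -> Dict[Coord, Coord]:
    """Map each second-surface (offset) pixel to its first-surface (original) pixel.
    Coordinates are measured relative to the surface centre point of the projection screen."""
    cx, cy = width / 2.0, height / 2.0
    # Closed form of the per-pixel steps described below: x1 = x2 * (1 - t / D), same for y.
    scale = 1.0 - screen_thickness / light_to_screen
    mapping: Dict[Coord, Coord] = {}
    for py in range(height):
        for px in range(width):
            x2, y2 = px - cx, py - cy                # second-surface coordinates about the centre
            x1, y1 = x2 * scale, y2 * scale          # first-surface coordinates
            mapping[(px, py)] = (int(round(x1 + cx)), int(round(y1 + cy)))
    return mapping
```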
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the distance between the second surface pixel point and the coordinate origin; wherein, the origin of coordinates is the surface central point of the projection screen; determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen and the thickness of the projection screen; and obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the origin of coordinates and the projection distance.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
multiplying the distance between the second surface pixel point and the coordinate origin by the thickness of the projection screen to obtain a first product; and dividing the first product by the distance between the projection light source and the projection screen to obtain the projection distance of the first surface pixel point and the second surface pixel point on the second surface.
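In symbols, writing r for the distance between the second surface pixel point and the coordinate origin, t for the thickness of the projection screen and D for the distance between the projection light source and the projection screen, the projection distance d described above is:

```latex
d = \frac{r \cdot t}{D}
```

For example, with the purely illustrative values r = 50 mm, t = 2 mm and D = 100 mm, the projection distance is d = 50 × 2 / 100 = 1 mm.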
In one embodiment, the coordinates of the second surface pixel point include an x value and a y value, and the corresponding coordinates of the first surface pixel point also include an x value and a y value, and the processor further implements the following steps when executing the computer program:
subtracting the first quotient from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point; wherein the first quotient is the quotient obtained by dividing the product of the x value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the coordinate origin; and subtracting the second quotient from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point; wherein the second quotient is the quotient obtained by dividing the product of the y value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the coordinate origin.
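Putting the last three paragraphs together, the per-pixel computation might look like the sketch below, under the same assumptions as before (coordinates measured from the surface centre point of the projection screen); the name second_to_first is hypothetical.

```python
import math
from typing import Tuple

def second_to_first(x2: float, y2: float,
                    light_to_screen: float, screen_thickness: float) -> Tuple[float, float]:
    """Recover the first-surface (original) coordinates from the second-surface (offset)
    coordinates, following the step-by-step computation described above."""
    r = math.hypot(x2, y2)                            # distance to the coordinate origin
    if r == 0.0:
        return 0.0, 0.0                               # the centre pixel is not offset
    d = (r * screen_thickness) / light_to_screen      # projection distance: first product over source-screen distance
    x1 = x2 - (x2 * d) / r                            # x value minus the first quotient
    y1 = y2 - (y2 * d) / r                            # y value minus the second quotient
    return x1, y1
```

Algebraically these steps reduce to x1 = x2 * (1 - t / D) and y1 = y2 * (1 - t / D), which is the closed form used in the build_mapping sketch above.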
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
the processing device performs slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model; the processing device corrects each slice image in the plurality of slice images according to a preset mapping relation to obtain a plurality of corrected slice images; the mapping relation is used for representing the corresponding relation between an original projection position of a projection beam of the projection device before passing through the projection screen and an offset projection position after passing through the projection screen; and the processing device sends the corrected slice images to the projection device so that the projection beams of the projection device sequentially project the corrected slice images and form the object to be printed on the surface of the projection screen.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the actual position of a contour pixel point corresponding to the contour of each slice image; taking the actual position of the contour pixel point as an offset projection position, and determining an original projection position corresponding to the contour pixel point according to the mapping relation; and determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
In one embodiment, the original projection position is the coordinate of the first surface pixel point, and the offset projection position is the coordinate of the second surface pixel point; the first surface pixel points are pixel points corresponding to a projection area formed by the projection light beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to a projection area formed by the same projection light beams on the second surface of the projection screen; the first surface is close to a projection light source emitting a projection light beam, and the second surface is far away from the projection light source.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen; and generating a mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the distance between the second surface pixel point and the coordinate origin; wherein, the origin of coordinates is the surface central point of the projection screen; determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen and the thickness of the projection screen; and obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the origin of coordinates and the projection distance.
In one embodiment, the computer program when executed by the processor further performs the steps of:
multiplying the distance between the second surface pixel point and the coordinate origin by the thickness of the projection screen to obtain a first product; and dividing the first product by the distance between the projection light source and the projection screen to obtain the projection distance of the first surface pixel point and the second surface pixel point on the second surface.
In one embodiment, the coordinates of the second surface pixel point include an x value and a y value, and the corresponding coordinates of the first surface pixel point also include an x value and a y value, and the computer program when executed by the processor further implements the steps of:
subtracting the first quotient from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point; wherein the first quotient is the quotient obtained by dividing the product of the x value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the coordinate origin; and subtracting the second quotient from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point; wherein the second quotient is the quotient obtained by dividing the product of the y value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the coordinate origin.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, such combinations should be considered within the scope of this specification.
The above-mentioned embodiments only express several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the scope of protection of the present patent shall be subject to the appended claims.

Claims (10)

1. A 3D printing method applied to a 3D printing system, the 3D printing system comprising a processing device, a projection device and a projection screen, characterized in that the method comprises the following steps:
the processing device performs slicing processing on a 3D model of an object to be printed to obtain a plurality of slice images of the 3D model;
the processing device corrects each slice image in the plurality of slice images according to a preset mapping relationship to obtain a plurality of corrected slice images; wherein the mapping relationship is used for representing the corresponding relationship between an original projection position of a projection light beam of the projection device before passing through the projection screen and an offset projection position after passing through the projection screen;
and the processing device sends the corrected slice images to the projection device, so that projection beams of the projection device sequentially project the corrected slice images, and the object to be printed is formed on the surface of the projection screen.
2. The method according to claim 1, wherein correcting each of the plurality of slice images according to the preset mapping relationship to obtain the plurality of corrected slice images comprises:
determining the actual position of a contour pixel point corresponding to the contour of each slice image;
taking the actual position of the contour pixel point as an offset projection position, and determining an original projection position corresponding to the contour pixel point according to the mapping relation;
and determining a corrected image contour according to the original projection position corresponding to the contour pixel point, and generating a corrected slice image corresponding to the image contour according to the corrected image contour.
3. The method of claim 1, wherein the original projection location is a coordinate of a first surface pixel and the offset projection location is a coordinate of a second surface pixel;
the first surface pixel points are pixel points corresponding to projection areas formed by projection beams on the first surface of the projection screen, and the second surface pixel points are pixel points corresponding to projection areas formed by the same projection beams on the second surface of the projection screen; the first surface is close to a projection light source emitting the projection light beam, and the second surface is far away from the projection light source.
4. The method of claim 3, wherein determining the mapping relationship comprises:
obtaining the coordinates of the first surface pixel points corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen and the thickness of the projection screen;
and generating the mapping relation according to the corresponding relation between the coordinate of each first surface pixel point and the coordinate of the corresponding second surface pixel point.
5. The method of claim 4, wherein obtaining the coordinates of the first surface pixel point corresponding to each second surface pixel point according to the coordinates of each second surface pixel point, the distance between the projection light source and the projection screen, and the thickness of the projection screen comprises:
determining a distance between the second surface pixel point and a coordinate origin; wherein the origin of coordinates is a surface center point of the projection screen;
determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the coordinate origin, the distance between the projection light source and the projection screen, and the thickness of the projection screen;
and obtaining the coordinates of the first surface pixel points corresponding to the second surface pixel points according to the coordinates of the second surface pixel points, the distance between the second surface pixel points and the origin of coordinates and the projection distance.
6. The method of claim 5, wherein determining the projection distance of the first surface pixel point and the second surface pixel point on the second surface according to the distance between the second surface pixel point and the origin of coordinates, the distance between the projection light source and the projection screen, and the thickness of the projection screen comprises:
multiplying the distance between the second surface pixel point and the coordinate origin and the thickness of the projection screen to obtain a first product;
and dividing the first product by the distance between the projection light source and the projection screen to obtain the projection distance of the first surface pixel point and the second surface pixel point on the second surface.
7. The method of claim 5, wherein the coordinates of the second surface pixel point include an x value and a y value, and the corresponding coordinates of the first surface pixel point also include an x value and a y value, and obtaining the coordinates of the first surface pixel point corresponding to the second surface pixel point according to the coordinates of the second surface pixel point, the distance between the second surface pixel point and the origin of coordinates, and the projection distance comprises:
subtracting a first quotient value from the x value of the second surface pixel point to obtain the corresponding x value of the first surface pixel point; wherein the first quotient value is the quotient obtained by dividing the product of the x value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the origin of coordinates;
subtracting a second quotient value from the y value of the second surface pixel point to obtain the corresponding y value of the first surface pixel point; wherein the second quotient value is the quotient obtained by dividing the product of the y value of the second surface pixel point and the projection distance by the distance between the second surface pixel point and the coordinate origin.
8. A 3D printing apparatus applied to a 3D printing system, the 3D printing system including a processing device, a projection device and a projection screen, the apparatus comprising:
the model slicing module is used for carrying out slicing processing on the 3D model of the object to be printed through the processing equipment to obtain a plurality of slice images of the 3D model;
the contour correction module is used for correcting the contour of each slice image in the plurality of slice images according to a preset mapping relationship through the processing device to obtain a plurality of corrected slice images; wherein the mapping relationship is used for representing the corresponding relationship between an original projection position of a projection light beam of the projection device before passing through the projection screen and an offset projection position of the projection light beam after passing through the projection screen;
and the projection forming module is used for sending the corrected slice images to the projection equipment through the processing equipment so as to enable projection beams of the projection equipment to sequentially project the corrected slice images and form the object to be printed on the surface of the projection screen.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110604070.8A 2021-05-31 2021-05-31 3D printing method, device, computer equipment and storage medium Active CN113393567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110604070.8A CN113393567B (en) 2021-05-31 2021-05-31 3D printing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113393567A true CN113393567A (en) 2021-09-14
CN113393567B CN113393567B (en) 2024-05-17

Family

ID=77619644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110604070.8A Active CN113393567B (en) 2021-05-31 2021-05-31 3D printing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113393567B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100214537A1 (en) * 2009-02-23 2010-08-26 Thomas Clarence E System and Methods for Angular Slice True 3-D Display
KR20170118398A (en) * 2016-04-15 2017-10-25 주식회사 하나올테크 Auto correction method of the size and arrangement of image projected through a DLP 3D Printer
CN108724726A (en) * 2018-05-24 2018-11-02 广东石油化工学院 A kind of photosensitive resin 3D printer of LCD light source
CN112102460A (en) * 2020-09-17 2020-12-18 上海复志信息技术有限公司 3D printing slicing method, device, equipment and storage medium
CN112677487A (en) * 2020-12-30 2021-04-20 上海联泰科技股份有限公司 Control method and control system for 3D printing and 3D printing equipment
CN112819936A (en) * 2021-01-29 2021-05-18 深圳锐沣科技有限公司 Three-dimensional printing method, three-dimensional printing device, three-dimensional printing control equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN Jiang et al.: "Adaptive slicing method for preventing feature deviation of 3D printed models", Journal of Hefei University of Technology (Natural Science Edition), pages 1-6 *

Also Published As

Publication number Publication date
CN113393567B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
KR102461093B1 (en) Depth image providing apparatus and method
CN110047100A (en) Depth information detection method, apparatus and system
JP2015096812A (en) Image processor, imaging device and distance correction method
CN114274501B (en) Continuous printing method and device for 3D printer, computer equipment and storage medium
US20190325593A1 (en) Image processing apparatus, system, method of manufacturing article, image processing method, and non-transitory computer-readable storage medium
CN113591300A (en) 3D printing file generation method and device, computer equipment and storage medium
US10510163B2 (en) Image processing apparatus and image processing method
CN111145167A (en) Flatness detection method and device, computer equipment and storage medium
CN113393567B (en) 3D printing method, device, computer equipment and storage medium
US11043009B2 (en) Method and device for calibrating depth of 3D camera, and computer device
WO2022254854A1 (en) Three-dimensional measurement device
US20220262026A1 (en) Depth image generation method and apparatus, reference image generation method and apparatus, electronic device, and computer-readable storage medium
CN109410304B (en) Projection determination method, device and equipment
CN113487685A (en) Calibration method, device and equipment of line laser scanning camera and storage medium
US10035297B2 (en) Apparatus and method for generating bitmap of 3-dimensional model
US10552975B2 (en) Ranking target dimensions
CN112200864A (en) Image processing method, positioning method, device, equipment and storage medium
US20240029288A1 (en) Image processing apparatus, image processing method, and storage medium
CN117409076B (en) Method, device, computer equipment and storage medium for detecting alignment
CN113360102B (en) Method and device for generating print file, computer equipment and storage medium
CN111460199B (en) Data association method, device, computer equipment and storage medium
CN116295031B (en) Sag measurement method, sag measurement device, computer equipment and storage medium
US20170016721A1 (en) Determination system, determination method, and non-transitory computer readable medium
CN111063036B (en) Three-dimensional character arrangement method, medium, equipment and system based on path planning
CN116147530A (en) Surface imaging method, storage medium and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant