WO2023001306A1 - Exposure surface calibration method for an optical system, calibration measurement method, apparatus, computer device, and storage medium - Google Patents


Info

Publication number
WO2023001306A1
Authority
WO
WIPO (PCT)
Prior art keywords: optical system, image, value, grayscale, exposure surface
Application number
PCT/CN2022/107529
Other languages
English (en)
French (fr)
Inventor
丁鹏
胡骏
曾宏庆
万欣
徐斌
Original Assignee
广州黑格智造信息科技有限公司
Application filed by 广州黑格智造信息科技有限公司
Priority to EP22845469.0A (published as EP4343682A1)
Priority to AU2022314858A (published as AU2022314858A1)
Publication of WO2023001306A1 (in Chinese)
Priority to US18/393,477 (published as US20240131793A1)

Classifications

    • B29C64/386: Data acquisition or data processing for additive manufacturing
    • G06T5/80: Geometric correction
    • B29C64/393: Data acquisition or data processing for controlling or regulating additive manufacturing processes
    • B29C64/264: Arrangements for irradiation
    • B33Y50/00: Data acquisition or data processing for additive manufacturing
    • B33Y50/02: Data acquisition or data processing for controlling or regulating additive manufacturing processes
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T7/001: Industrial image inspection using an image reference approach
    • G06T7/11: Region-based segmentation
    • G06T7/136: Segmentation; edge detection involving thresholding
    • G06T7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission
    • B29C64/129: Processes of additive manufacturing using layers of liquid which are selectively solidified, characterised by the energy source therefor, e.g. by global irradiation combined with a mask
    • B29C64/286: Optical filters, e.g. masks
    • B33Y10/00: Processes of additive manufacturing
    • B33Y30/00: Apparatus for additive manufacturing; details thereof or accessories therefor
    • G06T2207/10052: Images from lightfield camera
    • G06T2207/20221: Image fusion; image merging
    • G06T2207/30144: Printing quality

Definitions

  • The present application relates to the technical field of optical systems, and in particular to an exposure surface calibration method, a calibration measurement method, an apparatus, computer equipment, and a storage medium for an optical system.
  • In the related art, surface-exposure 3D printing in the stereolithography field is usually realized with DLP or LCD technology.
  • A DLP stereolithography printer uses a high-resolution DLP device and an ultraviolet light source to project each cross-section of the three-dimensional model onto the build platform, so that the liquid photopolymer is cured layer by layer.
  • An LCD stereolithography printer is similar, except that an LCD panel replaces the DLP projection system and displays each cross-section directly on the build platform.
  • Both DLP and LCD are surface-exposure printing technologies, and 3D printing requires the irradiance to be consistent across every part of the exposure surface. If the irradiance varies too much within the exposure surface, vertical streaks appear on the surface of the printed product; in severe cases, the print detaches from the build platform and printing fails.
  • The commonly used light-source calibration technique divides the full printing area into several measurement points, measures the irradiance at each point with an optical power meter, and obtains the irradiance distribution over those points. The gray-level compensation value of each point is then computed in reverse from this distribution, and the compensated gray level is projected or displayed in each point's region, thereby calibrating the uniformity of the entire exposure surface.
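The point-based scheme described above can be sketched as follows. This is a hedged illustration, not the patent's method: the point readings are invented, and a roughly linear gray-to-irradiance response is assumed so each region's full-scale gray is simply rescaled toward the dimmest point.

```python
# Hypothetical sketch of the related-art point-based calibration: scale each
# measurement point's gray level so its irradiance matches the dimmest point.
# Readings (mW/cm^2, one per region) are invented for illustration.
irradiance = [
    [2.10, 2.25, 2.18],
    [2.30, 2.40, 2.28],
    [2.12, 2.22, 2.15],
]

target = min(min(row) for row in irradiance)  # dimmest point sets the target

# Assuming gray level maps roughly linearly to irradiance, the compensated
# gray for a region driven at full scale (255) is:
compensated = [[round(255 * target / v) for v in row] for row in irradiance]
```

Everything between two measurement points is silently assumed uniform, which is exactly the accuracy limitation the following paragraphs describe.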
  • However, a point irradiance measured after dividing the format describes only that point; the measurements are discrete point values.
  • These discrete point values stand in for the regional distribution of the corresponding sub-planes; that is, each discrete point is taken to represent a uniform distribution over its plane, and the distribution between two points is replaced by those two points, which lowers the accuracy of the uniformity correction.
  • In addition, with long-term use of the printing equipment, wear and replacement of components in the optical system change the gray compensation values obtained in an earlier calibration, so the uniformity of the printing area must be corrected again and again, which costs considerable manpower and time.
  • The present application provides an exposure surface calibration method, apparatus, computer equipment, and storage medium for an optical system, aiming to improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
  • The present application provides a method for calibrating the exposure surface of an optical system, which may include:
  • using a digital mask to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value over the exposure surface.
  • The present application further provides a calibration measurement method for 3D printing.
  • The calibration measurement method may include: calibrating the exposure surface of the optical system using the exposure surface calibration method described above; acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; obtaining the static contrast of the optical system from the first irradiance value and the second irradiance value; acquiring the irradiance value of each area in a checkerboard image, where the checkerboard image is projected by the calibrated optical system; and processing the irradiance values of the areas with the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
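The two contrast figures above can be sketched numerically. This is an illustration under assumptions: ANSI contrast is conventionally computed over a 16-patch checkerboard (the source only says "checkerboard"), and all irradiance values below are invented.

```python
# Static contrast: full-white irradiance over full-black irradiance.
white_irradiance = 3.2    # first irradiance value (full-white image)
black_irradiance = 0.004  # second irradiance value (full-black image)
static_contrast = white_irradiance / black_irradiance

# ANSI-style dynamic contrast: mean of the white patches over the mean of
# the black patches of a 4x4 checkerboard. Patch irradiances are invented;
# (i + j) even marks a white patch.
checkerboard = [
    [3.1, 0.02, 3.0, 0.03],
    [0.02, 3.2, 0.02, 3.1],
    [3.0, 0.03, 3.1, 0.02],
    [0.02, 3.0, 0.03, 3.2],
]
whites = [v for i, row in enumerate(checkerboard)
          for j, v in enumerate(row) if (i + j) % 2 == 0]
blacks = [v for i, row in enumerate(checkerboard)
          for j, v in enumerate(row) if (i + j) % 2 == 1]
ansi_contrast = (sum(whites) / len(whites)) / (sum(blacks) / len(blacks))
```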
  • The present application further provides a method for calibrating the exposure surface of an optical system, which may include:
  • using compensation parameters to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value over the exposure surface.
  • An exposure surface calibration apparatus for an optical system may include:
  • an image acquisition unit, configured to acquire the grayscale distribution image generated by the photographing module for the exposure surface of the optical system;
  • a fitting unit, configured to divide the grayscale distribution image into a grid image comprising a plurality of segmented regions and to calculate the fitted grayscale value of each segmented region;
  • a selection unit, configured to select the minimum fitted grayscale value among all the calculated fitted grayscale values as the reference grayscale value, and to calculate the grayscale compensation coefficients of the other segmented regions from the reference grayscale value, so as to generate a digital mask; and
  • a mask compensation unit, configured to use the digital mask to perform mask compensation on the projected light image emitted by the optical system, so as to obtain a printed image with a uniform irradiance value over the exposure surface.
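The pipeline these units implement can be sketched end to end. This is a minimal illustration under assumptions: the per-region mean is used as the fitted gray value (only one of the mapping choices the description allows), and the 4x4 "grayscale distribution image" is invented.

```python
# Tiny grayscale distribution image divided into a 2x2 grid of 2x2 regions.
image = [
    [200, 202, 220, 222],
    [198, 200, 221, 219],
    [210, 212, 230, 231],
    [211, 209, 229, 230],
]
rows, cols, cell = 2, 2, 2

def region_mean(r, c):
    # Fitted gray value of one segmented region (mean used as a stand-in).
    vals = [image[r * cell + i][c * cell + j]
            for i in range(cell) for j in range(cell)]
    return sum(vals) / len(vals)

fitted = [[region_mean(r, c) for c in range(cols)] for r in range(rows)]
reference = min(v for row in fitted for v in row)        # minimum fitted gray
mask = [[reference / v for v in row] for row in fitted]  # compensation coeffs

# Mask compensation: scale the projected gray in each region by its coefficient.
compensated = [[round(image[i][j] * mask[i // cell][j // cell])
                for j in range(len(image[0]))] for i in range(len(image))]
```

After compensation, every region's gray lands near the reference value, so the exposure surface is driven uniformly.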
  • The present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the above method for calibrating the exposure surface of the optical system and the above calibration measurement method for 3D printing are implemented.
  • The present application provides a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the above method for calibrating the exposure surface of the optical system and the above calibration measurement method for 3D printing are implemented.
  • The present application provides a computer program product comprising a computer program; when the computer program is executed by a processor, the above method for calibrating the exposure surface of the optical system and the above calibration measurement method for 3D printing are implemented.
  • The present application provides an exposure surface calibration method, apparatus, computer equipment, and storage medium for an optical system.
  • The method includes: performing flat-field correction on the photographing module with a reference light source; acquiring the grayscale distribution image generated by the photographing module for the exposure surface; dividing the grayscale distribution image into a grid image containing multiple segmented areas and calculating the fitted grayscale value of each segmented area; selecting the minimum of all calculated fitted grayscale values as the reference grayscale value and calculating the grayscale compensation coefficients of the other segmented regions from it, so as to generate a digital mask; and using the digital mask to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value over the exposure surface.
  • By obtaining the pixel-level distribution of the irradiance values over the whole exposure surface in place of a low-density discrete distribution, and then applying the corresponding grayscale compensation, the present application can effectively improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
  • FIG. 1 is a schematic flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a method for calibrating an exposure surface of an optical system provided by another embodiment of the present application;
  • FIG. 3 is a schematic sub-flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
  • FIG. 4 is another schematic flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an example of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of another example of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a 3D printing calibration measurement method provided by an embodiment of the present application;
  • FIG. 8 is a schematic block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
  • FIG. 9 is a sub-schematic block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
  • FIG. 10 is another schematic block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
  • FIG. 11 is a schematic block diagram of an optical system provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for calibrating an exposure surface of an optical system according to an embodiment of the present application; the method specifically includes steps S101 to S105.
  • The fitted gray value may refer to the value obtained by mapping the gray distribution within a region. Among all the calculated fitted gray values, a predetermined fitted gray value is selected as the reference gray value.
  • The predetermined fitted gray value can be adjusted according to practical experience with the optical system, and may be any of the calculated fitted gray values as required.
  • For example, the predetermined fitted grayscale value may equal the minimum fitted grayscale value × 50% + the maximum fitted grayscale value × 50%.
  • Alternatively, the predetermined fitted grayscale value may equal the minimum fitted grayscale value × 75% + the maximum fitted grayscale value × 25%. It should be noted that other predetermined values may also be used as the reference gray value.
  • In this embodiment, the minimum fitted gray value is selected as the reference gray value.
  • In this way, the light engine projects at maximum brightness when performing light uniformity calibration.
  • If the reference gray value is not the minimum fitted gray value, the calibration must be performed while reducing the brightness of the light engine, so that the gray value at points below the reference value can still be raised, thereby achieving a uniform calibrated gray value over the exposure surface.
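The reference-gray choices above amount to simple weighted combinations of the minimum and maximum fitted values. A small sketch, with invented fitted values:

```python
# Candidate reference gray values from a set of fitted grays (invented).
fitted_values = [180.0, 205.0, 212.0, 230.0]
lo, hi = min(fitted_values), max(fitted_values)

ref_min   = lo                     # this embodiment: light engine can stay
                                   # at maximum brightness during calibration
ref_50_50 = lo * 0.50 + hi * 0.50  # one predetermined alternative
ref_75_25 = lo * 0.75 + hi * 0.25  # another predetermined alternative
```

Any reference above the minimum requires headroom: regions whose fitted value falls below it can only be raised if the light engine's brightness is first reduced, as the text notes.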
  • FIG. 2 is a schematic flowchart of an exposure surface calibration method for an optical system provided by another embodiment of the present application; specifically, it may include steps S202 to S205.
  • A light guide film on the exposure surface receives the full-format white image projected or displayed by the optical system, that is, the projected light image. For an n-bit image, the full-scale white value is 2^n - 1; taking an 8-bit image as an example, a full-scale white image of gray level 255 within the 0-255 range is used.
  • The imaging surface of the full-white image, which is the printing surface of the printer, can be seen from the back of the light guide film.
  • The reference light source in this embodiment is the reference high-uniformity surface light source used in flat-field correction technology; that is, before S202, a further step S201 is included: performing flat-field correction on the camera module with the reference light source.
  • In this way, the calibration accuracy and calibration efficiency for the exposure surface of the optical system can be effectively improved.
  • Non-contact whole-surface measurement is realized through the photographing module, avoiding repeated multi-step operations and saving measurement time.
  • The calibration accuracy of this embodiment can also be adjusted to the actual situation without additional operation steps.
  • The irradiance value mentioned in this application can also be represented by another physical quantity, light intensity, and the two can be converted into each other.
  • A method for calibrating an exposure surface of an optical system includes:
  • using compensation parameters to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value over the exposure surface.
  • The image information may be grayscale, brightness, or a similar physical quantity.
  • Accordingly, the image information distribution image is a grayscale information distribution image, a brightness information distribution image, or the like.
  • The compensation parameter may be a compensation coefficient or a compensation value.
  • The mapped image information value is obtained by mapping the gray distribution within the grid, and can be obtained by fitting or interpolation. Fitting algorithms include, but are not limited to, least squares, polynomial fitting, and cubic spline fitting.
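One of the fitting options named above can be illustrated with a least-squares quadratic fit of per-grid mean grays along one axis of the exposure surface, solved via the normal equations. The sample values are invented; with the grid index centered on the middle cell, the odd power sums vanish and the system decouples.

```python
xs = [-2, -1, 0, 1, 2]                     # grid index, centered
ys = [201.0, 214.0, 220.0, 213.0, 200.0]   # per-grid mean gray values (invented)

n = len(xs)
s2 = sum(t * t for t in xs)                # sum of t^2
s4 = sum(t ** 4 for t in xs)               # sum of t^4
sy = sum(ys)
sty = sum(t * y for t, y in zip(xs, ys))
st2y = sum(t * t * y for t, y in zip(xs, ys))

# Normal equations for y = a*t^2 + b*t + c; with symmetric xs the odd sums
# are zero, so b separates from (a, c):
b = sty / s2
a = (n * st2y - s2 * sy) / (n * s4 - s2 * s2)
c = (sy - a * s2) / n

fitted = [a * t * t + b * t + c for t in xs]  # mapped gray value per grid
```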
  • A digital mask can be generated from the compensation parameters, and the digital mask is used to perform mask compensation on the projected image, so as to obtain a printed image with a uniform irradiance value over the exposure surface.
  • Before the step of acquiring the image information distribution image generated by the photographing module for the exposure surface of the optical system, the method further includes: performing flat-field correction on the photographing module with a reference light source.
  • Specifically, step S201 may include:
  • the reference light source projects an exposure surface with a uniform irradiance value according to a preset gray value;
  • the photographing module shoots the exposure surface of the reference light source to obtain a reference light source image;
  • flat-field correction is performed on the photographing module according to the grayscale correction coefficient of each pixel unit.
  • The gray value is set for the reference light source (i.e., the reference high-uniformity surface light source), so that it projects an exposure surface with a uniform irradiance value.
  • The reference light source is captured by the photographing module so that each pixel unit in the photosensitive chip receives light of the same energy at the same time.
  • The reference light source is captured globally by the photographing module; that is, the uniform surface light source is larger than the field of the photographing module's imaging surface, so that the photosensitive chip in the photographing module is fully covered through the imaging objective lens.
  • The preset grayscale value may be 2^n - 1; for an 8-bit image, the preset grayscale value is 255.
  • When the photographing module shoots a standard high-uniformity surface light source, the detected brightness at the center and the edge of the photosensitive chip differs, and the differences appear as different gray levels in the captured image.
  • Flat-field correction compares the grayscale values sensed by the photosensitive chip with the known reference high-uniformity surface light source. Taking an 8-bit image as an example, the grayscale value of the reference light source is set to 255, and the grayscale compensation value of each pixel of the photographing module can be obtained. After the module is corrected with these compensation values, shooting the reference high-uniformity surface light source yields the true distribution of the light source's uniformity, and the flat-field correction of the photographing module is complete.
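The per-pixel correction just described can be sketched directly: since the reference source is known to be uniform at gray 255, each pixel's coefficient is 255 over the gray that pixel actually recorded. The captured values below are invented (a mild vignetting pattern, center brighter than edges).

```python
# Gray levels a 3x3 "sensor" recorded while shooting the uniform reference.
captured = [
    [240, 248, 241],
    [247, 255, 246],
    [239, 249, 242],
]
coeff = [[255 / v for v in row] for row in captured]  # per-pixel gain

# Applying the gains to a later shot of the same uniform source recovers a
# flat 255 everywhere (up to rounding), confirming the correction.
corrected = [[round(captured[i][j] * coeff[i][j])
              for j in range(3)] for i in range(3)]
```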
  • The photographing module shooting the exposure surface of the reference light source to obtain the reference light source image includes:
  • dividing the imaging surface of the photographing module into several imaging sub-regions;
  • splicing a number of reference light source sub-images to obtain the reference light source image.
  • The imaging plane of the camera module is divided according to the exposure size of the reference light source, and the divided imaging sub-regions are then exposed by moving the reference light source, yielding the corresponding reference light source sub-images.
  • A surface light source with a smaller area and high uniformity is used as the reference light source, and its length and width are set to 1/p of the length and 1/q of the width of the image-taking surface, respectively. The image-taking surface is conjugate with the imaging surface; that is, for an image-taking surface divided into p×q regions, there are p×q one-to-one mapped regions on the imaging surface. Both p and q are integers.
  • The p×q regions are spliced region by region into a uniform imaging surface, and the flat-field correction of the photographing module is completed using this uniform imaging surface.
  • As shown in FIG. 6, which is conjugate to FIG. 5: place the reference light source in area 1, whose corresponding imaging area is 1'; move the reference light source to area 2, whose corresponding imaging area is 2'; and move the same reference light source in this way area by area.
  • The reference light source thus images p×q areas. After region-by-region splicing, a uniform imaging surface covering the photosensitive chip is generated on the imaging surface, and through this imaging surface the camera module can be flat-field corrected.
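The region-by-region splice can be sketched as follows. This is an illustration under assumptions: the small reference source is "moved" across p×q positions, each shot contributes one sub-image, and the sub-images are pasted into the full imaging surface; the sizes and the stand-in capture function are invented.

```python
p, q = 2, 3          # imaging surface split into p rows and q columns
sub_h, sub_w = 2, 2  # pixel size of each sub-image

def sub_image(r, c):
    # Stand-in for "shoot the reference light source at position (r, c)":
    # returns a uniform patch; a real capture would carry lens effects.
    return [[100 + 10 * (r * q + c)] * sub_w for _ in range(sub_h)]

# Paste each patch into its slot of the stitched imaging surface.
stitched = [[0] * (q * sub_w) for _ in range(p * sub_h)]
for r in range(p):
    for c in range(q):
        patch = sub_image(r, c)
        for i in range(sub_h):
            for j in range(sub_w):
                stitched[r * sub_h + i][c * sub_w + j] = patch[i][j]
```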
  • An optical system is an imaging system that can produce a clear image completely similar to an object.
  • A beam in which all rays, or their extensions, intersect at the same point is called a concentric beam.
  • After an incident concentric beam is transformed by an ideal optical system, the outgoing beam must also be a concentric beam.
  • The intersection points of the incident and outgoing concentric beams are called the object point and the image point, respectively.
  • An ideal optical system has the following properties: (1) all rays passing through the object point, after passing through the optical system, intersect at the image point, and vice versa; such an interchangeable pair of points is called a pair of conjugate points; (2) each straight line in the object space corresponds to a straight line in the image space, called its conjugate line, and the corresponding plane is called the conjugate plane; (3) for an object plane perpendicular to the optical axis, the conjugate plane is still perpendicular to the optical axis; (4) the lateral magnification is constant across such a pair of conjugate planes.
  • The method for calibrating the exposure surface of the optical system may further include steps S301 to S303.
  • The irradiance of the optical system can also be monitored to track the aging state of the optical system.
  • The hardware parameters for the projected image are first set; these may include the exposure of the 3D printer, the gain of the camera module, and the image processing conditions.
  • The exposure of the photographing module is fixed, and the fixed exposure keeps the grayscale peak of the projected light image below 2^n - 1 (255 for an 8-bit image).
  • Grayscale calibration of the projected light image is then carried out to obtain coordinate points relating the captured image grayscale to the irradiance value.
  • An irradiance measuring device can measure different irradiance values; the irradiance of the projected light image is adjusted, and the corresponding gray value is obtained through the photographing module for each setting.
  • The coordinate points can then be fitted from the adjustment results (that is, the different irradiance values and their corresponding gray values) to generate a relationship curve between gray value and irradiance value.
  • Once the photographing module reads the grayscale of the projected light image, the corresponding irradiance value can be determined from the relationship curve.
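The grayscale-to-irradiance calibration above can be sketched as a least-squares fit over a few measured pairs, after which irradiance is read from gray alone. A linear relationship and the sample pairs are assumptions for illustration; the patent does not fix the curve's form.

```python
# Measured (captured gray, irradiance) pairs, invented for illustration.
gray       = [40.0, 90.0, 140.0, 190.0, 240.0]
irradiance = [0.4, 0.9, 1.4, 1.9, 2.4]   # mW/cm^2 from a power meter

# Ordinary least-squares line through the coordinate points.
n = len(gray)
mean_g = sum(gray) / n
mean_e = sum(irradiance) / n
slope = (sum((g - mean_g) * (e - mean_e) for g, e in zip(gray, irradiance))
         / sum((g - mean_g) ** 2 for g in gray))
intercept = mean_e - slope * mean_g

def gray_to_irradiance(g):
    """Read irradiance off the fitted relationship curve."""
    return slope * g + intercept
```

With the curve in hand, routine monitoring needs only the camera: a drop in inferred irradiance at fixed drive settings signals aging of the light source.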
  • The exposure surface calibration method of the optical system may also include:
  • obtaining a second relationship between the radiation control parameters of the optical system and the radiation data; the second relationship is used to adjust the radiation data during the 3D printing process.
  • The irradiation control parameter is used to adjust the irradiation data (such as irradiation brightness or optical power); it may, for example, be a related parameter such as current, or it may be input brightness, input voltage, input electric power, and so on.
  • Image information reflects the characteristics of the image and can be expressed in matrix form, e.g., grayscale, brightness, or related parameters; here the grayscale may be the mean, median, or sum of the gray values of all pixels in a certain area.
  • The radiation data characterizes the electromagnetic-radiation-related information of the projected light image, for example irradiance, light intensity, illuminance, or optical power.
  • the maximum light projection area of the reference light source is not smaller than the light projection area of the optical system, so as to ensure that each segmented area can find a position corresponding to the mapping relationship.
  • the following method can be used to obtain the radiation control parameters of the optical system and the corresponding image information: input each preset current value to the optical system, and obtain the grayscale value of the light projection image emitted by the optical system under each preset current value, so as to obtain the grayscale value corresponding to each preset current value.
  • Obtaining the first relationship between the image information of the optical system and the radiation data may include: obtaining the third relationship between the image information of the reference light source and the image information of the optical system, and the fourth relationship between the image information of the reference light source and the radiation data of the reference light source; the first relationship is then obtained based on at least the third relationship and the fourth relationship. The third relationship covers the cases where the image information of the reference light source is consistent with, or deviates from, the image information of the optical system.
  • the third relationship can be obtained as follows: obtain the fifth relationship between the grayscale of the optical system and the radiation data, and the sixth relationship between the grayscale of the reference light source and the radiation data; based on the fifth relationship and the sixth relationship, obtain the third relationship between the grayscale of the reference light source and the grayscale of the optical system.
  • the fourth relationship may be in the form of a mapping table or a fitting curve, which is not specifically limited here. Taking the grayscale value as an example of the image information: by adjusting the irradiance data of the exposure surface of the reference light source and using the shooting module to obtain the corresponding grayscale value under each irradiance data, the shooting module can pair any irradiance value with its grayscale value, so the relationship between the grayscale value of the reference light source and the irradiation data can be generated. It should be noted that the above first relationship can be obtained not only based on the third relationship and the fourth relationship, but also further based on the relationship between the initial irradiation control parameters of the optical system and the irradiation data, or based on the relationships between other parameters of the optical system.
  • the image information of the reference light source and the image information of the optical system can be determined to be consistent. Therefore, based on the third relationship between the image information of the reference light source and the image information of the optical system, and the fourth relationship between the image information of the reference light source and the radiation data of the reference light source, the image information and radiation data of the optical system can be obtained.
  • the image information of the reference light source deviates from the image information of the optical system.
  • the deviation can be obtained in the following way: obtain the relationship between the grayscale of the optical system and the radiation data, and the relationship between the grayscale of the reference light source and the radiation data; based on these two relationships, the seventh relationship between the grayscale of the reference light source and the grayscale of the optical system can be obtained.
  • the first relationship can then be obtained based on the seventh relationship, the third relationship, and the fourth relationship.
  • the following takes the image information being grayscale and the radiation control parameter being current as an example for further description.
  • the image information of the reference light source and the image information of the optical system can be determined to be consistent
  • Each preset current value is input into the optical system, and the grayscale value of the light projection image emitted by the optical system under each preset current value is obtained, giving the grayscale value corresponding to each preset current value. The irradiance data of the exposure surface of the reference light source is adjusted, and the shooting module is used to obtain the corresponding grayscale value;
  • the irradiation data may be one of irradiation intensity, power, light intensity and other related values.
  • the shooting module can be used to obtain any irradiance data together with the grayscale value corresponding to that data, so that the eighth relationship between the grayscale value and the irradiation data can be generated.
  • the eighth relationship may be in the form of a mapping table or in the form of a fitting curve, which is not specifically limited here. Any fitting algorithm in the field can be used to fit the irradiation data and the corresponding gray value, such as any one of least squares method, polynomial fitting algorithm and cubic spline fitting algorithm.
  • Each preset current value is input to the optical system, and the preset current value can be determined according to the optical system.
  • the current value sent to the optical system can be controlled by the host computer, and the magnitude of the current affects the irradiance data and grayscale of the light projection image.
  • the optical system projects light projection images with different grayscale values under each preset current value.
  • by using the camera module for grayscale reading, the grayscale values of the light projection images projected by the optical system under each preset current value can be obtained.
  • the irradiation data corresponding to each preset current value can be obtained.
  • the ninth relationship may be in the form of a mapping table or in the form of a fitting curve, which is not specifically limited here.
  • each preset current value and the irradiation data corresponding to each preset current value may be determined according to the form of the ninth relationship.
  • the preset current value and the corresponding irradiation data may be fitted by a fitting algorithm to obtain a fitting curve relationship between the preset current value and the irradiation data.
  • the ninth relationship can be used for 3D printing, and the irradiation data of the optical system is calibrated at this time.
  • before adjusting the irradiance data of the exposure surface of the reference light source, the method further includes: limiting the exposure of the shooting module.
  • the radiation data required for the image of each layer may be different. Therefore, during the printing process, the irradiation data needs to be changed according to the requirements.
  • the irradiance data of the light projection image under the same current will change, so it is necessary to calibrate the irradiance data of the optical system.
  • an optical power meter is often used to manually collect radiation data point by point.
  • for large-format printing, it is often necessary to sample radiation data for a long time, which is not only inefficient but also requires a high-cost optical power meter.
  • the optical system can be automatically and quickly calibrated without the need for an optical power meter, and labor costs are also saved.
  • Each preset current value and the corresponding irradiation data are processed to obtain a thirteenth relationship between the current value of the optical system and the irradiation data; wherein, the thirteenth relationship is used for adjusting the irradiation data during the 3D printing process.
  • the irradiation data may be one of irradiation intensity, power and other related values.
  • the tenth relationship and the eleventh relationship can be pre-measured and stored in the storage unit, and can be called directly when needed, or can be obtained in a way such as the eighth relationship.
  • taking the tenth relationship as a fitting curve relationship as an example, the grayscale value and irradiance data of the optical system can be measured and stored in the storage unit before the optical system leaves the factory, and the grayscale value and irradiance data can then be fitted by the fitting algorithm to obtain the above tenth relationship.
  • the relationship between the grayscale value of the reference light source and the grayscale value of the optical system can be obtained.
  • the optical system projects light projection images with different grayscale values under each preset current value.
  • the grayscale values of the light projection images projected by the optical system under each preset current value can be obtained.
  • the optical system projects light projection images with different grayscale values under each preset current value, and the corresponding grayscale value in the optical system is obtained by using the camera module for grayscale reading.
  • the compensated gray value in the reference light source can be obtained, and the irradiation data corresponding to each preset current value can be further determined based on the eleventh relationship.
  • irradiation data corresponding to each preset current value can be obtained.
  • the above-mentioned thirteenth relationship can be obtained.
  • the tenth relationship, the eleventh relationship, the twelfth relationship and the thirteenth relationship can all be in the form of a mapping table or a relationship of a fitting curve.
  • the sixth relationship can be used for 3D printing.
  • the radiation data of the optical system is thereby obtained. The above method resolves the grayscale difference between the reference light source and the optical system under the same irradiation data by obtaining the grayscale relationship between the two, and can further improve the calibration accuracy of the irradiation data.
  • steps S301 and S302 can be performed during flat field correction.
  • step S202 may include:
  • the imaging plane is photographed by the photographing module after flat-field correction; when the grayscale value of the photographed imaging plane as a whole is below the maximum grayscale (if the grayscale image is displayed in the range 0 to 255, the maximum grayscale value is 255), the real uniformity distribution of the imaging surface can be obtained, that is, the corresponding pixel-level grayscale distribution image.
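The per-pixel flat-field correction behind this step can be sketched numerically: the reference light source projects a uniform surface at a preset grayscale, each pixel's actual output is compared against that preset value to get a correction coefficient, and the coefficients are then applied to later shots. The sensor size, preset value, and raw readings below are illustrative assumptions, not values from the patent.

```python
import numpy as np

PRESET_GRAY = 200.0  # uniform grayscale projected by the reference light source

# Hypothetical raw response of a 4x4 sensor region to the uniform surface:
# vignetting makes edge pixels read darker than center pixels.
raw_reference = np.array([
    [180., 190., 190., 178.],
    [192., 200., 199., 188.],
    [191., 201., 200., 189.],
    [179., 189., 190., 177.],
])

# Per-pixel correction coefficient: preset grayscale over measured grayscale.
coeff = PRESET_GRAY / raw_reference

# Applying the coefficients to the reference shot flattens it exactly;
# applying them to subsequent shots removes the fixed-pattern non-uniformity.
corrected = raw_reference * coeff
```

After correction, every pixel of the reference shot reads the preset value, which is the point of the flat-field step: remaining non-uniformity in later images reflects the optical system, not the camera.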
  • step S202 may include:
  • obtaining the grayscale distribution image generated by photographing the exposure surface of the optical system with the shooting module includes:
  • the first grayscale distribution image is obtained by shooting the first image projected to the optical system;
  • the second grayscale distribution image is obtained by shooting the second image projected to the optical system;
  • the first grayscale area in the first image corresponds to the second grayscale area in the second image;
  • the second grayscale area in the first image corresponds to the first grayscale area in the second image;
  • the first gray scale area and the second gray scale area correspond on the gray scale.
  • the first grayscale area can be a white area (that is, grayscale 255);
  • the second grayscale area can be a black area (that is, grayscale 0).
  • the white area in the first image corresponds to the black area in the second image, that is, if any position in the first image is a white area, then the corresponding position in the second image is a black area.
  • the black area in the first image corresponds to the white area in the second image, that is, if any position in the first image is a black area, then the corresponding position in the second image is a white area.
  • the first image is captured by the photographing module to obtain a first grayscale distribution image
  • the second image is photographed by the photographing module to obtain a second grayscale distribution image.
  • the grayscale distribution image generated by photographing the exposure surface of the optical system can be obtained by superimposing the first grayscale distribution image and the second grayscale distribution image.
  • the above method obtains the first grayscale distribution image and the second grayscale distribution image through two projections, which can reduce the grayscale difference between the center and the edge, and improve the accuracy of the grayscale value in the grayscale distribution image.
  • the first grayscale area in the first image is spaced from the second grayscale area in the first image; the first grayscale area in the second image is spaced from the second grayscale area in the second image; the first grayscale area in the first image is circular or square; the first grayscale area in the second image is circular or square; that is, the first image and the second image can be checkerboard patterns or evenly distributed dot charts.
  • taking the first image and the second image as dot diagrams as an example, the first image may include several light projection areas, which may be squares of the same size. A dot is set in each light projection area, and the diameter of the dot can be preset.
  • in the first image, no white dot exists in the areas adjacent to any light projection area.
  • in the second image, there is no white dot in any of the light projection areas, but white dots exist in the areas adjacent to each light projection area.
  • when the first image and the second image are superimposed, the resulting image has a white dot in every light projection area.
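The two-projection idea above — complementary patterns whose captures are superimposed so every region is sampled once — can be sketched with a checkerboard variant. The grid size and the attenuation factor are hypothetical; the point is only that the two projected patterns are exact complements and their superposition covers the whole exposure surface.

```python
import numpy as np

m, n = 6, 8  # grid of light projection areas (illustrative size)

# Complementary checkerboard patterns: white (255) areas in the first image
# are black (0) in the second image, and vice versa.
rows, cols = np.indices((m, n))
first_image = np.where((rows + cols) % 2 == 0, 255, 0)
second_image = 255 - first_image

# Simulated captured grayscale distributions: only the white areas of each
# projected image produce a reading; black areas read 0. The attenuation
# factor stands in for the overall optical/camera response.
capture_gain = 0.8
first_capture = first_image * capture_gain
second_capture = second_image * capture_gain

# Superimposing the two captures yields one reading for every area.
combined = first_capture + second_capture
```

Because each area is white in exactly one of the two projections, the combined image has no gaps, which is what lets the method reduce the center-to-edge grayscale difference of a single full-white projection.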
  • the lens of the shooting module is also provided with a filter for filtering the influence of ambient light on the shooting module.
  • a fitting algorithm is used to calculate the fitting gray value of each segmented area, and the fitting algorithm is a least square method, a polynomial fitting algorithm or a cubic spline fitting algorithm.
  • a fitting algorithm when using a fitting algorithm to calculate the fitting gray value of each segmented area, a least square method, a polynomial fitting algorithm, a cubic spline fitting algorithm, or other fitting algorithms may be used.
  • when the grayscale distribution image is divided into a grid image including a plurality of divided regions, it can specifically be divided into m rows and n columns to form an m*n grid distribution.
  • step S204 may include: steps S401-S403.
  • the minimum grayscale fitting value in the grid image is selected as a benchmark, and compared with other fitting grayscale values, the grayscale compensation coefficient can be obtained.
  • the gray scale compensation coefficients in the grid image can form a digital mask, after which each projection image is compensated by the digital mask, and a printed image with uniform irradiance value on the exposure surface can be obtained.
  • the fitted gray value of the first grid in the grid image is set to P 11 , and so on, with the fitted gray value of the last grid being P mn , forming a grayscale array with m*n items.
  • the gray value of the required image is the gray value of the image to be projected by the user. When the gray value of the required image is set to a, multiply a by the ratio matrix to obtain the digital mask.
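The digital-mask construction described above can be sketched directly: take the minimum fitted gray value P_min as the benchmark, form the ratio matrix of normalized compensation coefficients, and multiply by the required image gray value a. The 3x4 grid and its values are illustrative assumptions.

```python
import numpy as np

# Fitted gray values P11..Pmn of a 3x4 grid image (illustrative numbers).
P = np.array([
    [210., 220., 222., 208.],
    [218., 230., 231., 216.],
    [206., 219., 221., 205.],
])

# The minimum fitted gray value is the benchmark; the ratio matrix holds the
# normalized grayscale compensation coefficients (each entry <= 1.0, so
# brighter regions are attenuated more).
P_min = P.min()
ratio = P_min / P

# For a required image gray value a, the digital mask is a times the ratios.
a = 255.0
digital_mask = a * ratio
```

Note that P * ratio equals P_min everywhere: after mask compensation every divided region is pulled down to the benchmark brightness, which is exactly the uniform-irradiance goal of the calibration.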
  • a calibration measurement method for 3D printing is also provided.
  • the calibration measurement method also includes multiple measurement steps. By measuring multiple parameters of the optical system, it can be judged whether the parameters of the optical system meet the requirements during the 3D printing process, so that a clear image that is completely similar to the object can be produced, thereby achieving more accurate and efficient 3D printing.
  • FIG. 7 is a schematic flowchart of a calibration measurement method for 3D printing provided by an embodiment of the present application.
  • the calibration measurement method for 3D printing may include: calibrating the exposure surface of the optical system using the exposure surface calibration method 100 according to some implementations of the present invention; and steps S701 to S705 of the calibration measurement method as follows.
  • the exposure surface calibration method 100 of an optical system according to some embodiments of the present invention is described in detail with reference to FIG. 1 to FIG.
  • the camera module takes a picture of the resolution test image displayed by the exposure system, uses the CTF and MTF image algorithms to calculate the value of the required spatial resolution, and determines the sharpness of the exposure system.
  • for an exposure lens with an electric focus system, it can also provide focus adjustment feedback.
  • the camera module uses the above steps to extract the grayscale distribution of the exposure surface; based on the continuous distribution of grayscales, if there is a sudden change in the grayscale of an area and the grayscale is lower than the preset threshold, it is determined that the area is dirty;
  • the camera module is calibrated at different heights. According to the principle of constant magnification at different object distances, the size and relative distribution of various images on the exposure surface can be tested.
  • steps S701 to S705 are only exemplary, and these steps may be performed in different orders to achieve measurement of various parameters, which is not limited in the present application.
  • measuring the static contrast and dynamic contrast of the optical system specifically includes the following steps: acquiring the first irradiance value corresponding to the white image and the second irradiance value corresponding to the black image; wherein both the white image and the black image are projected by the calibrated optical system;
  • the irradiance value of each area is processed by ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
  • the static contrast can be measured based on the exposure surface calibration technique according to the present application, which specifically includes the following steps. First, use the light guide film on the exposure surface to receive the image of the full-scale white image projected or displayed by the optical system, that is, the projected image; for an n-bit image, a full-scale white image of grayscale 2^n − 1 is used. Taking an 8-bit image as an example, a pure white image of grayscale 255 within the 0-255 range can be set. The irradiance value of the above pure white image is then measured by the irradiance value measuring device of the optical system, so as to obtain the first irradiance value.
  • the light guide film on the exposure surface uses the light guide film on the exposure surface to receive the image of the full-frame pure black image projected or displayed by the optical system, that is, the projected image, and set to use a 0-level pure black image.
  • the irradiance value of the above-mentioned pure black image is measured by using the irradiance value measuring device of the optical system, so as to obtain the second irradiance value.
  • the static contrast ratio of the optical system is calculated by the contrast measuring device according to the ratio of the first irradiance value to the second irradiance value.
  • the dynamic contrast can be measured based on the exposure surface calibration technique according to the present application, which specifically includes the following steps. First, use the light guide film to receive the optical system to project or display the checkerboard pattern on the exposure surface, and use the irradiance value measuring device of the optical system to measure the irradiance value at each point of the above-mentioned checkerboard pattern in sequence, so that Get the irradiance value at each point in the checkerboard. Then, the value of the dynamic contrast of the optical system is calculated by using the ANSI contrast calculation method through the contrast measurement device. In this embodiment, the measurement of the irradiance value adopts the way of machine vision.
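The ANSI dynamic-contrast computation on the checkerboard can be sketched as follows: measure the irradiance in each rectangle, then take the ratio of the mean white-rectangle irradiance to the mean black-rectangle irradiance. The 4x4 board and its irradiance values are illustrative assumptions; static contrast would instead be the simple ratio of the full-white to full-black irradiance values.

```python
import numpy as np

# Irradiance values measured at the 16 rectangles of a 4x4 ANSI checkerboard
# (illustrative numbers; same units for all entries, e.g. mW/cm^2).
board = np.array([
    [4.0, 0.02, 4.1, 0.03],
    [0.02, 3.9, 0.02, 4.0],
    [4.2, 0.03, 4.0, 0.02],
    [0.03, 4.1, 0.02, 3.9],
])

rows, cols = np.indices(board.shape)
white = board[(rows + cols) % 2 == 0]   # the 8 white rectangles
black = board[(rows + cols) % 2 == 1]   # the 8 black rectangles

# ANSI (dynamic) contrast: mean white irradiance over mean black irradiance.
ansi_contrast = white.mean() / black.mean()
```

With these sample values the dynamic contrast comes out in the neighborhood of 170:1, illustrating why ANSI contrast is typically much lower than the static full-field ratio: light from the white rectangles scatters into the black ones.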
  • the irradiance value is obtained based on the exposure surface calibration technology provided in this application, which specifically includes the following steps. First, the grayscale calibration of the projected light image is carried out under a fixed exposure to obtain the coordinate points relating image grayscale to irradiance value. After the output irradiance value of the irradiation equipment is changed, the irradiance value measuring device is used to measure the different irradiance values, and the imaging module is used to obtain the corresponding grayscale value of the image. After multiple measurements at different irradiance values, the coordinate points are fitted, and a grayscale/irradiance curve is generated from the fitted points. Therefore, under the same exposure, the imaging module reads the grayscale of the projected light image, and the grayscale value can be converted into the corresponding irradiance value.
  • measuring the clarity of the projected image specifically includes the following steps: controlling the calibrated optical system to project an image to a preset position on the projected light surface, where the image includes at least one line in the sagittal direction and at least one line in the meridional direction; obtaining the actual grayscale distribution curve of the projected image, and confirming the CTF value or MTF value corresponding to each preset position according to the actual grayscale distribution curve and the preset grayscale distribution curve; and determining the sharpness of the optical system according to the CTF value or MTF value corresponding to each preset position.
  • the preset positions may include a center position and four corner positions of the light projection surface.
  • CTF: Contrast Transfer Function
  • MTF: Modulation Transfer Function
  • determining the clarity of the optical system includes: if any CTF value is less than the first set value, or any MTF value is less than the second set value, it is determined that the clarity of the optical system is unqualified. The sharpness can be judged according to the CTF value: if the calculated CTF value is less than the set value, the point is judged to be unclear. The sharpness can also be judged according to the MTF value: if the calculated MTF value is less than the set value, the point is judged to be unclear. If there are unclear points, the clarity of the lens of the optomechanical device is considered poor.
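The CTF computation and threshold judgment above can be sketched from a grayscale profile sampled across a projected line-pair pattern: CTF is the Michelson-style ratio (Imax − Imin)/(Imax + Imin) at that spatial frequency, compared against a set value. The profile values and the threshold are illustrative assumptions.

```python
import numpy as np

# Grayscale profile sampled across a projected line-pair pattern
# (illustrative values); a perfectly sharp system would swing 0-255.
profile = np.array([12., 15., 230., 235., 228., 14., 11., 232., 236., 13.])

I_max = profile.max()
I_min = profile.min()

# Contrast transfer function at this position / spatial frequency.
ctf = (I_max - I_min) / (I_max + I_min)

# Sharpness judgment against a hypothetical first set value.
CTF_THRESHOLD = 0.5
is_sharp = ctf >= CTF_THRESHOLD
```

In a full measurement this check would be repeated at each preset position (center and corners), and the optical system would be judged unqualified if any position falls below the threshold.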
  • the distance between the lines in the sagittal direction may be N pixels, which is used to determine whether the meridian direction is blurred.
  • the distance between the lines in the meridional direction can be N pixels, etc., and is used to determine whether the sagittal direction is blurred.
  • the width of the line may be N pixels, where N is a positive integer.
  • in step S703, detecting whether the optomechanical equipment is dirty specifically includes: if the value of any point on the actual grayscale distribution curve is lower than the lower limit value, and/or the actual grayscale distribution curve has a sudden change, it is determined that dirt is present on the optical system.
  • the optical system of the present application performs dirt detection through the following steps.
  • the grayscale distribution of the optomechanical device generally varies continuously across the frame. Dirt is considered present if the actual grayscale is below the lower limit value and/or if there is a sudden change in the actual grayscale distribution curve of the optomechanical device.
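The two dirt criteria — a value below the lower limit, or a sudden change in the otherwise continuous grayscale curve — can be sketched on a one-dimensional profile. The limit, jump threshold, and profile values are illustrative assumptions; a dip around one position simulates a dirty spot.

```python
import numpy as np

LOWER_LIMIT = 80.0     # hypothetical lower bound for a clean exposure surface
JUMP_THRESHOLD = 40.0  # hypothetical bound on an allowed gray-level step

# Grayscale profile along one row of the exposure surface; the dip at
# indices 5-6 simulates a dirty spot on an otherwise smooth distribution.
profile = np.array([180., 178., 176., 175., 174., 70., 72., 173., 172., 171.])

below_limit = profile < LOWER_LIMIT               # criterion 1
sudden_change = np.abs(np.diff(profile)) > JUMP_THRESHOLD  # criterion 2

# Dirt is flagged if either criterion fires anywhere on the curve.
dirty = bool(below_limit.any() or sudden_change.any())
```

On a real exposure surface the same test would run over the full two-dimensional pixel-level grayscale distribution extracted in the earlier calibration steps, and the flagged indices localize the dirty area.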
  • measuring the size of the photographed object specifically includes: calibrating the size corresponding to each pixel on the photographing surface of the camera module, and determining the size of the photographed object according to the number of pixels occupied by the side length of the photographed object; and/or,
  • the dimension measuring device of the optical system of the present application performs dimension measurement by the following method.
  • the size corresponding to each pixel on the shooting surface of the camera is pre-calibrated; then, according to the number of pixels occupied by the side length of the photographed object in the camera, the size measuring device of the optical system is used to determine the size of the object.
  • the size of the photographing surface of the camera is obtained, and the size of the object is determined using a dimension measuring device of the optical system according to the ratio of the side length of the photographed object to the photographing surface.
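The first size-measurement variant above reduces to one multiplication once the per-pixel scale is calibrated. The scale factor and pixel count below are illustrative assumptions.

```python
# Hypothetical calibration result: each pixel on the camera's imaging
# surface corresponds to 0.05 mm on the exposure surface.
MM_PER_PIXEL = 0.05

def object_side_length_mm(pixel_count: int) -> float:
    """Size of the photographed object from the pixels its side occupies."""
    return pixel_count * MM_PER_PIXEL

# An edge spanning 400 pixels corresponds to a 20 mm side length.
length = object_side_length_mm(400)
```

The second variant works the same way but scales by the ratio of the object's side length to the full photographing surface instead of a per-pixel calibration.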
  • the optical system of the present application needs to use different camera modules in different detection items.
  • the optical system of the present application can select a corresponding camera module for different detection items and execute a corresponding detection process, thereby realizing automatic detection. That is, the current detection item is determined, and the above steps S701-S704 are executed after selecting the corresponding camera module according to the current detection item.
  • FIG. 8 is a schematic block diagram of an exposure surface calibration device 800 of an optical system provided in this embodiment, and the device 800 includes:
  • the image acquisition unit 802 is configured to acquire the grayscale distribution image generated by the photographing module photographing the exposure surface of the optical system;
  • the fitting unit 803 is configured to divide the grayscale distribution image into a grid image comprising a plurality of divided regions, and calculate the fitted grayscale value of each divided region;
  • the selection unit 804 is configured to select the minimum fitting gray value from all the calculated fitting gray values as the reference gray value, and calculate gray compensation coefficients corresponding to other segmented regions according to the reference gray value, to generate a digital mask;
  • the mask compensation unit 805 is configured to use a digital mask to perform mask compensation on the projected light image emitted by the optical system, so as to obtain a printed image with a uniform irradiance value on the exposure surface.
  • the device 800 further includes a flat-field correction unit 801 configured to perform flat-field correction on the camera module by using a reference light source.
  • the flat field correction unit 801 may include:
  • the projection unit is configured for the reference light source to project an exposure surface with uniform irradiance value according to the preset gray value;
  • the exposure surface photographing unit is configured to be used for the photographing module to photograph the exposure surface of the reference light source to obtain an image of the reference light source;
  • the data acquisition unit is configured to acquire the grayscale output value of each pixel unit in the photosensitive chip according to the reference light source image, compare the preset grayscale value of the reference light source with the grayscale output value of each pixel unit, and obtain the grayscale correction coefficient of each pixel unit;
  • the coefficient correction unit is configured to perform flat-field correction on the shooting module according to the gray scale correction coefficient of each pixel unit.
  • the exposure surface shooting unit may include:
  • the imaging surface division unit is configured to divide the imaging surface of the shooting module into several imaging sub-regions based on the size of the exposure surface of the reference light source;
  • the mobile projection unit is configured to move the reference light source, and the reference light source respectively projects on each imaging sub-area to obtain several reference light source sub-images corresponding to each imaging sub-area;
  • the image stitching unit is configured to stitch several reference light source sub-images to obtain a reference light source image.
  • the exposure surface calibration device 800 of the optical system further includes:
  • a limiting unit 901 configured to limit the exposure of the camera module
  • the curve generation unit 902 is configured to adjust the irradiance value of the exposure surface of the reference light source, and use the shooting module to obtain the corresponding gray value, so as to fit and generate the relationship curve between the gray value and the irradiance value;
  • the grayscale reading unit 903 is configured to use the photographing module to read the grayscale of the projected light image emitted by the optical system based on the relationship curve to obtain the corresponding irradiance value.
  • the image acquisition unit 802 may include:
  • the adjusting unit is configured to adjust the exposure of the shooting module so that the captured grayscale distribution image is below the maximum grayscale.
  • a fitting algorithm is used to calculate the fitting gray value of each segmented area, and the fitting algorithm is a least square method, a polynomial fitting algorithm or a cubic spline fitting algorithm.
  • the selecting unit 804 may include:
  • the area labeling unit 1001 is configured to label all the fitted gray values as P 11 , P 12 , ..., P mn in sequence according to the order of the corresponding segmented areas, to obtain a grayscale array with m*n items;
  • the calculation unit 1002 is configured to select the minimum value P min in the grayscale array as the minimum fitted grayscale value, and calculate the normalized ratio between the minimum fitted grayscale value and the other data in the grayscale array to obtain a ratio matrix;
  • the multiplication unit 1003 is configured to use the ratio contained in the ratio matrix as the corresponding grayscale compensation coefficient, and then multiply the grayscale value of the preset image by each ratio in the ratio matrix to obtain a corresponding digital mask.
  • the optical system of the present application is an imaging system capable of producing a clear image exactly like an object.
  • the optical system of the present application may also include one or more additional devices to perform multiple measurement steps. By measuring multiple parameters of the optical system, it can be judged whether each parameter of the optical system meets the requirements during the 3D printing process, making it possible to produce a clear image that is completely similar to the object, thereby achieving more accurate and efficient 3D printing.
  • FIG. 11 is a schematic block diagram of an optical system provided by an embodiment of the present application.
  • the optical system 1100 also includes:
  • a contrast measuring device 1101 configured to measure the static contrast and dynamic contrast of the optical system
  • sharpness measurement device 1102 configured to measure the sharpness of the projected image
  • the dirt measuring device 1103 is configured to detect whether there is dirt on the optical mechanical equipment
  • the size measuring device 1104 is configured to measure the size of the photographed object.
  • the optical system of the present application can measure the static contrast and dynamic contrast of the optical system through the contrast measurement device 1101 .
  • the contrast measurement device 1101 may be configured to obtain a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; to obtain the static contrast of the optical system from the first irradiance value and the second irradiance value; to obtain the irradiance value of each region in a checkerboard chart, where the checkerboard chart is projected by the calibrated optical system; and to process the irradiance values of the regions using the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
  • the optical system of the present application further includes a sharpness measurement device 1102, and the sharpness measurement device 1102 is used to measure the sharpness of the optical system.
  • the sharpness measurement device 1102 may be configured to control the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction; to acquire the actual grayscale distribution curve of the projected image and determine the CTF value corresponding to each preset position from the actual grayscale distribution curve and a preset grayscale distribution curve; and to determine the sharpness of the optical system from the CTF values corresponding to the preset positions. It is also configured to determine that the sharpness of the optical system is unqualified if any CTF value is smaller than a set value.
  • the optical system of the present application further includes a dirt measuring device 1103, which can detect whether there is dirt in the optical mechanical equipment.
  • the dirt measuring device 1103 is configured to determine that there is dirt on the optical system when the value at any point on the actual grayscale distribution curve is lower than a lower limit, and/or the actual grayscale distribution curve contains an abrupt change.
  • the optical system of the present application further includes a size measuring device 1104, which is used to measure the size of the object, so as to realize more accurate 3D printing.
  • the size measuring device 1104 is configured to calibrate the size corresponding to each pixel on the shooting surface of the camera module and determine the size of the photographed object from the number of pixels occupied by the object's edge length; and/or to obtain the size of the shooting surface in the camera module and determine the size of the photographed object from the ratio of the object's edge length to the shooting surface.
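Both size-measurement routes reduce to a one-line scaling. A minimal sketch (not from the patent; function names and the sample pixel pitch are hypothetical):

```python
def object_size_from_pixels(pixel_size_mm: float, edge_pixels: int) -> float:
    # Route 1: each pixel on the shooting surface has a pre-calibrated
    # physical size; edge length = pixels occupied * size per pixel.
    return pixel_size_mm * edge_pixels

def object_size_from_ratio(surface_mm: float, edge_fraction: float) -> float:
    # Route 2: edge length as a known fraction of the shooting surface size.
    return surface_mm * edge_fraction
```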
  • the optical system of the present application uses the contrast measurement device 1101 , the sharpness measurement device 1102 , the dirt measurement device 1103 and/or the size measurement device 1104 to perform different detection items.
  • Different detection items require different camera modules.
  • the optical system of the present application can select a corresponding camera module for different detection items and execute a corresponding detection process, thereby realizing automatic detection.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the steps provided in the above embodiments can be implemented.
  • the storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the embodiment of the present application also provides a computer device, which may include a memory and a processor.
  • a computer program is stored in the memory.
  • when the processor invokes the computer program in the memory, the steps provided in the above embodiments can be implemented.
  • the computer device may also include components such as network interfaces and a power supply.
  • the optical system of the present application is preferably the optical system of a 3D printer, including DLP (Digital Light Processing) optical-mechanical equipment, optical-mechanical equipment comprising an LCD (Liquid Crystal Display), optical-mechanical equipment comprising LCOS (Liquid Crystal on Silicon), or optical-mechanical equipment comprising OLED (Organic Light-Emitting Diode), Micro-LED, Mini-LED, liquid crystal projection, and the like.
  • the application provides an exposure surface calibration method and apparatus for an optical system, a computer device, and a storage medium.
  • the method includes: performing flat-field correction on the shooting module using a reference light source; acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented areas and calculating a fitted grayscale value for each segmented area; selecting the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value and calculating the grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask; and performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
  • this can improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
  • the present application also discloses a calibration measurement method for 3D printing.
  • by measuring multiple parameters of the optical system, the calibration measurement method for 3D printing can determine whether each parameter of the optical system meets the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby achieving more accurate and efficient 3D printing.
  • the exposure surface calibration method, calibration measurement method, device, computer equipment and storage medium of the optical system of the present application are reproducible and can be used in various industrial applications.
  • the exposure surface calibration method, calibration measurement method, device, computer equipment and storage medium of the optical system of the present application can be used in the technical field of optical systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manufacturing & Machinery (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a method and apparatus for calibrating the exposure surface of an optical system, a computer device, and a storage medium. The method includes: performing flat-field correction on a shooting module using a reference light source; acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented areas, and calculating a fitted grayscale value for each segmented area; selecting the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value, and calculating the grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask; and performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface. By acquiring the pixel-level distribution of the exposure surface and applying the corresponding grayscale compensation, the present application can improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system. The present application also discloses a calibration measurement method for 3D printing. By measuring multiple parameters of the optical system, the calibration measurement method for 3D printing can determine whether each parameter of the optical system meets the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby achieving more accurate and efficient 3D printing.

Description

Method for Calibrating the Exposure Surface of an Optical System, Calibration Measurement Method, Apparatus, Computer Device, and Storage Medium
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202110832043.6, entitled "Method and Apparatus for Calibrating the Exposure Surface of an Optical System, Computer Device, and Storage Medium", filed with the China National Intellectual Property Administration on July 22, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of optical systems, and in particular to a method for calibrating the exposure surface of an optical system, a calibration measurement method, an apparatus, a computer device, and a storage medium.
Background
In the related art, the surface-printing process in the stereolithography field of 3D printing is usually implemented with DLP and LCD technology. A DLP stereolithography printer uses a high-resolution DLP device and an ultraviolet light source to project the cross-sections of a three-dimensional model onto the build platform, photocuring the liquid photopolymer layer by layer. An LCD stereolithography printer is similar, except that an LCD display replaces the DLP projection system and displays the cross-section directly on the build platform. Both DLP and LCD are printing-surface exposure technologies, and 3D printing requires the irradiance at every location within the exposure surface to be consistent. If the irradiance differences within the exposure surface are too large, the surface of the printed product will show vertical streaks; in severe cases the print will detach and fall off the platform, causing the print to fail.
The commonly used light-source calibration technique divides the full format of the print area into a number of measurement points, measures the irradiance at each point with an optical power meter, and obtains the irradiance distribution data of the points. The grayscale compensation value of each point is then back-calculated from the data distribution, and the region of each point is projected or displayed with the grayscale computed from the compensation value, so as to calibrate the uniformity of the entire exposure surface.
However, this method has considerable limitations. For example, the irradiance measured at a point after dividing the format only describes that point; it is a discrete point value. Using discrete point values in place of the regional distribution of the corresponding divided surfaces, i.e., characterizing the in-plane uniformity with discrete points, replaces the distribution between two points by the two points themselves, reducing the accuracy of the uniformity correction. Accuracy is usually improved by increasing the number of measurement points, but this increases the detection time and difficulty. Moreover, with long-term use of the printing equipment, wear and replacement of components in the optical system change the grayscale compensation values obtained at the previous calibration, so the uniformity of the print area must be corrected again, consuming considerable labor and time.
Summary
The present application provides an exposure surface calibration method and apparatus for an optical system, a computer device, and a storage medium, aiming to improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
In a first aspect, the present application provides an exposure surface calibration method for an optical system, which may include:
performing flat-field correction on a shooting module using a reference light source;
acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system;
dividing the grayscale distribution image into a grid image containing multiple segmented areas, and calculating a fitted grayscale value for each segmented area;
selecting a predetermined fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value, and calculating grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
In a second aspect, the present application provides a calibration measurement method for 3D printing, which may include: calibrating the exposure surface of an optical system using the above exposure surface calibration method; acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; obtaining the static contrast of the optical system from the first irradiance value and the second irradiance value; acquiring the irradiance value of each region in a checkerboard chart, where the checkerboard chart is projected by the calibrated optical system; and processing the irradiance values of the regions with the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
In a third aspect, the present application provides an exposure surface calibration method for an optical system, which may include:
acquiring an image-information distribution image generated by a shooting module photographing the exposure surface of the optical system;
segmenting the image-information distribution image, and calculating a mapped image-information value for each segmented area;
selecting a reference mapped image-information value from the mapped image-information values, and calculating compensation parameters corresponding to the other segmented areas according to the reference mapped image-information value; the compensation parameters are used to perform mask compensation on the projected image emitted by the optical system, to obtain a printed image with uniform irradiance values on the exposure surface.
In a fourth aspect, the present application provides an exposure surface calibration apparatus for an optical system, which may include:
an image acquisition unit configured to acquire a grayscale distribution image generated by a shooting module photographing the exposure surface of the optical system;
a fitting unit configured to divide the grayscale distribution image into a grid image containing multiple segmented areas and calculate a fitted grayscale value for each segmented area;
a selection unit configured to select the minimum fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value and calculate grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
a mask compensation unit configured to perform mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
In a fifth aspect, the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the above exposure surface calibration method for an optical system and the above calibration measurement method for 3D printing are implemented.
In a sixth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above exposure surface calibration method for an optical system and the above calibration measurement method for 3D printing are implemented.
In a seventh aspect, the present application provides a computer program product including a computer program; when the computer program is executed by a processor, the above exposure surface calibration method for an optical system and the above calibration measurement method for 3D printing are implemented.
The present application provides an exposure surface calibration method and apparatus for an optical system, a computer device, and a storage medium. The method includes: performing flat-field correction on a shooting module using a reference light source; acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented areas and calculating a fitted grayscale value for each segmented area; selecting the minimum fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value and calculating grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask; and performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface. By acquiring the full pixel-level distribution of the irradiance values of the exposure surface, converting the low-density discrete distribution, and then applying the corresponding grayscale compensation, the present application can effectively improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an exposure surface calibration method for an optical system provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of an exposure surface calibration method for an optical system provided by another embodiment of the present application;
FIG. 3 is a schematic sub-flowchart of an exposure surface calibration method for an optical system provided by an embodiment of the present application;
FIG. 4 is another schematic flowchart of an exposure surface calibration method for an optical system provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an example of an exposure surface calibration method for an optical system provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of another example of an exposure surface calibration method for an optical system provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of a calibration measurement method for 3D printing provided by an embodiment of the present application;
FIG. 8 is a schematic block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
FIG. 9 is a schematic sub-block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
FIG. 10 is another schematic block diagram of an exposure surface calibration apparatus for an optical system provided by an embodiment of the present application;
FIG. 11 is a schematic block diagram of an optical system provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be understood that, when used in this specification and the appended claims, the terms "comprise" and "include" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in the specification of the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. As used in the specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to FIG. 1, which is a schematic flowchart of an exposure surface calibration method for an optical system provided by an embodiment of the present application, the method specifically includes steps S101 to S105.
S101: perform flat-field correction on a shooting module using a reference light source;
S102: acquire a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system;
S103: divide the grayscale distribution image into a grid image containing multiple segmented areas, and calculate a fitted grayscale value for each segmented area;
S104: select a predetermined fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value, and calculate grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
S105: perform mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
It should be understood that a fitted grayscale value may refer to a value obtained by mapping the grayscale distribution within an area. A predetermined fitted grayscale value is selected among all calculated fitted grayscale values as the reference grayscale value; the predetermined fitted grayscale value can be adjusted according to practical experience with the optical system and, as needed, may be any of the calculated fitted grayscale values. In some embodiments, the predetermined fitted grayscale value may equal the minimum fitted grayscale value × 50% + the maximum fitted grayscale value × 50%. In other embodiments, the predetermined fitted grayscale value may equal the minimum fitted grayscale value × 75% + the maximum fitted grayscale value × 25%. It should be noted that other values may also be used as the reference grayscale value.
In some embodiments, the minimum fitted grayscale value may be selected as the reference grayscale value. When the minimum fitted grayscale value is selected as the reference grayscale value, the light engine projects at maximum brightness during light-uniformity calibration. When the reference grayscale value is not the minimum fitted grayscale value, calibration must be performed with the brightness of the light engine reduced, so that the grayscale of points below the reference value can also be raised, achieving uniform grayscale calibration of the exposure surface.
An embodiment in which the minimum fitted grayscale value is selected as the reference grayscale value is described in detail below with reference to FIG. 2, a schematic flowchart of an exposure surface calibration method for an optical system provided by another embodiment of the present application, which may specifically include steps S202 to S205.
S202: acquire a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system;
S203: divide the grayscale distribution image into a grid image containing multiple segmented areas, and calculate a fitted grayscale value for each segmented area;
S204: select the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value, and calculate grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
S205: perform mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
In this embodiment, a light guide film on the exposure surface first receives the image of the full-format white image projected or displayed by the optical system, i.e., the projected image, set to the full-scale white image of 2^n − 1 in an n-bit image; taking an 8-bit image as an example, the full-scale white image at level 255 of the 0–255 grayscale can be used. The imaging surface on which the all-white image is visible from the back of the light guide film is the printing surface of the printer. A shooting module is then placed directly above the light guide film to photograph the imaging surface, obtaining the image-capture surface; the shooting module converts the irradiance values on the exposure surface of the optical system into different grayscales, yielding the corresponding grayscale distribution image. The grayscale distribution image is then segmented and fitted grayscale values are calculated, from which the minimum fitted grayscale value and the grayscale compensation coefficients are obtained, thereby generating the digital mask. Compensating the projected image through the digital mask achieves uniform calibration of the exposure surface, so that the final printed image has uniform irradiance values. It should also be noted that the reference light source of this embodiment is the reference highly uniform surface light source of the flat-field correction technique; that is, before S202 the method further includes step S201: performing flat-field correction on the shooting module using the reference light source.
By acquiring the full pixel-level distribution of the irradiance of the exposure surface, converting the low-density discrete distribution, and then applying the corresponding grayscale compensation, this embodiment can effectively improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system. At the same time, non-contact overall measurement is realized through the shooting module, avoiding repeated multi-step operations and saving measurement time. In addition, the correction accuracy of this embodiment can be adjusted according to the actual situation without additional operation steps.
It should be noted that the irradiance values mentioned in the present application can also be expressed by another physical quantity, light intensity; the two can be converted into each other.
In one embodiment, an exposure surface calibration method for an optical system is also provided, including:
acquiring an image-information distribution image generated by a shooting module photographing the exposure surface of the optical system;
segmenting the image-information distribution image, and calculating a mapped image-information value for each segmented area;
selecting a reference mapped image-information value from the mapped image-information values, and calculating compensation parameters corresponding to the other segmented areas according to the reference mapped image-information value; the compensation parameters are used to perform mask compensation on the projected image emitted by the optical system, to obtain a printed image with uniform irradiance values on the exposure surface.
The image information may be grayscale, brightness, or other similar physical quantities; the image-information distribution image is accordingly a grayscale-information distribution image, a brightness-information distribution image, or the like. The compensation parameters may be compensation coefficients, compensation values, etc. The mapped image-information value is obtained by mapping the grayscale distribution within a grid cell and can be acquired by fitting or interpolation. Fitting algorithms include, but are not limited to, the least-squares method, polynomial fitting, and cubic spline fitting.
Specifically, a digital mask can be generated from the compensation parameters, and the digital mask is used to perform mask compensation on the projected image, obtaining a printed image with uniform irradiance values on the exposure surface. In a specific example, before acquiring the image-information distribution image generated by the shooting module photographing the exposure surface of the optical system, the method further includes: performing flat-field correction on the shooting module using a reference light source.
In an embodiment, step S201 may include:
the reference light source projecting, according to a preset grayscale value, an exposure surface with uniform irradiance values;
the shooting module photographing the exposure surface of the reference light source, obtaining a reference light source image;
acquiring, from the reference light source image, the grayscale output value of each pixel unit in the photosensitive chip, and comparing the preset grayscale value of the reference light source with the grayscale output value of each pixel unit, obtaining the grayscale correction coefficient of each pixel unit;
performing flat-field correction on the shooting module according to the grayscale correction coefficients of the pixel units.
In this embodiment, a grayscale value is first set for the reference light source (i.e., the reference highly uniform surface light source) so that it projects an exposure surface with uniform irradiance values. The shooting module then captures the reference light source, so that every pixel unit in the photosensitive chip simultaneously receives light of the same energy, and the flat-field correction of the shooting module is completed from the output data of the photosensitive chip. Specifically in this embodiment, the shooting module performs global image capture of the reference light source, i.e., the uniform surface light source is larger than the image-capture range of the shooting module, so that the image plane collected by the image-capture objective completely covers the photosensitive chip in the shooting module. Specifically, the preset grayscale value may be 2^n − 1; with the grayscale of an 8-bit image, the preset grayscale value is 255.
Because of lens vignetting in the shooting module and differences in sensitivity of the photosensitive chip in the camera, when the shooting module photographs the reference highly uniform surface light source, the brightness detected at the center and at the edges of the photosensitive chip differs, showing different grayscales in the captured image. Flat-field correction compares the grayscale values perceived by the photosensitive chip with the known reference highly uniform surface light source. Taking the grayscale of an 8-bit image as an example, with the grayscale value of the reference light source set to 255, the grayscale compensation value of every pixel of the shooting module is obtained; after correcting the module with these compensation values, photographing the reference highly uniform surface light source yields the true uniformity distribution of the source, and the flat-field correction of the shooting module is complete.
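The per-pixel comparison described above can be sketched in a few lines. This is an illustration only (not from the patent), assuming NumPy; the function names are hypothetical:

```python
import numpy as np

def flat_field_coefficients(captured: np.ndarray,
                            reference_gray: float = 255.0) -> np.ndarray:
    """Per-pixel grayscale correction coefficients: compare each pixel's
    output for the uniform reference source against the known preset
    grayscale value of that source (255 for an 8-bit image)."""
    return reference_gray / captured

def apply_flat_field(raw: np.ndarray, coeff: np.ndarray) -> np.ndarray:
    # Correct a raw capture with the stored coefficients.
    return raw * coeff

# illustrative capture of the uniform source: edges darker due to vignetting
captured = np.array([[255.0, 250.0],
                     [240.0, 255.0]])
coeff = flat_field_coefficients(captured)
corrected = apply_flat_field(captured, coeff)
```

After correction, re-photographing the uniform source should produce a flat grayscale field, which is exactly the completion criterion for the flat-field step.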
Further, in an embodiment, the shooting module photographing the exposure surface of the reference light source to obtain the reference light source image includes:
dividing the image-capture surface of the shooting module into several image-capture sub-regions based on the size of the exposure surface of the reference light source;
moving the reference light source so that it projects onto each image-capture sub-region in turn, obtaining several reference light source sub-images corresponding to the image-capture sub-regions;
stitching the several reference light source sub-images to obtain the reference light source image.
In this preferred embodiment, the image-capture surface of the shooting module is divided according to the exposure size of the reference light source, and the reference light source is then moved to project onto each of the image-capture sub-regions, yielding the corresponding reference light source sub-images.
In this preferred embodiment, a smaller, highly uniform surface light source is used as the reference light source; its length and width are set to 1/p of the length and 1/q of the width of the image-capture surface, where the image-capture surface is conjugate with the imaging surface, i.e., for an image-capture surface divided into p*q regions, there is a one-to-one mapping to a corresponding number of p*q regions on the imaging surface. Both p and q are integers.
Based on the length and width of the reference light source, the reference light source is mapped to the p*q regions of the imaging surface;
the p*q regions are stitched region by region into one uniform imaging surface, and the flat-field correction of the shooting module is completed using this uniform imaging surface.
With reference to FIG. 5 and FIG. 6, where FIG. 6 is obtained from FIG. 5 by conjugation: with the reference light source in region 1, the corresponding imaging region is 1'; moving the reference light source to region 2, the corresponding imaging region is 2'. Moving the same reference light source region by region, the reference light source images the p*q regions; stitched region by region, a uniform imaging surface covering the photosensitive chip is generated on the imaging plane, and the flat-field correction of the shooting module can be completed with this imaging surface.
An optical system is an imaging system capable of producing a clear image that faithfully reproduces the object. A light beam in which all rays, or their extensions, intersect at a single point is called a concentric beam. After an incident concentric beam passes through an ideal optical system, the emergent beam must also be concentric. The intersections of the incident and emergent concentric beams are called the object point and the image point, respectively. An ideal optical system has the following properties: 1. All rays intersecting at the object point, after passing through the optical system, intersect at the image point, and vice versa; such a pair of interchangeable object and image points is called a pair of conjugate points. 2. Each straight line in object space corresponds to a straight line in image space, called a conjugate line; the corresponding surfaces are called conjugate surfaces. 3. The conjugate surface of any plane perpendicular to the optical axis is still perpendicular to the optical axis. 4. For a pair of conjugate planes perpendicular to the optical axis, the lateral magnification is constant.
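The region-by-region stitching of the p*q sub-images described above is a simple block assembly. A minimal sketch (not from the patent), assuming NumPy, with illustrative 2*2 tiles:

```python
import numpy as np

def stitch_subimages(tiles) -> np.ndarray:
    """Stitch p*q reference light source sub-images (captured by moving the
    small uniform source over each sub-region in turn) into one uniform
    imaging surface covering the photosensitive chip."""
    return np.block(tiles)  # nested [[row of tiles], ...] -> one array

# 2*2 example: four tiles captured one region at a time (values illustrative)
tiles = [[np.full((2, 2), 250.0), np.full((2, 2), 251.0)],
         [np.full((2, 2), 249.0), np.full((2, 2), 250.0)]]
surface = stitch_subimages(tiles)
```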
In an embodiment, as shown in FIG. 3, the exposure surface calibration method for an optical system may further include steps S301 to S303.
S301: limit the exposure amount of the shooting module;
S302: adjust the irradiance value of the exposure surface of the reference light source, and acquire the corresponding grayscale value with the shooting module, thereby fitting and generating a relationship curve between grayscale and irradiance values;
S303: based on the relationship curve, read the grayscale of the projected image emitted by the optical system with the shooting module, obtaining the corresponding irradiance value.
In this embodiment, besides calibrating the uniformity of the printing exposure surface of the printer, the irradiance of the optical system can also be monitored to understand its aging state. Specifically, during flat-field correction, the hardware parameters for the projected image are first set; the hardware parameters may include the exposure amount of the 3D printer, the gain of the shooting module, image-processing conditions, etc. The exposure amount of the shooting module is then fixed; the fixed exposure amount keeps the grayscale peak of the projected image below 2^n − 1 (255 for the grayscale of an 8-bit image).
Grayscale calibration of the projected image is performed at the fixed exposure amount, obtaining coordinate points of captured grayscale versus irradiance value. An irradiance measurement device can then be used to measure different irradiance values, thereby adjusting the irradiance value of the projected image, and the corresponding grayscale value is acquired with the shooting module. After multiple adjustments, the irradiance coordinate points can be fitted from the adjustment results (i.e., the different irradiance values and their corresponding grayscale values) to generate the relationship curve between grayscale and irradiance values. Subsequently, when the shooting module reads the grayscale of the projected image at the same exposure amount, the corresponding irradiance value can be determined from the relationship curve.
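The fit-then-convert procedure above can be sketched as follows. This is an illustration only (not from the patent), assuming NumPy and a simple least-squares polynomial fit; the measurement values and units are invented for the example:

```python
import numpy as np

# Measured at a fixed camera exposure: irradiance values set on the
# reference source and the grayscale the shooting module reports for each.
# Values are illustrative only (assumed unit: mW/cm^2).
irradiance = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
gray = np.array([50.0, 100.0, 150.0, 200.0, 250.0])

# Least-squares fit of irradiance as a function of grayscale
# (degree 1 here; the patent also allows polynomial or cubic spline fits).
coeffs = np.polyfit(gray, irradiance, deg=1)

def gray_to_irradiance(g: float) -> float:
    """Convert a grayscale reading at the same exposure amount into the
    corresponding irradiance value using the fitted curve."""
    return float(np.polyval(coeffs, g))
```

With the curve stored, later grayscale readings of the projected image are converted directly, which is how the aging of the light engine can be monitored without an optical power meter.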
Further, the exposure surface calibration method for an optical system may also include:
acquiring irradiation control parameters of the optical system and the corresponding image information;
acquiring a first relationship between the image information of the optical system and irradiation data;
based on the first relationship, the irradiation control parameters, and the corresponding image information, obtaining a second relationship between the irradiation control parameters and the irradiation data of the optical system; the second relationship is used for adjusting the irradiation data during the 3D printing process.
The irradiation control parameter is used to adjust the irradiation data (such as irradiation brightness, optical power, etc.); it may be, for example, a current-related parameter, or input brightness, input voltage, input electrical power, etc. The image information reflects the features of the image and may be expressed in matrix form; it may be, for example, grayscale or brightness, where the grayscale may be the average grayscale value of all pixels in a region, or the median, total, etc. of all pixels in a region. The irradiation data characterize electromagnetic-radiation-related information of the projected image and may be, for example, irradiance, light intensity, illuminance, or optical power.
In one example, the maximum projection area of the reference light source is not smaller than the projection area of the optical system, so as to ensure that each segmented area can find a position with a corresponding mapping relationship.
Taking current as the irradiation control parameter and grayscale as the image information as an example, acquiring the irradiation control parameters of the optical system and the corresponding image information may be done as follows: input each preset current value to the optical system, and acquire the grayscale value of the projected image emitted by the optical system at each preset current value, obtaining the grayscale value corresponding to each preset current value.
Acquiring the first relationship between the image information of the optical system and the irradiation data may include: acquiring a third relationship between the image information of the reference light source and the image information of the optical system, and a fourth relationship between the image information of the reference light source and the irradiation data of the reference light source; and obtaining the first relationship based at least on the third and fourth relationships, where the third relationship covers the cases that the image information of the reference light source is identical to, or deviates from, the image information of the optical system.
Specifically, the third relationship can be obtained as follows: acquire a fifth relationship between the grayscale of the optical system and the irradiation data, and a sixth relationship between the grayscale of the reference light source and the irradiation data; based on the fifth and sixth relationships, obtain the third relationship between the grayscale of the reference light source and the grayscale of the optical system. The fourth relationship may take the form of a mapping table or a fitted curve, which is not specifically limited here. Taking grayscale values as the image information as an example, by adjusting the irradiation data of the exposure surface of the reference light source and acquiring the corresponding grayscale value with the shooting module under each irradiation datum, i.e., using the shooting module to obtain any irradiation datum and its corresponding grayscale value, the relationship between the grayscale values and the irradiation data of the reference light source can be generated. It should be noted that the first relationship can be obtained not only from the third and fourth relationships, but also further from the initial relationship between the irradiation control parameters and the irradiation data of the optical system, and even further from relationships among other parameters of the optical system.
In one embodiment, the image information of the reference light source and that of the optical system can be regarded as identical. Therefore, based on the third relationship between the image information of the reference light source and that of the optical system, and the fourth relationship between the image information of the reference light source and the irradiation data of the reference light source, the image information and irradiation data of the optical system can be obtained.
In another embodiment, there is a deviation between the image information of the reference light source and that of the optical system. The deviation can be obtained as follows: acquire the relationship between the grayscale of the optical system and the irradiation data, and the relationship between the grayscale of the reference light source and the irradiation data; based on these two relationships, a seventh relationship between the grayscale of the reference light source and the grayscale of the optical system is obtained. The first relationship can then be obtained from the seventh relationship together with the third and fourth relationships.
To further explain the above exposure surface calibration method, a further description is given below, taking grayscale as the image information and current as the irradiation control parameter.
1) The image information of the reference light source and that of the optical system can be regarded as identical.
Adjust the irradiance value of the exposure surface of the reference light source, and acquire the corresponding grayscale value with the shooting module;
based on the irradiance values and the corresponding grayscale values, generate an eighth relationship between the grayscale values and the irradiance values of the reference light source;
input each preset current value to the optical system, and acquire the grayscale value of the projected image emitted by the optical system at each preset current value, obtaining the grayscale value corresponding to each preset current value;
process the corresponding grayscale values using the eighth relationship, obtaining the irradiation data corresponding to each preset current value;
process the preset current values and the irradiation data corresponding to them, obtaining a ninth relationship between the current values and the irradiation data of the optical system, where the ninth relationship is used for adjusting the irradiation data during the 3D printing process.
The irradiation data may be one of irradiance, power, light intensity, and other related values.
Specifically, by adjusting the irradiation data of the exposure surface of the reference light source and acquiring the corresponding grayscale value with the shooting module under each irradiation datum, i.e., using the shooting module to obtain any irradiation datum and its corresponding grayscale value, the eighth relationship between grayscale values and irradiation data can be generated. The eighth relationship may take the form of a mapping table or a fitted curve, which is not specifically limited here. Any fitting algorithm in the art may be used to fit the irradiation data and the corresponding grayscale values, such as any one of the least-squares method, a polynomial fitting algorithm, and a cubic spline fitting algorithm. Each preset current value input to the optical system may be determined according to the optical system. In this solution, the current value delivered to the optical system can be controlled by a host computer, and the magnitude of the current affects the irradiation data and grayscale of the projected image. At each preset current value, the optical system projects a projected image with a different grayscale value; by reading the grayscale with the shooting module, the grayscale value of the projected image at each preset current value is obtained. From the eighth relationship between grayscale values and irradiation data, the irradiation data corresponding to each preset current value can then be obtained. The ninth relationship may take the form of a mapping table or a fitted curve, which is not specifically limited here; the way the preset current values and the corresponding irradiation data are processed can be determined by the form of the ninth relationship. In a specific example, the preset current values and the corresponding irradiation data can be fitted with a fitting algorithm to obtain a fitted-curve relationship between them. Once the ninth relationship is obtained, it can be used for 3D printing; at this point the irradiation data of the optical system are calibrated. In a specific example, before adjusting the irradiation data of the exposure surface of the reference light source, the method further includes: limiting the exposure amount of the shooting module.
It should be noted that the irradiation data required for the image of each layer in 3D printing may differ, so during printing the irradiation data need to be changed as required. Moreover, as the optical system is used, the irradiation data of its projected image at the same current change, so the irradiation data of the optical system need to be calibrated. In the conventional art, an optical power meter is often used to collect irradiation data manually, point by point. For large-format printing, irradiation data sampling often takes a long time, which is inefficient and requires a costly optical power meter. By performing light-uniformity calibration and irradiation-data calibration on the optical system, the present application can calibrate the optical system automatically and quickly without an optical power meter, while also saving labor costs.
2) There may be a deviation between the image information of the reference light source and that of the optical system.
Acquire a tenth relationship between the grayscale of the optical system and the irradiation data, and an eleventh relationship between the grayscale of the reference light source and the irradiation data;
based on the tenth and eleventh relationships, obtain a twelfth relationship between the grayscale of the reference light source and the grayscale of the optical system;
input each preset current value to the optical system, and read the grayscale of the projected image emitted by the optical system at each preset current value, obtaining the grayscale value corresponding to each preset current value;
process the corresponding grayscale values using the twelfth relationship, obtaining the compensated grayscale values of the reference light source;
process the compensated grayscale values using the eleventh relationship, obtaining the irradiation data corresponding to each preset current value;
process the preset current values and the corresponding irradiation data, obtaining a thirteenth relationship between the current values and the irradiation data of the optical system, where the thirteenth relationship is used for adjusting the irradiation data during the 3D printing process.
The irradiation data may be one of irradiance, power, and other related values.
Specifically, the tenth and eleventh relationships can be measured in advance and stored in a storage unit, to be retrieved directly when needed, or they can be obtained in the same way as the eighth relationship. Taking the tenth relationship as a fitted-curve relationship as an example, the grayscale values and irradiation data of the optical system can be measured before the optical system leaves the factory, stored in the storage unit, and fitted with a fitting algorithm to obtain the tenth relationship. After the tenth and eleventh relationships are determined, the relationship between the grayscale values of the reference light source and those of the optical system can be obtained. At each preset current value, the optical system projects a projected image with a different grayscale value, and reading the grayscale with the shooting module yields the corresponding grayscale value of the optical system. From the twelfth relationship and the corresponding grayscale values of the optical system, the compensated grayscale values of the reference light source can be obtained, and the irradiation data corresponding to each preset current value are further determined based on the eleventh relationship. From the irradiation data corresponding to the preset current values, the thirteenth relationship can be obtained. It should be noted that the tenth, eleventh, twelfth, and thirteenth relationships may all take the form of mapping tables or fitted curves. Once the thirteenth relationship is obtained, it can be used for 3D printing; at this point the irradiation data of the optical system are calibrated. By acquiring the grayscale relationship between the reference light source and the optical system, the above method resolves the grayscale difference between the reference light source and the optical system under the same irradiation data and can further improve the accuracy of the irradiation-data calibration.
In a preferred embodiment, steps S301 and S302 can be performed during the flat-field correction process.
In an embodiment, step S202 may include:
adjusting the exposure amount of the shooting module so that the captured grayscale distribution image is below the maximum grayscale value, where the maximum grayscale value is 2^n − 1, n being the bit depth of the image; for an 8-bit image the maximum grayscale value is 255.
In this embodiment, the flat-field-corrected shooting module photographs the imaging surface; by adjusting the exposure amount of the shooting module so that the grayscale values of the captured imaging surface as a whole are below the maximum grayscale value (255 if the grayscale image displays levels 0–255), the true uniformity distribution of the imaging surface, i.e., the corresponding pixel-level grayscale distribution image, can be obtained.
In another embodiment, step S202 may include:
acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system, including:
acquiring a first grayscale distribution image and a second grayscale distribution image, where the first grayscale distribution image is obtained by photographing a first image projected by the optical system, and the second grayscale distribution image is obtained by photographing a second image projected by the optical system; a first-grayscale region in the first image corresponds to a second-grayscale region in the second image, and a second-grayscale region in the first image corresponds to a first-grayscale region in the second image;
processing the first grayscale distribution image and the second grayscale distribution image, obtaining the grayscale distribution image.
The first-grayscale region and the second-grayscale region correspond in grayscale level; taking an 8-bit image as an example, the first-grayscale region may be a white region (level 255) and the second-grayscale region a black region (level 0).
Specifically, the white regions in the first image correspond to the black regions in the second image, i.e., wherever a position in the first image is a white region, the corresponding position in the second image is a black region; and the black regions in the first image correspond to the white regions in the second image. The shooting module photographs the first image to obtain the first grayscale distribution image, and photographs the second image to obtain the second grayscale distribution image. Superimposing the first and second grayscale distribution images yields the grayscale distribution image generated by photographing the exposure surface of the optical system.
By projecting twice and obtaining the first and second grayscale distribution images, the above method narrows the grayscale difference between the center and the edges and improves the accuracy of the grayscale values in the grayscale distribution image.
The first-grayscale regions in the first image are arranged at intervals with the second-grayscale regions in the first image; the first-grayscale regions in the second image are arranged at intervals with the second-grayscale regions in the first image; the first-grayscale regions in the first image are circular or square, and the first-grayscale regions in the second image are circular or square, i.e., the first and second images may be checkerboard patterns or uniformly distributed dot charts. Taking dot charts as an example, the first image may include several projection regions, which may be squares of the same size, with the dots placed inside the projection regions and the dot diameter preset. If a white dot exists in any projection region of the first image, the regions adjacent to that projection region contain no white dot; in the second image, that projection region contains no white dot while its adjacent regions do. Superimposing the first and second images yields an image in which every projection region contains a white dot.
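Because the two complementary projections are white exactly where the other is black, the combination step is a plain superposition. A minimal sketch (not from the patent), assuming NumPy and invented sample values:

```python
import numpy as np

def combine_complementary(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Superimpose the first and second grayscale distribution images.
    Each position is lit in exactly one of the two captures, so the sum
    yields one full grayscale distribution image."""
    return first + second

# illustrative 2x2 captures: white dots alternate between the two projections
first = np.array([[210.0,   0.0],
                  [  0.0, 230.0]])   # dots in cells (0,0) and (1,1)
second = np.array([[  0.0, 220.0],
                   [240.0,   0.0]])  # dots in the remaining cells
full = combine_complementary(first, second)
```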
In one embodiment, a filter is also provided on the lens of the shooting module to filter out the influence of ambient light on the shooting module.
In an embodiment, the fitted grayscale value of each segmented area is calculated with a fitting algorithm; the fitting algorithm is the least-squares method, a polynomial fitting algorithm, or a cubic spline fitting algorithm.
In this embodiment, when calculating the fitted grayscale value of each segmented area with a fitting algorithm, the least-squares method, a polynomial fitting algorithm, a cubic spline fitting algorithm, or another fitting algorithm may be used.
When dividing the grayscale distribution image into a grid image containing multiple segmented areas, it can specifically be divided into m rows and n columns, forming an m*n grid distribution.
In an embodiment, as shown in FIG. 4, step S204 may include steps S401 to S403.
S401: label all fitted grayscale values as P11, P12, ..., Pmn in the order of the corresponding segmented areas, obtaining a grayscale array of m*n items
[P11 P12 ... P1n; P21 P22 ... P2n; ...; Pm1 Pm2 ... Pmn];
S402: select the minimum value Pmin from the grayscale array as the minimum fitted grayscale value, and compute the normalized ratios between the minimum fitted grayscale value and the other data in the grayscale array, obtaining a ratio matrix;
S403: take the ratios contained in the ratio matrix as the corresponding grayscale compensation coefficients, and then multiply the required image grayscale value by each ratio in the ratio matrix, obtaining the corresponding digital mask.
In this embodiment, the minimum fitted grayscale value within the grid image is selected as the reference and compared with the other fitted grayscale values to obtain the grayscale compensation coefficients. The grayscale compensation coefficients in the grid image form a digital mask; thereafter, once each projected image has passed through the mask compensation of this digital mask, a printed image with uniform irradiance values on the exposure surface is obtained.
Specifically, let the fitted grayscale value of the first cell in the grid image be P11, and so on, with the fitted grayscale value of the last cell being Pmn, giving the m*n grayscale array above. The minimum value Pmin is selected from the grayscale array, and the normalized ratios Pmin/Pij are computed with the remaining fitted grayscale values, generating the corresponding ratio matrix; the ratios in this matrix are the compensation coefficients. The required image grayscale value is the grayscale value of the image the user needs to project; when the required image grayscale value is set to a, multiplying a by the ratio matrix yields the digital mask.
In some other embodiments of the present application, a calibration measurement method for 3D printing is also provided. In addition to calibrating the exposure surface of the optical system using the exposure surface calibration method according to some embodiments of the present application, the calibration measurement method includes multiple measurement steps. By measuring multiple parameters of the optical system, it can be determined whether each parameter of the optical system meets the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby achieving more accurate and efficient 3D printing.
A detailed description is given below with reference to FIG. 7, which is a schematic flowchart of a calibration measurement method for 3D printing provided by an embodiment of the present application. The calibration measurement method for 3D printing may include: calibrating the exposure surface of the optical system using the exposure surface calibration method 100 according to some embodiments of the present application; and the following calibration measurement steps S701 to S704. The exposure surface calibration method 100 has been described in detail with reference to FIG. 1 to FIG. 6 and, for brevity, is not repeated here; steps S701 to S704 are described below.
S701: measure the static contrast and dynamic contrast of the optical system. Using the irradiance value test method above, extract the irradiance value of each sampling point in the contrast image, and output the values calculated according to the static and dynamic contrast calculation methods.
S702: measure the sharpness of the optical system. The camera module captures the resolution test image displayed by the exposure system; using CTF and MTF image algorithms, the value at the required spatial resolution is calculated and the sharpness of the exposure system is judged. For exposure lenses with a motorized focusing system, focusing-adjustment feedback can also be provided.
S703: detect whether there is dirt on the optical-mechanical equipment. The camera module extracts the grayscale distribution of the exposure surface using the steps above; according to the continuity of the grayscale distribution, if the grayscale of a region changes abruptly and falls below a preset threshold, that region is judged to be dirty.
S704: measure the size of the exposure surface image. The camera module is calibrated at different heights; based on the principle that the magnification is constant at different object distances, the sizes and relative distribution of various images on the exposure surface can be measured.
It should be understood that the order of steps S701 to S704 is merely exemplary; these steps may be performed in a different order to measure the various parameters, and the present application imposes no limitation on this.
Specifically, in step S701, measuring the static contrast and dynamic contrast of the optical system includes the following steps: acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system;
obtaining the static contrast of the optical system from the first irradiance value and the second irradiance value;
acquiring the irradiance value of each region in a checkerboard chart, where the checkerboard chart is projected by the calibrated optical system;
processing the irradiance values of the regions with the ANSI contrast calculation method, obtaining the dynamic contrast of the optical system.
In some embodiments, the static contrast can be measured based on the exposure surface calibration technique of the present application, specifically as follows. First, a light guide film on the exposure surface receives the image of the full-format white image projected or displayed by the optical system, i.e., the projected image, set to the full-scale white image of 2^n − 1 in an n-bit image; for an 8-bit image, the pure white image at level 255 of the 0–255 grayscale can be used. The irradiance value of this pure white image is measured with the irradiance measurement device of the optical system, obtaining the first irradiance value. Then, the light guide film on the exposure surface receives the image of the full-format pure black image projected or displayed by the optical system, i.e., the projected image, set to the level-0 pure black image. The irradiance value of this pure black image is measured with the irradiance measurement device, obtaining the second irradiance value. The contrast measurement device then calculates the static contrast of the optical system from the ratio of the first irradiance value to the second irradiance value.
In some embodiments, the dynamic contrast can be measured based on the exposure surface calibration technique of the present application, specifically as follows. First, a light guide film on the exposure surface receives the checkerboard chart projected or displayed by the optical system; the irradiance measurement device of the optical system measures the irradiance value at each point of the checkerboard chart in turn, obtaining the irradiance value at every point of the checkerboard. The contrast measurement device then calculates the dynamic contrast of the optical system using the ANSI contrast calculation method. In this embodiment, the irradiance values are measured by machine vision.
In some embodiments, the irradiance values are acquired based on the exposure surface calibration technique provided by the present application, specifically as follows. First, grayscale calibration of the projected image is performed at a fixed exposure amount, obtaining coordinate points of captured grayscale versus irradiance value. After the output irradiance of the irradiation equipment is changed, the irradiance measurement device measures the different irradiance values and the image-capture module acquires the grayscale value of the image. After multiple measurements at different irradiance values, the coordinate points are fitted, and a grayscale/irradiance curve is generated from the fitted points. Thus, when the image-capture module reads the grayscale of the projected image at the same exposure amount, the grayscale value can be converted into the corresponding irradiance value.
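The two contrast figures reduce to simple ratios over the measured irradiance values. A minimal sketch (not from the patent; function names and sample values are illustrative, and the ANSI method is shown here as white-square mean over black-square mean of the checkerboard):

```python
def static_contrast(white_irradiance: float, black_irradiance: float) -> float:
    # Static contrast: ratio of full-white to full-black irradiance.
    return white_irradiance / black_irradiance

def ansi_contrast(values, is_white) -> float:
    """ANSI-style dynamic contrast: mean irradiance over the white squares
    of the checkerboard divided by the mean over the black squares."""
    whites = [v for v, w in zip(values, is_white) if w]
    blacks = [v for v, w in zip(values, is_white) if not w]
    return (sum(whites) / len(whites)) / (sum(blacks) / len(blacks))
```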
In step S702, measuring the sharpness of the projected image specifically includes the following steps: controlling the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction; acquiring the actual grayscale distribution curve of the projected image, and determining the CTF value or MTF value corresponding to each preset position from the actual grayscale distribution curve and a preset grayscale distribution curve; and determining the sharpness of the optical system from the CTF or MTF values corresponding to the preset positions.
The preset positions may include one central position and four corner positions of the projection format. The CTF (Contrast Transfer Function) value can be calculated by means commonly used in the art, as can the MTF (Modulation Transfer Function) value.
Specifically, determining the sharpness of the optical system from the CTF or MTF values corresponding to the preset positions includes: if any CTF value is smaller than a first set value, or any MTF value is smaller than a second set value, determining that the sharpness of the optical system is unqualified. Sharpness can be judged from the CTF value: if the calculated CTF value is smaller than the set value, the point is judged to be unsharp. Sharpness can also be judged from the MTF value: if the calculated MTF value is smaller than the set value, the point is judged to be unsharp. If any unsharp point exists, the sharpness of the lens of the optical-mechanical equipment is considered poor. Further, when there are multiple lines in the sagittal direction and multiple lines in the meridional direction, the spacing between the sagittal lines may be N pixels, used to judge whether the meridional direction is blurred, and the spacing between the meridional lines may be N pixels, used to judge whether the sagittal direction is blurred. The line width may be N pixels, where N is a positive integer.
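The pass/fail check above can be sketched directly. A minimal illustration (not from the patent), using the common (Imax − Imin)/(Imax + Imin) definition of CTF for a line pattern; names and thresholds are hypothetical:

```python
def ctf(i_max: float, i_min: float) -> float:
    # CTF at one preset position, from the peak and trough of the
    # actual grayscale distribution curve across the line pattern.
    return (i_max - i_min) / (i_max + i_min)

def sharpness_ok(ctf_values, threshold: float) -> bool:
    # The system's sharpness is unqualified if ANY CTF value falls
    # below the set value.
    return all(v >= threshold for v in ctf_values)
```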
In step S703, detecting whether there is dirt on the optical-mechanical equipment specifically includes: if the value at any point on the actual grayscale distribution curve is lower than a lower limit, and/or the actual grayscale distribution curve contains an abrupt change, determining that there is dirt on the optical system.
In some embodiments, the optical system of the present application performs dirt detection through the following steps. A lower limit of the grayscale distribution is set; the grayscale distribution of the optical-mechanical equipment generally varies continuously across a format. If the actual grayscale is below the lower limit and/or the actual grayscale distribution curve of the optical-mechanical equipment contains an abrupt change, dirt is considered to be present.
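The two dirt criteria (below the lower limit, or an abrupt jump in an otherwise continuous curve) can be sketched as follows. An illustration only (not from the patent), assuming NumPy; the limit and jump threshold are invented:

```python
import numpy as np

def detect_dirt(gray_curve: np.ndarray,
                lower_limit: float,
                max_jump: float) -> bool:
    """Flag dirt when any sample of the actual grayscale distribution curve
    falls below the lower limit, or when neighbouring samples differ by
    more than max_jump (an abrupt change in a normally continuous curve)."""
    below = bool((gray_curve < lower_limit).any())
    jump = bool((np.abs(np.diff(gray_curve)) > max_jump).any())
    return below or jump

clean = np.array([200.0, 201.0, 202.0, 201.0])
dirty = np.array([200.0, 201.0, 120.0, 201.0])  # sudden dip: shadow of dirt
```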
In step S704, measuring the size of the photographed object specifically includes: calibrating the size corresponding to each pixel on the shooting surface of the camera module, and determining the size of the photographed object from the number of pixels occupied by the object's edge length; and/or
acquiring the size of the shooting surface in the camera module, and determining the size of the photographed object from the ratio of the object's edge length to the shooting surface.
Specifically, the size measurement device of the optical system of the present application measures sizes as follows. In some embodiments, the size corresponding to each pixel on the camera's shooting surface is calibrated in advance, and the size of the object is then determined with the size measurement device of the optical system from the number of pixels occupied by the edge length of the object photographed in the camera. In some other embodiments, the size of the camera's shooting surface is acquired, and the size of the object is determined with the size measurement device of the optical system from the ratio of the object's edge length to the shooting surface.
In some embodiments, the optical system of the present application requires different camera modules for different detection items. The optical system of the present application can select the corresponding camera module for each detection item and execute the corresponding detection procedure, thereby realizing automated detection. That is, the current detection item is determined, the corresponding camera module is selected according to the current detection item, and the above steps S701 to S704 are executed.
FIG. 8 is a schematic block diagram of an exposure surface calibration apparatus 800 for an optical system provided by this embodiment. The apparatus 800 includes:
an image acquisition unit 802 configured to acquire a grayscale distribution image generated by a shooting module photographing the exposure surface of the optical system;
a fitting unit 803 configured to divide the grayscale distribution image into a grid image containing multiple segmented areas and calculate a fitted grayscale value for each segmented area;
a selection unit 804 configured to select the minimum fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value and calculate grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
a mask compensation unit 805 configured to perform mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
In an embodiment, the apparatus 800 further includes a flat-field correction unit 801 configured to perform flat-field correction on the shooting module using a reference light source.
In an embodiment, the flat-field correction unit 801 may include:
a projection unit configured for the reference light source to project, according to a preset grayscale value, an exposure surface with uniform irradiance values;
an exposure surface shooting unit configured for the shooting module to photograph the exposure surface of the reference light source, obtaining a reference light source image;
a data acquisition unit configured to acquire, from the reference light source image, the grayscale output value of each pixel unit in the photosensitive chip, and compare the preset grayscale value of the reference light source with the grayscale output value of each pixel unit, obtaining the grayscale correction coefficient of each pixel unit;
a coefficient correction unit configured to perform flat-field correction on the shooting module according to the grayscale correction coefficients of the pixel units.
In an embodiment, the exposure surface shooting unit may include:
a capture-surface division unit configured to divide the image-capture surface of the shooting module into several image-capture sub-regions based on the size of the exposure surface of the reference light source;
a moving projection unit configured to move the reference light source so that it projects onto each image-capture sub-region in turn, obtaining several reference light source sub-images corresponding to the image-capture sub-regions;
an image stitching unit configured to stitch the several reference light source sub-images, obtaining the reference light source image.
In an embodiment, as shown in FIG. 9, the exposure surface calibration apparatus 800 for an optical system further includes:
a limiting unit 901 configured to limit the exposure amount of the shooting module;
a curve generation unit 902 configured to adjust the irradiance value of the exposure surface of the reference light source and acquire the corresponding grayscale value with the shooting module, thereby fitting and generating a relationship curve between grayscale and irradiance values;
a grayscale reading unit 903 configured to read, based on the relationship curve, the grayscale of the projected image emitted by the optical system with the shooting module, obtaining the corresponding irradiance value.
In an embodiment, the image acquisition unit 802 may include:
an adjustment unit configured to adjust the exposure amount of the shooting module so that the captured grayscale distribution image is below the maximum grayscale value.
In an embodiment, the fitted grayscale value of each segmented area is calculated with a fitting algorithm; the fitting algorithm is the least-squares method, a polynomial fitting algorithm, or a cubic spline fitting algorithm.
In an embodiment, as shown in FIG. 10, the selection unit 804 may include:
an area labeling unit 1001 configured to label all fitted grayscale values as P11, P12, ..., Pmn in the order of the corresponding segmented areas, obtaining a grayscale array of m*n items
[P11 P12 ... P1n; P21 P22 ... P2n; ...; Pm1 Pm2 ... Pmn];
a calculation unit 1002 configured to select the minimum value Pmin from the grayscale array as the minimum fitted grayscale value and compute the normalized ratios between the minimum fitted grayscale value and the other data in the grayscale array, obtaining a ratio matrix;
a multiplication unit 1003 configured to take the ratios contained in the ratio matrix as the corresponding grayscale compensation coefficients, and then multiply the preset image grayscale value by each ratio in the ratio matrix, obtaining the corresponding digital mask.
As before, the optical system of the present application is an imaging system capable of producing a clear image that faithfully reproduces the object. In some embodiments, the optical system of the present application may also include one or more additional devices to perform multiple measurement steps; by measuring multiple parameters of the optical system, it can be determined whether each parameter of the optical system meets the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby achieving more accurate and efficient 3D printing.
A detailed description is given below with reference to FIG. 11, which is a schematic block diagram of an optical system provided by an embodiment of the present application. In addition to the exposure surface calibration apparatus 800 described with reference to FIG. 8, the optical system 1100 includes:
a contrast measurement device 1101 configured to measure the static contrast and dynamic contrast of the optical system;
a sharpness measurement device 1102 configured to measure the sharpness of the projected image;
a dirt measurement device 1103 configured to detect whether there is dirt on the optical-mechanical equipment;
a size measurement device 1104 configured to measure the size of the photographed object.
The optical system of the present application can measure its static contrast and dynamic contrast through the contrast measurement device 1101.
The contrast measurement device 1101 may be configured to acquire a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; to obtain the static contrast of the optical system from the first and second irradiance values; to acquire the irradiance value of each region in a checkerboard chart, where the checkerboard chart is projected by the calibrated optical system; and to process the irradiance values of the regions with the ANSI contrast calculation method, obtaining the dynamic contrast of the optical system.
In some embodiments, the optical system of the present application further includes a sharpness measurement device 1102, which is used to measure the sharpness of the optical system.
The sharpness measurement device 1102 may be configured to control the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction; to acquire the actual grayscale distribution curve of the projected image and determine the CTF value corresponding to each preset position from the actual and preset grayscale distribution curves; and to determine the sharpness of the optical system from the CTF values corresponding to the preset positions. It is also configured to determine that the sharpness of the optical system is unqualified if any CTF value is smaller than a set value.
In some embodiments, the optical system of the present application further includes a dirt measurement device 1103, which can detect whether there is dirt on the optical-mechanical equipment.
The dirt measurement device 1103 is configured to determine that there is dirt on the optical system when the value at any point on the actual grayscale distribution curve is lower than a lower limit, and/or the actual grayscale distribution curve contains an abrupt change.
In some embodiments, the optical system of the present application further includes a size measurement device 1104, which is used to measure the size of objects, enabling more accurate 3D printing.
The size measurement device 1104 is configured to calibrate the size corresponding to each pixel on the shooting surface of the camera module and determine the size of the photographed object from the number of pixels occupied by the object's edge length; and/or to acquire the size of the shooting surface in the camera module and determine the size of the photographed object from the ratio of the object's edge length to the shooting surface.
In some embodiments, the optical system of the present application performs different detection items through the contrast measurement device 1101, the sharpness measurement device 1102, the dirt measurement device 1103, and/or the size measurement device 1104. Different detection items require different camera modules. The optical system of the present application can select the corresponding camera module for each detection item and execute the corresponding detection procedure, thereby realizing automated detection.
Since the embodiments of the apparatus part correspond to those of the method part, for the embodiments of the apparatus part please refer to the description of the method part, which is not repeated here.
The embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the steps provided in the above embodiments can be implemented. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application also provides a computer device, which may include a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory, the steps provided in the above embodiments can be implemented. Of course, the computer device may also include components such as network interfaces and a power supply.
The optical system of the present application is preferably the optical system of a 3D printer, including DLP (Digital Light Processing) optical-mechanical equipment, optical-mechanical equipment comprising an LCD (Liquid Crystal Display), optical-mechanical equipment comprising LCOS (Liquid Crystal on Silicon), or optical-mechanical equipment comprising OLED (Organic Light-Emitting Diode), Micro-LED, Mini-LED, liquid crystal projection, and the like.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method part. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present application without departing from the principles of the present application, and these improvements and modifications also fall within the protection scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Industrial Applicability
The present application provides an exposure surface calibration method and apparatus for an optical system, a computer device, and a storage medium. The method includes: performing flat-field correction on a shooting module using a reference light source; acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented areas and calculating a fitted grayscale value for each segmented area; selecting the minimum fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value and calculating grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask; and performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface. By acquiring the pixel-level distribution of the exposure surface and applying the corresponding grayscale compensation, the present application can improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system. The present application also discloses a calibration measurement method for 3D printing; by measuring multiple parameters of the optical system, the calibration measurement method for 3D printing can determine whether each parameter of the optical system meets the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby achieving more accurate and efficient 3D printing.
In addition, it will be appreciated that the exposure surface calibration method, calibration measurement method, apparatus, computer device, and storage medium of the optical system of the present application are reproducible and can be used in a variety of industrial applications. For example, the exposure surface calibration method, calibration measurement method, apparatus, computer device, and storage medium of the optical system of the present application can be used in the technical field of optical systems.

Claims (27)

  1. An exposure surface calibration method for an optical system, comprising:
    performing flat-field correction on a shooting module using a reference light source;
    acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system;
    dividing the grayscale distribution image into a grid image containing multiple segmented areas, and calculating a fitted grayscale value for each segmented area;
    selecting a predetermined fitted grayscale value among all calculated fitted grayscale values as a reference grayscale value, and calculating grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate a digital mask;
    performing mask compensation on the projected image emitted by the optical system using the digital mask, to obtain a printed image with uniform irradiance values on the exposure surface.
  2. The exposure surface calibration method for an optical system according to claim 1, wherein selecting a predetermined fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value comprises: selecting the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value.
  3. The exposure surface calibration method for an optical system according to claim 2, wherein
    acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system comprises:
    acquiring a first grayscale distribution image and a second grayscale distribution image, wherein the first grayscale distribution image is obtained by photographing a first image projected by the optical system; the second grayscale distribution image is obtained by photographing a second image projected by the optical system; a first-grayscale region in the first image corresponds to a second-grayscale region in the second image; and a second-grayscale region in the first image corresponds to a first-grayscale region in the second image;
    processing the first grayscale distribution image and the second grayscale distribution image to obtain the grayscale distribution image.
  4. The exposure surface calibration method for an optical system according to claim 3, wherein the first-grayscale regions in the first image are arranged at intervals with the second-grayscale regions in the first image; the first-grayscale regions in the second image are arranged at intervals with the second-grayscale regions in the first image; the first-grayscale regions in the first image are circular or square; and the first-grayscale regions in the second image are circular or square.
  5. The exposure surface calibration method for an optical system according to claim 4, wherein the first-grayscale region is a white region and the second-grayscale region is a black region.
  6. The exposure surface calibration method for an optical system according to claim 2, wherein a filter is provided on the lens of the shooting module.
  7. The exposure surface calibration method for an optical system according to any one of claims 1 to 6, wherein performing flat-field correction on the shooting module using the reference light source comprises:
    the reference light source projecting an exposure surface with uniform irradiance values;
    the shooting module photographing the exposure surface of the reference light source, obtaining a reference light source image;
    acquiring, from the reference light source image, the grayscale output value of each pixel unit in the photosensitive chip, and comparing the preset grayscale value of the reference light source with the grayscale output value of each pixel unit, obtaining the grayscale correction coefficient of each pixel unit;
    performing flat-field correction on the shooting module according to the grayscale correction coefficients of the pixel units.
  8. The exposure surface calibration method for an optical system according to claim 7, wherein the shooting module photographing the exposure surface of the reference light source to obtain the reference light source image comprises:
    dividing the image-capture surface of the shooting module into several image-capture sub-regions based on the size of the exposure surface of the reference light source;
    moving the reference light source so that it projects onto each image-capture sub-region in turn, obtaining several reference light source sub-images corresponding to the image-capture sub-regions;
    stitching the several reference light source sub-images to obtain the reference light source image.
  9. The exposure surface calibration method for an optical system according to any one of claims 1 to 8, further comprising:
    limiting the exposure amount of the shooting module;
    adjusting the irradiance value of the exposure surface of the reference light source, and acquiring the corresponding grayscale value with the shooting module, thereby fitting and generating a relationship curve between grayscale and irradiance values;
    based on the relationship curve, reading the grayscale of the projected image emitted by the optical system with the shooting module, obtaining the corresponding irradiance value.
  10. The exposure surface calibration method for an optical system according to any one of claims 1 to 8, further comprising:
    acquiring irradiation control parameters of the optical system and the corresponding image information;
    acquiring a first relationship between the image information of the optical system and irradiation data;
    based on the first relationship, the irradiation control parameters, and the corresponding image information, obtaining a second relationship between the irradiation control parameters and the irradiation data of the optical system; the second relationship is used for adjusting the irradiation data during the 3D printing process.
  11. The exposure surface calibration method for an optical system according to claim 10, wherein acquiring the first relationship between the image information of the optical system and the irradiation data comprises:
    acquiring a third relationship between the image information of the reference light source and the image information of the optical system, and a fourth relationship between the image information of the reference light source and the irradiation data of the reference light source;
    obtaining the first relationship based at least on the third relationship and the fourth relationship;
    wherein the third relationship covers the image information of the reference light source being identical to, or deviating from, the image information of the optical system.
  12. The exposure surface calibration method for an optical system according to any one of claims 1 to 11, wherein acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system comprises:
    adjusting the exposure amount of the shooting module so that the captured grayscale distribution image is below the maximum grayscale value.
  13. The exposure surface calibration method for an optical system according to any one of claims 1 to 10, wherein the fitted grayscale value of each segmented area is calculated with a fitting algorithm, the fitting algorithm being the least-squares method, a polynomial fitting algorithm, or a cubic spline fitting algorithm.
  14. The exposure surface calibration method for an optical system according to claim 2, wherein selecting the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value and calculating the grayscale compensation coefficients corresponding to the other segmented areas according to the reference grayscale value, so as to generate the digital mask, comprises:
    labeling all fitted grayscale values as P11, P12, ..., Pmn in the order of the corresponding segmented areas, obtaining a grayscale array of m*n items [P11 P12 ... P1n; P21 P22 ... P2n; ...; Pm1 Pm2 ... Pmn];
    selecting the minimum value Pmin from the grayscale array as the minimum fitted grayscale value, and computing the normalized ratios between the minimum fitted grayscale value and the other data in the grayscale array, obtaining a ratio matrix;
    taking the ratios contained in the ratio matrix as the corresponding grayscale compensation coefficients, and then multiplying the required image grayscale value by each ratio in the ratio matrix, obtaining the corresponding digital mask.
  15. A calibration measurement method for 3D printing, comprising: calibrating an optical system using the exposure surface calibration method for an optical system according to any one of claims 1 to 14;
    acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, wherein both the white image and the black image are projected by the calibrated optical system;
    obtaining the static contrast of the optical system from the first irradiance value and the second irradiance value;
    acquiring the irradiance value of each region in a checkerboard chart, wherein the checkerboard chart is projected by the calibrated optical system;
    processing the irradiance values of the regions with the ANSI contrast calculation method, obtaining the dynamic contrast of the optical system.
  16. The calibration measurement method for 3D printing according to claim 15, further comprising:
    controlling the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction;
    acquiring the actual grayscale distribution curve of the projected image, and determining the CTF value or MTF value corresponding to each preset position from the actual grayscale distribution curve and a preset grayscale distribution curve;
    determining the sharpness of the optical system from the CTF or MTF values corresponding to the preset positions.
  17. The calibration measurement method for 3D printing according to claim 16, wherein determining the sharpness of the optical system from the CTF or MTF values corresponding to the preset positions comprises:
    if any CTF value is smaller than a first set value, or any MTF value is smaller than a second set value, determining that the sharpness of the optical system is unqualified.
  18. The calibration measurement method for 3D printing according to claim 15 or 16, further comprising:
    if the value at any point on the actual grayscale distribution curve is lower than a lower limit, and/or the actual grayscale distribution curve contains an abrupt change, determining that there is dirt on the optical system.
  19. The calibration measurement method for 3D printing according to any one of claims 15 to 18, further comprising:
    calibrating the size corresponding to each pixel on the shooting surface of the camera module, and determining the size of the photographed object from the number of pixels occupied by the object's edge length; and/or
    acquiring the size of the shooting surface in the camera module, and determining the size of the photographed object from the ratio of the object's edge length to the shooting surface.
  20. The calibration measurement method for 3D printing according to any one of claims 15 to 19, further comprising:
    determining the current detection item, and selecting the corresponding camera module according to the current detection item.
  21. 一种光学系统的曝光面校准方法,其特征在于,包括:
    获取由拍摄模组对光学系统的曝光面拍摄生成的图像信息分布图像;
    将所述图像信息分布图像进行分割处理,并计算每一分割区域的映射图像信息值;
    从各所述映射图像信息值中选取基准映射图像信息值,并根据所述基准映射图像信息值计算其他分割区域对应的补偿参数;所述补偿参数用于对光学系统发出的投光图像进行掩膜补偿,并得到曝光面具备均匀辐照度值的打印图像。
  22. An exposure surface calibration apparatus for an optical system, comprising:
    an image acquisition unit configured to acquire a grayscale distribution image generated by a photographing module photographing the exposure surface of an optical system;
    a fitting unit configured to divide the grayscale distribution image into a grid image containing a plurality of divided regions, and calculate the fitted grayscale value of each divided region;
    a selection unit configured to select a predetermined fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value, and calculate the grayscale compensation coefficients corresponding to the other divided regions according to the reference grayscale value, so as to generate a digital mask;
    a mask compensation unit configured to apply mask compensation to the projection image emitted by the optical system using the digital mask, obtaining a printing image with a uniform irradiance value over the exposure surface.
  23. The exposure surface calibration apparatus for an optical system according to claim 22, wherein the selection unit is further configured to select the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value.
  24. The exposure surface calibration apparatus for an optical system according to claim 22 or 23, further comprising:
    a flat-field correction unit configured to perform flat-field correction on the photographing module using a reference light source.
  25. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the exposure surface calibration method for an optical system according to any one of claims 1 to 14 and 21, and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
  26. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the exposure surface calibration method for an optical system according to any one of claims 1 to 14 and 21, and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
  27. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 14 and 21, and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
PCT/CN2022/107529 2021-07-22 2022-07-22 Exposure surface calibration method for optical system, calibration measurement method, apparatus, computer device, and storage medium WO2023001306A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22845469.0A EP4343682A1 (en) 2021-07-22 2022-07-22 Exposure surface calibration method and apparatus for optical system, calibration measurement method and apparatus, computer device, and storage medium
AU2022314858A AU2022314858A1 (en) 2021-07-22 2022-07-22 Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium
US18/393,477 US20240131793A1 (en) 2021-07-22 2023-12-21 Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110832043.6A CN113469918B (zh) 2021-07-22 2021-07-22 Exposure surface calibration method and apparatus for optical system, computer device, and storage medium
CN202110832043.6 2021-07-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/393,477 Continuation-In-Part US20240131793A1 (en) 2021-07-22 2023-12-21 Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium

Publications (1)

Publication Number Publication Date
WO2023001306A1 true WO2023001306A1 (zh) 2023-01-26

Family

ID=77881983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107529 WO2023001306A1 (zh) 2021-07-22 2022-07-22 Exposure surface calibration method for optical system, calibration measurement method, apparatus, computer device, and storage medium

Country Status (5)

Country Link
US (1) US20240131793A1 (zh)
EP (1) EP4343682A1 (zh)
CN (1) CN113469918B (zh)
AU (1) AU2022314858A1 (zh)
WO (1) WO2023001306A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469918B (zh) * 2021-07-22 2024-02-02 广州黑格智造信息科技有限公司 Exposure surface calibration method and apparatus for optical system, computer device, and storage medium
CN114281274A (zh) * 2021-11-30 2022-04-05 深圳市纵维立方科技有限公司 Method for adjusting light brightness uniformity, printing method, printing system, and device
CN114559653B (zh) * 2022-01-07 2024-01-19 宁波智造数字科技有限公司 Stereolithography 3D printing uniformity adjustment method using a cube matrix

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090244329A1 (en) * 2006-08-11 2009-10-01 Nikon Corporation Digital camera and image processing computer program product
CN209176181U (zh) * 2018-12-13 2019-07-30 苏州博理新材料科技有限公司 Illumination correction device for the projector of a stereolithography 3D printer
CN113034382A (zh) * 2021-02-23 2021-06-25 深圳市创想三维科技有限公司 Brightness uniformity adjustment method and apparatus, computer device, and readable storage medium
CN113103587A (zh) * 2021-04-16 2021-07-13 上海联泰科技股份有限公司 3D printing control method, control system, and 3D printing device
CN113469918A (zh) * 2021-07-22 2021-10-01 广州黑格智造信息科技有限公司 Exposure surface calibration method and apparatus for optical system, computer device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105137720A (zh) * 2015-09-18 2015-12-09 中国科学院光电技术研究所 Maskless lithography machine for fabricating multi-step gratings of different depths based on a digital micromirror array
CN106127842B (zh) * 2016-06-15 2018-11-02 北京工业大学 Surface-exposure 3D printing method and system combining light-source distribution with reflection characteristics
CN106228598B (zh) * 2016-07-25 2018-11-13 北京工业大学 Model-adaptive illumination homogenization method for surface-exposure 3D printing
CN112848301B (zh) * 2021-01-26 2024-02-23 深圳市创必得科技有限公司 Uniform-light optimization compensation method and apparatus for LCD stereolithography 3D printing
CN112959662A (zh) * 2021-01-26 2021-06-15 深圳市创必得科技有限公司 Uniform-light optimization compensation apparatus and method for LCD stereolithography 3D printing


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777999A (zh) * 2023-06-28 2023-09-19 深圳市度申科技有限公司 Multi-adaptive advanced flat-field correction method for area-array cameras
CN117132589A (zh) * 2023-10-23 2023-11-28 深圳明锐理想科技有限公司 Fringe-pattern correction method, optical inspection device, and storage medium
CN117132589B (zh) * 2023-10-23 2024-04-16 深圳明锐理想科技股份有限公司 Fringe-pattern correction method, optical inspection device, and storage medium
CN117252141A (zh) * 2023-11-13 2023-12-19 西安芯瑞微电子信息技术有限公司 Circuit thermal simulation method and apparatus using a fluid-dynamics solver, and storage medium
CN117252141B (zh) * 2023-11-13 2024-01-30 西安芯瑞微电子信息技术有限公司 Circuit thermal simulation method and apparatus using a fluid-dynamics solver, and storage medium
CN117793539A (zh) * 2024-02-26 2024-03-29 浙江双元科技股份有限公司 Variable-period-based image acquisition method and optical sensing apparatus
CN117793539B (zh) * 2024-02-26 2024-05-10 浙江双元科技股份有限公司 Variable-period-based image acquisition method and optical sensing apparatus

Also Published As

Publication number Publication date
US20240131793A1 (en) 2024-04-25
CN113469918A (zh) 2021-10-01
AU2022314858A1 (en) 2024-01-18
CN113469918B (zh) 2024-02-02
EP4343682A1 (en) 2024-03-27

Similar Documents

Publication Publication Date Title
WO2023001306A1 (zh) Exposure surface calibration method for optical system, calibration measurement method, apparatus, computer device, and storage medium
CN102221409B (zh) Preparation method of a near-infrared calibration template
CN107256689B (zh) Uniformity restoration method for LED display screens after brightness correction
JP2008113416A (ja) System and method for automatic calibration and correction of display shape and color
US11060848B2 (en) Measuring device, system, method, and program
JP5412757B2 (ja) Optical-system distortion correction method and optical-system distortion correction apparatus
CN110533618B (zh) Lens distortion correction method and photographing apparatus
CN108489423B (zh) Method and system for measuring the horizontal tilt angle of a product surface
CN103377474A (zh) Method for determining lens shading correction coefficients, and lens shading correction method and apparatus
CN115265767A (zh) Calibration and correction method and apparatus for an illumination-field non-uniformity detection system
CN110108230A (zh) Method for evaluating the defocus degree of binary grating projection based on image differencing and LM iteration
CN112929623B (zh) Lens shading repair method and apparatus applied to a full screen during correction
CN113257181B (zh) LED screen correction image acquisition method, correction method, acquisition apparatus, and correction system
CN108010071B (zh) Brightness distribution measurement system and method using 3D depth measurement
CN113870355A (zh) Camera flat-field calibration method and apparatus, and flat-field calibration system
CN101729739A (zh) Image deskew processing method
CN110300291B (zh) Apparatus and method for determining color values, digital camera, application, and computer device
CN109813533B (zh) Method and apparatus for batch testing of DOE diffraction efficiency and uniformity
Shafer Automation and calibration for robot vision systems
CN112381896A (zh) Brightness correction method and system for microscopic images, and computer device
CN114234846B (zh) Fast nonlinear compensation method based on dual-response-curve fitting
CN114071099B (zh) Smear measurement method and apparatus, electronic device, and readable storage medium
Bedrich et al. Electroluminescence imaging of PV devices: Uncertainty due to optical and perspective distortion
CN117073578B (zh) Active-projection nonlinear gamma correction method for fringe projection profilometry
CN111754587B (zh) Fast calibration method for zoom lenses based on images captured at a single focal length

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22845469; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022845469; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2022314858; Country of ref document: AU)
WWE Wipo information: entry into national phase (Ref document number: AU2022314858; Country of ref document: AU)
ENP Entry into the national phase (Ref document number: 2023580988; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022314858; Country of ref document: AU; Date of ref document: 20220722; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022845469; Country of ref document: EP; Effective date: 20231222)
NENP Non-entry into the national phase (Ref country code: DE)