WO2023001306A1 - Exposure surface calibration method for an optical system, calibration measurement method, apparatus, computer device, and storage medium - Google Patents
- Publication number: WO2023001306A1 (application PCT/CN2022/107529)
- Authority: WIPO (PCT)
- Prior art keywords: optical system, image, value, grayscale, exposure surface
Classifications
- B29C64/129 — Processes of additive manufacturing using layers of liquid which are selectively solidified, characterised by the energy source, e.g. global irradiation combined with a mask
- B29C64/264 — Arrangements for irradiation
- B29C64/286 — Optical filters, e.g. masks
- B29C64/386 — Data acquisition or data processing for additive manufacturing
- B29C64/393 — Data acquisition or data processing for controlling or regulating additive manufacturing processes
- B33Y10/00 — Processes of additive manufacturing
- B33Y30/00 — Apparatus for additive manufacturing; details thereof or accessories therefor
- B33Y50/00 — Data acquisition or data processing for additive manufacturing
- B33Y50/02 — Data acquisition or data processing for controlling or regulating additive manufacturing processes
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/80 — Geometric correction
- G06T5/90 — Dynamic range modification of images or parts thereof
- G06T7/001 — Industrial image inspection using an image reference approach
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/33 — Image registration using feature-based methods
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/10052 — Images from lightfield camera
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30144 — Printing quality
- H04N1/00 — Scanning, transmission or reproduction of documents or the like
Definitions
- The present application relates to the technical field of optical systems, and in particular to an exposure surface calibration method, a calibration measurement method, an apparatus, computer equipment, and a storage medium for an optical system.
- In the related art, 3D printing realizes the surface-exposure printing process in the field of light curing, usually using DLP or LCD technology.
- A DLP photo-curing printer uses a high-resolution DLP device and an ultraviolet light source to project the cross-section of the three-dimensional model onto the workbench, so that the liquid photopolymer is photo-cured layer by layer.
- LCD stereolithography printers are similar, except that an LCD panel is used instead of a DLP projection system to display the cross-section directly on the workbench.
- Both DLP and LCD are surface-exposure printing technologies, and 3D printing requires that the irradiance be consistent across the entire exposure surface. If the irradiance difference within the exposure surface is too large, vertical lines appear on the surface of the printed product; in severe cases, the print detaches from the build plate, resulting in printing failure.
- The commonly used light source calibration technique divides the entire printing area into several measurement points, measures the irradiance at each point with an optical power meter, and obtains the irradiance distribution data of those points. The gray-level compensation value of each point is then calculated in reverse from the distribution data, and the compensated gray level is projected or displayed in the region around each point, to achieve uniformity calibration of the entire exposure surface.
- However, the point irradiance measured after dividing the format describes only that point; the values are discrete. These discrete point values are used to stand in for the regional distribution of the corresponding sub-planes, i.e., each sub-plane is assumed uniform, and the distribution between discrete points is merely interpolated between two points, which lowers the accuracy of the uniformity correction.
- In addition, with long-term use of the printing equipment, wear and replacement of components in the optical system change the gray compensation values obtained in the previous calibration, so the uniformity of the printing area must be corrected repeatedly, which consumes considerable manpower and time.
- the present application provides an exposure surface calibration method, device, computer equipment and storage medium of an optical system, aiming at improving the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
- The present application provides a method for calibrating the exposure surface of an optical system. The method may include: acquiring the grayscale distribution image generated by the photographing module on the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented regions and calculating the fitted grayscale value of each segmented region; selecting the minimum fitted grayscale value as the reference grayscale value and calculating the grayscale compensation coefficients of the other segmented regions to generate a digital mask; and using the digital mask to perform mask compensation on the projected light image emitted by the optical system, obtaining a printed image with a uniform irradiance value on the exposure surface.
- the present application provides a calibration measurement method for 3D printing.
- The calibration measurement method may include: calibrating the exposure surface of the optical system using the exposure surface calibration method described above; acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; obtaining the static contrast of the optical system from the first irradiance value and the second irradiance value; acquiring the irradiance value of each area in a checkerboard diagram, where the checkerboard diagram is projected by the calibrated optical system; and processing the irradiance value of each area with the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
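As a rough sketch of the contrast measurements described above — static contrast from the white/black irradiance pair, and dynamic contrast from a checkerboard via the ANSI method — the following is illustrative only; the function names and the 4×4 checkerboard layout with an alternating white-area mask are assumptions, not the patent's implementation:

```python
import numpy as np

def static_contrast(white_irradiance, black_irradiance):
    """Static (full on/off) contrast: white-image irradiance over
    black-image irradiance, both from the calibrated optical system."""
    return white_irradiance / black_irradiance

def ansi_contrast(checkerboard_values):
    """ANSI (dynamic) contrast from the irradiance values of the 16
    rectangles of a 4x4 checkerboard: mean of the 8 white areas divided
    by the mean of the 8 black areas (illustrative sketch)."""
    board = np.asarray(checkerboard_values, dtype=float).reshape(4, 4)
    # Alternating checker pattern: (row + col) even marks a white area.
    white_mask = (np.indices((4, 4)).sum(axis=0) % 2 == 0)
    return board[white_mask].mean() / board[~white_mask].mean()
```

The ANSI method conventionally uses a 16-rectangle (4×4) checkerboard and divides the average irradiance of the eight white rectangles by that of the eight black rectangles.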
- the present application provides a method for calibrating an exposure surface of an optical system, and the method for calibrating an exposure surface of an optical system may include:
- The compensation parameters are used to perform mask compensation on the light projection image emitted by the optical system, to obtain a printed image with uniform irradiance values on the exposure surface.
- The present application provides an exposure surface calibration device for an optical system, which may include:
- an image acquisition unit configured to acquire the grayscale distribution image generated by the photographing module on the exposure surface of the optical system;
- a fitting unit configured to divide the grayscale distribution image into a grid image comprising a plurality of segmented regions, and to calculate the fitted grayscale value of each segmented region;
- a selection unit configured to select the minimum fitted grayscale value among all the calculated fitted grayscale values as the reference grayscale value, and to calculate the grayscale compensation coefficients corresponding to the other segmented regions from the reference grayscale value, so as to generate a digital mask;
- a mask compensation unit configured to use the digital mask to perform mask compensation on the projected light image emitted by the optical system, so as to obtain a printed image with a uniform irradiance value on the exposure surface.
- The present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
- When the processor executes the computer program, the above-described method for calibrating the exposure surface of the optical system and the above-described calibration measurement method for 3D printing are implemented.
- the present application provides a computer-readable storage medium, on which a computer program is stored.
- When the computer program is executed by a processor, the above-described method for calibrating the exposure surface of the optical system and the above-described calibration measurement method for 3D printing are implemented.
- The present application provides a computer program product including a computer program; when the computer program is executed by a processor, the above-described exposure surface calibration method of the optical system and the above-described calibration measurement method for 3D printing are implemented.
- the application provides an exposure surface calibration method, device, computer equipment, and storage medium of an optical system.
- The method includes: using a reference light source to perform flat-field correction on the photographing module; acquiring the grayscale distribution image generated by the photographing module on the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented regions, and calculating the fitted grayscale value of each segmented region; selecting the minimum fitted grayscale value among all calculated fitted grayscale values as the reference grayscale value, and calculating the grayscale compensation coefficients corresponding to the other segmented regions from the reference grayscale value to generate a digital mask; and using the digital mask to perform mask compensation on the light projection image emitted by the optical system, to obtain a printed image with uniform irradiance values on the exposure surface.
- By acquiring the full pixel-level distribution of the irradiance values of the exposure surface instead of a low-density discrete-point distribution, and then applying the corresponding grayscale compensation, this application can effectively improve the calibration accuracy and calibration efficiency of the exposure surface of the optical system.
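The pipeline summarized above (grid division → fitted gray value per region → minimum as reference → per-region compensation coefficients → digital mask → mask compensation) might be sketched as follows. This is a minimal sketch under stated assumptions: the per-region "fit" is a simple mean, image dimensions are assumed divisible by the grid size, and all names are hypothetical:

```python
import numpy as np

def build_digital_mask(gray_image, rows, cols):
    """Divide the captured grayscale distribution image into rows x cols
    segmented regions, take each region's mean as its fitted gray value,
    use the minimum fitted value as the reference, and expand the
    per-region compensation coefficients (reference / fitted) back to a
    pixel-level digital mask. Assumes h % rows == 0 and w % cols == 0."""
    h, w = gray_image.shape
    fitted = np.array([
        [gray_image[r * h // rows:(r + 1) * h // rows,
                    c * w // cols:(c + 1) * w // cols].mean()
         for c in range(cols)]
        for r in range(rows)
    ])
    reference = fitted.min()      # minimum fitted gray value as reference
    coeff = reference / fitted    # per-region compensation coefficient, <= 1
    # Expand per-region coefficients into a full-resolution mask.
    return np.kron(coeff, np.ones((h // rows, w // cols)))

def apply_mask(projection_gray, mask):
    """Mask compensation: attenuate brighter regions toward the reference."""
    return np.clip(projection_gray * mask, 0, 255)
```

With the minimum fitted value as reference, every coefficient is at most 1, so compensation only dims regions, consistent with projecting at maximum brightness.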
- FIG. 1 is a schematic flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 2 is a schematic flowchart of a method for calibrating an exposure surface of an optical system provided by another embodiment of the present application;
- FIG. 3 is a schematic sub-flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 4 is another schematic flowchart of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 5 is an example schematic diagram of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 6 is another example schematic diagram of a method for calibrating an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 7 is a schematic flowchart of a 3D printing calibration measurement method provided by an embodiment of the present application;
- FIG. 8 is a schematic block diagram of a calibration device for an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 9 is a sub-schematic block diagram of a calibration device for an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 10 is another schematic block diagram of a calibration device for an exposure surface of an optical system provided by an embodiment of the present application;
- FIG. 11 is a schematic block diagram of an optical system provided by an embodiment of the present application.
- FIG. 1 is a schematic flowchart of a method for calibrating an exposure surface of an optical system according to an embodiment of the present application; the method specifically includes steps S101 to S105.
- The fitted gray value may refer to the value obtained by mapping the gray distribution within a region. Among all the calculated fitted gray values, a predetermined fitted gray value is selected as the reference gray value.
- The predetermined fitted gray value can be adjusted according to practical experience with the optical system, and may be any of the calculated fitted gray values, as required.
- For example, the predetermined fitted gray value may equal the minimum fitted gray value × 50% + the maximum fitted gray value × 50%.
- Alternatively, the predetermined fitted gray value may equal the minimum fitted gray value × 75% + the maximum fitted gray value × 25%. Other values may also be used as the reference gray value.
- In this embodiment, the minimum fitted gray value may be selected as the reference gray value, so that the light engine projects at maximum brightness when performing light-uniformity calibration.
- If the reference gray value is not the minimum fitted gray value, the brightness of the light engine must be reduced during calibration so that points below the reference value can also have their gray values raised, thereby achieving uniform gray-value calibration of the exposure surface.
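The reference-gray-value choices discussed above (the minimum, a 50/50 blend, or a 75/25 blend of minimum and maximum) reduce to one small weighted formula; the helper below is a hypothetical illustration:

```python
def reference_gray(fitted_values, w_min=1.0):
    """Weighted reference gray value. w_min=1.0 reproduces this
    embodiment's minimum-value choice; w_min=0.5 and w_min=0.75 give the
    50/50 and 75/25 blends of minimum and maximum fitted gray values."""
    lo, hi = min(fitted_values), max(fitted_values)
    return w_min * lo + (1.0 - w_min) * hi
```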
- FIG. 2 is a schematic flowchart of an exposure surface calibration method for an optical system provided by another embodiment of the present application; specifically, it may include steps S202 to S205.
- A light guide film on the exposure surface receives the full-format white image projected or displayed by the optical system, i.e., the projected light image. For an n-bit image, the full-format white image uses gray level 2^n − 1; taking an 8-bit image as an example, gray level 255 of the 0-255 range is used.
- the imaging surface of the full white image can be seen from the back of the light guide film, which is the printing surface of the printer.
- the reference light source in this embodiment is a reference high-uniform surface light source in the flat-field correction technology, that is, before S202, a step is also included: S201, using the reference light source to perform flat-field correction on the camera module.
- the calibration accuracy and calibration efficiency for the exposure surface of the optical system can be effectively improved.
- the non-contact overall measurement is realized through the shooting module, avoiding multi-step repeated operations and saving measurement time.
- the calibration accuracy of this embodiment can also be adjusted according to the actual situation without adding additional operation steps.
- the irradiance value mentioned in this application can also be represented by another physical quantity, light intensity, and the two can be converted into each other.
- a method for calibrating an exposure surface of an optical system including:
- The compensation parameters are used to perform mask compensation on the light projection image emitted by the optical system, to obtain a printed image with uniform irradiance values on the exposure surface.
- the image information may be gray scale, brightness and other similar physical quantities.
- the image information distribution image is a grayscale information distribution image or a brightness information distribution image, etc.
- the compensation parameter may be a compensation coefficient or a compensation value.
- the mapped image information value is obtained by mapping the gray distribution in the grid, which can be obtained by fitting or interpolation. Fitting algorithms include, but are not limited to, least squares, polynomial fitting, and cubic spline fitting.
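For one grid region, the least-squares option mentioned above might look like the sketch below: a degree-2 polynomial surface is fitted to the region's pixel grays and evaluated at the region centre to yield the mapped value. The function name and the choice of a degree-2 surface are assumptions, not the patent's implementation:

```python
import numpy as np

def mapped_value_polyfit(region):
    """Map a region's gray distribution to one value by least-squares
    fitting a degree-2 polynomial surface g(x, y) and evaluating it at
    the region centre (illustrative sketch)."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix for the surface terms: 1, x, y, x^2, x*y, y^2.
    A = np.stack([np.ones(h * w), xs.ravel(), ys.ravel(),
                  xs.ravel() ** 2, xs.ravel() * ys.ravel(),
                  ys.ravel() ** 2], axis=1)
    coef, *_ = np.linalg.lstsq(A, region.ravel(), rcond=None)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    return float(coef @ np.array([1.0, cx, cy, cx ** 2, cx * cy, cy ** 2]))
```

A simple mean, median, or cubic-spline interpolation could be substituted here, per the options the text lists.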
- a digital mask can be generated through the compensation parameters, and the digital mask can be used to perform mask compensation on the projected image, so as to obtain a printed image with uniform irradiance value on the exposure surface.
- Before the step of acquiring the distribution image of the image information generated by the photographing module on the exposure surface of the optical system, the method further includes: using a reference light source to perform flat-field correction on the photographing module.
- step S201 may include:
- the reference light source projects an exposure surface with a uniform irradiance value according to the preset gray value
- the shooting module shoots the exposure surface of the reference light source to obtain the image of the reference light source
- the flat-field correction is performed on the shooting module according to the gray scale correction coefficient of each pixel unit.
- the gray value is set for the reference light source (ie, the reference high-uniform surface light source), so as to project an exposure surface with a uniform irradiance value.
- the reference light source is captured by the photographing module, so that each pixel unit in the photosensitive chip can receive light of the same energy at the same time.
- The reference light source is captured globally by the shooting module; that is, the uniform surface light source is larger than the imaging range of the shooting module, so that the entire photosensitive chip in the shooting module receives the light source's image through the imaging objective lens.
- the preset grayscale value may be 2 n ⁇ 1, and if the grayscale of an 8-bit image is used, the preset grayscale value is 255.
- When the shooting module shoots a standard high-uniformity surface light source, the brightness detected at the center and edges of the photosensitive chip differs, and this difference appears as varying gray levels in the captured image.
- Flat-field correction compares the grayscale values sensed by the photosensitive chip with the known reference high-uniformity surface light source. Taking 8-bit grayscale as an example, the grayscale value of the reference light source is set to 255, from which the grayscale compensation value of each pixel of the shooting module can be obtained. After the module is corrected with these compensation values, shooting the reference high-uniformity surface light source yields the true uniformity distribution of the light source, and the flat-field correction of the shooting module is complete.
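The per-pixel comparison described above can be sketched as follows. This is illustrative only; a real flat-field correction would typically also average several frames and subtract dark offset, which this sketch omits:

```python
import numpy as np

def flat_field_coefficients(reference_capture, reference_gray=255.0):
    """Per-pixel flat-field coefficients. The reference light source is
    assumed perfectly uniform at `reference_gray`; each pixel's
    coefficient rescales its measured response to that level."""
    return reference_gray / reference_capture.astype(float)

def flat_field_correct(raw, coeff, max_gray=255.0):
    """Apply the per-pixel coefficients to a raw capture."""
    return np.clip(raw.astype(float) * coeff, 0, max_gray)
```

Correcting the reference capture with its own coefficients reproduces the uniform reference level, which is the sanity check the text describes.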
- the shooting module shoots the exposure surface of the reference light source to obtain the image of the reference light source, including:
- the image capturing surface of the shooting module is divided into several image capturing sub-regions;
- a number of reference light source sub-images are spliced to obtain a reference light source image.
- The image-taking surface of the camera module is divided according to the exposure size of the reference light source; the reference light source is then moved so as to project into each divided imaging sub-region in turn, obtaining the corresponding reference light source sub-images.
- A smaller, highly uniform surface light source is used as the reference light source, and its length and width are set to 1/p of the length and 1/q of the width of the image-taking surface, respectively, where the image-taking surface is conjugate with the imaging surface; that is, for an image-taking surface divided into p*q regions, there are p*q one-to-one mapped regions on the imaging surface. Both p and q are integers.
- the p*q regions are spliced region by region into a uniform imaging surface, and the flat field correction is completed on the shooting module by using the uniform imaging surface.
- As shown in FIG. 5 and FIG. 6, where FIG. 6 is the conjugate of FIG. 5: place the reference light source in area 1, whose corresponding imaging area is 1'; move the reference light source to area 2, whose corresponding imaging area is 2'; and continue moving the same reference light source area by area.
- In this way the reference light source images all p*q areas. After area-by-area splicing, a uniform imaging surface covering the photosensitive chip is generated on the imaging surface, and through this imaging surface the camera module can be flat-field corrected.
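The area-by-area splicing step might be sketched as below, assuming the p*q sub-images are captured in row-major order (function and variable names are hypothetical):

```python
import numpy as np

def stitch_reference(sub_images, p, q):
    """Splice p*q reference-light-source sub-images (row-major order,
    all the same shape) into one uniform imaging surface."""
    rows = [np.hstack(sub_images[r * q:(r + 1) * q]) for r in range(p)]
    return np.vstack(rows)
```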
- An optical system is an imaging system that can produce a clear image that is completely similar to an object.
- a beam in which all rays or their extensions intersect at the same point is called a concentric beam.
- After an incident concentric beam is transformed by the optical system, the outgoing beam must also be a concentric beam.
- the intersection points of the incoming and outgoing concentric beams are called the object point and the image point, respectively.
- An ideal optical system has the following properties: (1) all rays that intersect at an object point, after passing through the optical system, intersect at the image point, and vice versa; such interchangeable object-image point pairs are called conjugate points; (2) each straight line in object space corresponds to a straight line in image space, called its conjugate line, and corresponding planes are called conjugate planes; (3) for an object plane perpendicular to the optical axis, the conjugate plane is also perpendicular to the optical axis; (4) on a pair of conjugate planes perpendicular to the optical axis, the lateral magnification is constant.
- The method for calibrating the exposure surface of the optical system may further include steps S301 to S303.
- the irradiance of the optical system can also be monitored to know the aging state of the optical system.
- hardware parameters of the projected image are first set, and the hardware parameters may include the exposure of the 3D printer, the gain of the camera module, and image processing conditions.
- The exposure of the photographing module is fixed, and the fixed exposure keeps the peak grayscale of the projected light image below 2^n − 1 (255 for an 8-bit image).
- the grayscale calibration of the projected light image is carried out to obtain the coordinate points corresponding to the irradiance value of the captured image grayscale.
- the irradiance value measuring device can be used to measure different irradiance values, so as to adjust the irradiance value of the projected light image, and then obtain the corresponding gray value through the shooting module.
- the coordinate points of the irradiance value can be fitted according to the adjustment results (that is, different irradiance values and corresponding gray values) to generate a relationship curve between the gray value and the irradiance value.
- the shooting module reads the gray scale of the projected light image, it can determine the corresponding irradiance value according to the relationship curve.
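The fitted gray-to-irradiance relationship curve described above can be sketched with a least-squares polynomial fit, for example; the calibration pairs below are illustrative placeholders rather than measured values, and numpy is assumed available.

```python
import numpy as np

# Hypothetical calibration pairs measured once at the fixed exposure:
# gray value read by the shooting module vs. irradiance from the
# irradiance value measuring device (units illustrative, e.g. mW/cm^2).
gray_samples = np.array([30.0, 60.0, 100.0, 150.0, 200.0, 240.0])
irradiance_samples = np.array([0.5, 1.1, 2.0, 3.1, 4.2, 5.0])

# Least-squares polynomial fit of the gray -> irradiance relationship.
coeffs = np.polyfit(gray_samples, irradiance_samples, deg=2)

def gray_to_irradiance(gray):
    """Convert a gray value read at the same fixed exposure into the
    corresponding irradiance value via the fitted relationship curve."""
    return float(np.polyval(coeffs, gray))
```

Once fitted, the shooting module alone suffices: reading the grayscale of a projected light image and evaluating the curve yields the irradiance value without the measuring device.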
- the exposure surface calibration method of the optical system may also include:
- a second relationship between the radiation control parameters of the optical system and the radiation data is obtained; the second relationship is used for adjusting the radiation data during the 3D printing process.
- the irradiation control parameter is used to adjust irradiation data (such as irradiation brightness, optical power, etc.), for example, may be related parameters such as current, or may be input brightness, input voltage, input electric power, etc.
- Image information is used to reflect the characteristics of the image and can be expressed in the form of a matrix, e.g. grayscale, brightness and other related parameters, where the grayscale can be the average, median, or total grayscale value of all pixels in a certain area, etc.
- the irradiance data is used to characterize the electromagnetic radiation-related information of the light projection image, for example, parameters such as irradiance intensity, light intensity, illuminance, or optical power.
- the maximum light projection area of the reference light source is not smaller than the light projection area of the optical system, so as to ensure that each segmented area can find a position corresponding to the mapping relationship.
- the following method can be used to obtain the radiation control parameters of the optical system and the corresponding image information: input each preset current value to the optical system, and obtain the grayscale value of the projected light image emitted by the optical system at each preset current value, thereby obtaining the grayscale value corresponding to each preset current value.
- Obtaining the first relationship between the image information of the optical system and the radiation data may include: obtaining the third relationship between the image information of the reference light source and the image information of the optical system, and the fourth relationship between the image information of the reference light source and the radiation data of the reference light source; the first relationship is obtained based on at least the third relationship and the fourth relationship. The third relationship covers the cases where the image information of the reference light source is consistent with, or deviates from, the image information of the optical system.
- the third relationship can be obtained by the following means: obtain the fifth relationship between the grayscale of the optical system and the radiation data, and the sixth relationship between the grayscale of the reference light source and the radiation data; based on the fifth relationship and the sixth relationship, obtain the third relationship between the grayscale of the reference light source and the grayscale of the optical system.
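One possible way to compose the fifth and sixth relationships into the third is by interpolating through the common irradiance axis; the lookup tables below are hypothetical calibration values, and numpy is assumed available.

```python
import numpy as np

# Fifth relationship: optical-system gray vs. irradiance (illustrative).
opt_gray = np.array([50.0, 100.0, 150.0, 200.0])
opt_irr  = np.array([1.0, 2.0, 3.0, 4.0])

# Sixth relationship: reference-light-source gray vs. irradiance.
ref_gray = np.array([40.0, 90.0, 140.0, 190.0])
ref_irr  = np.array([1.0, 2.0, 3.0, 4.0])

def optical_to_reference_gray(g):
    """Third relationship: map an optical-system gray value to the
    reference-source gray that corresponds to the same irradiance."""
    irr = np.interp(g, opt_gray, opt_irr)            # gray -> irradiance
    return float(np.interp(irr, ref_irr, ref_gray))  # irradiance -> gray
```

With these tables, an optical-system gray of 100 maps through an irradiance of 2.0 to a reference-source gray of 90, i.e. the two gray scales are tied together through the shared irradiance data.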
- the fourth relationship may be in the form of a mapping table or a fitted curve, which is not specifically limited here. Taking the gray value as an example of the image information: by adjusting the irradiance data of the exposure surface of the reference light source and using the shooting module to obtain the corresponding gray value under each irradiance datum, the gray value corresponding to any irradiance value can be obtained, so that the relationship between the gray value of the reference light source and the irradiation data can be generated.
- It should be noted that the above first relationship can be obtained not only from the third and fourth relationships, but also further from the relationship between the initial irradiation control parameters of the optical system and the irradiation data, or from relationships between other parameters of the optical system.
- when the image information of the reference light source and the image information of the optical system can be determined to be consistent, the first relationship between the image information and the radiation data of the optical system can be obtained based on the third relationship between the image information of the reference light source and the image information of the optical system, and the fourth relationship between the image information of the reference light source and the radiation data of the reference light source.
- the image information of the reference light source deviates from the image information of the optical system.
- the deviation can be obtained in the following way: obtain the relationship between the grayscale of the optical system and the radiation data, and the relationship between the grayscale of the reference light source and the radiation data; based on these two relationships, the seventh relationship between the grayscale of the reference light source and the grayscale of the optical system is obtained.
- the first relationship can be obtained based on the seventh relationship, the third relationship, and the fourth relationship.
- the following takes grayscale as the image information and current as the radiation control parameter for further description.
- the image information of the reference light source and the image information of the optical system can be determined to be consistent
- Each preset current value is input into the optical system, and the grayscale value of the light projection image emitted by the optical system based on each preset current value is obtained, and the grayscale value corresponding to each preset current value is obtained. Adjust the irradiance data of the exposure surface of the reference light source, and use the shooting module to obtain the corresponding gray value;
- the irradiation data may be one of irradiation intensity, power, light intensity and other related values.
- the shooting module can be used to obtain any irradiance datum and its corresponding gray value, so that the eighth relationship between the gray value and the irradiation data can be generated.
- the eighth relationship may be in the form of a mapping table or in the form of a fitting curve, which is not specifically limited here. Any fitting algorithm in the field can be used to fit the irradiation data and the corresponding gray value, such as any one of least squares method, polynomial fitting algorithm and cubic spline fitting algorithm.
- Each preset current value is input to the optical system, and the preset current value can be determined according to the optical system.
- the current value sent to the optical system can be controlled by the host computer, and the magnitude of the current affects the irradiance data and grayscale of the light projection image.
- the optical system projects light projection images with different grayscale values under each preset current value.
- by using the camera module for grayscale reading, the gray value of the light projection image projected by the optical system under each preset current value can be obtained.
- the irradiation data corresponding to each preset current value can be obtained.
- the ninth relationship may be in the form of a mapping table or in the form of a fitting curve, which is not specifically limited here.
- each preset current value and the irradiation data corresponding to each preset current value may be determined according to the form of the ninth relationship.
- the preset current value and the corresponding irradiation data may be fitted by a fitting algorithm to obtain a fitting curve relationship between the preset current value and the irradiation data.
- the ninth relationship can be used for 3D printing, and the irradiation data of the optical system is calibrated at this time.
- before adjusting the irradiance data of the exposure surface of the reference light source, the method further includes: limiting the exposure of the shooting module.
- the radiation data required for the image of each layer may be different. Therefore, during the printing process, the irradiation data needs to be changed according to the requirements.
- the irradiance data of the light projection image under the same current will change, so it is necessary to calibrate the irradiance data of the optical system.
- an optical power meter is often used to manually collect radiation data point by point.
- for large-format printing, it is often necessary to sample radiation data over a long time, which is not only inefficient but also requires a costly optical power meter.
- the optical system can be automatically and quickly calibrated without the need for an optical power meter, and labor costs are also saved.
- Each preset current value and the corresponding irradiation data are processed to obtain a thirteenth relationship between the current value of the optical system and the irradiation data; wherein, the thirteenth relationship is used for adjusting the irradiation data during the 3D printing process.
- the irradiation data may be one of irradiation intensity, power and other related values.
- the tenth relationship and the eleventh relationship can be pre-measured and stored in the storage unit, and can be called directly when needed, or can be obtained in a way such as the eighth relationship.
- taking the tenth relationship in the form of a fitted curve as an example: the gray value and irradiance data of the optical system can be measured and stored in the storage unit before the optical system leaves the factory, and the gray value and irradiance data can then be fitted by a fitting algorithm to obtain the above tenth relationship.
- the relationship between the grayscale value of the reference light source and the grayscale value of the optical system can be obtained.
- the optical system projects light projection images with different grayscale values under each preset current value.
- the gray value of the light projection image projected by the optical system under each preset current value can be obtained.
- the optical system projects light projection images with different grayscale values under each preset current value, and the corresponding grayscale value in the optical system is obtained by using the camera module for grayscale reading.
- the compensated gray value in the reference light source can be obtained, and the irradiation data corresponding to each preset current value can be further determined based on the eleventh relationship.
- irradiation data corresponding to each preset current value can be obtained.
- the above-mentioned thirteenth relationship can be obtained.
- the tenth relationship, the eleventh relationship, the twelfth relationship and the thirteenth relationship can all be in the form of a mapping table or a relationship of a fitting curve.
- the thirteenth relationship can then be used for 3D printing.
- By obtaining the grayscale relationship between the reference light source and the optical system, the above method accounts for the grayscale difference between the two under the same irradiation data, and can further improve the calibration accuracy of the irradiation data.
- steps S301 and S302 can be performed during flat field correction.
- step S202 may include:
- the imaging plane is photographed by the photographing module after flat-field correction; as long as the overall grayscale value of the photographed imaging plane stays below the maximum grayscale (for a grayscale image ranging from 0 to 255, below 255), the real uniformity distribution of the imaging surface can be obtained, that is, the corresponding pixel-level gray distribution image.
- step S202 may include:
- obtaining the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system includes:
- the first grayscale distribution image is obtained by shooting the first image projected to the optical system;
- the second grayscale distribution image is obtained by shooting the second image projected to the optical system;
- the first grayscale area in the first image corresponds to the second grayscale area in the second image;
- the second grayscale area in the first image corresponds to the first grayscale area in the second image;
- the first gray scale area and the second gray scale area correspond on the gray scale.
- the first gray scale area can be a white area (that is, 255 gray scales)
- the second gray scale area can be black area (i.e. 0 grayscale).
- the white area in the first image corresponds to the black area in the second image, that is, if any position in the first image is a white area, then the corresponding position in the second image is a black area.
- the black area in the first image corresponds to the white area in the second image, that is, if any position in the first image is a black area, then the corresponding position in the second image is a white area.
- the first image is captured by the photographing module to obtain a first grayscale distribution image
- the second image is photographed by the photographing module to obtain a second grayscale distribution image.
- the grayscale distribution image generated by photographing the exposure surface of the optical system can be obtained by superimposing the first grayscale distribution image and the second grayscale distribution image.
- the above method obtains the first grayscale distribution image and the second grayscale distribution image through two projections, which can reduce the grayscale difference between the center and the edge, and improve the accuracy of the grayscale value in the grayscale distribution image.
- the first grayscale area in the first image is spaced from the second grayscale area in the first image; the first grayscale area in the second image is spaced from the second grayscale area in the second image; the first grayscale area in the first image is circular or square; the first grayscale area in the second image is circular or square; that is, the first image and the second image can each be a checkerboard pattern or an evenly distributed dot chart.
- taking the first image and the second image as dot diagrams as an example: the first image may include several light projection areas, and the light projection areas may be squares of the same size. A dot is set in each light projection area, and the diameter of the dot can be preset.
- the white dot does not exist in the adjacent area of any light projection area.
- in the second image, there is no white dot in any of the light projection areas, but there are white dots in the areas adjacent to each light projection area.
- the first image and the second image are superimposed, and the obtained image is an image in which white dots exist in each light projection area.
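A minimal numpy sketch of this superposition, using an idealized complementary checkerboard pair in place of the real captured grayscale distribution images (all values are illustrative):

```python
import numpy as np

# Idealized first capture: 255 wherever the first image is white.
first = np.array([[255, 0, 255, 0],
                  [0, 255, 0, 255],
                  [255, 0, 255, 0],
                  [0, 255, 0, 255]], dtype=np.uint16)

# The second image is complementary: white exactly where the first is black.
second = (255 - first).astype(np.uint16)

# Because the white regions of the two captures never overlap, a plain sum
# covers every position of the exposure surface exactly once.
combined = (first + second).astype(np.uint8)
```

In practice `first` and `second` would be the two photographed grayscale distribution images, so the sum carries the real center-to-edge grayscale variation rather than a uniform value.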
- the lens of the shooting module is also provided with a filter for filtering the influence of ambient light on the shooting module.
- a fitting algorithm is used to calculate the fitting gray value of each segmented area, and the fitting algorithm is a least square method, a polynomial fitting algorithm or a cubic spline fitting algorithm.
- a fitting algorithm when using a fitting algorithm to calculate the fitting gray value of each segmented area, a least square method, a polynomial fitting algorithm, a cubic spline fitting algorithm, or other fitting algorithms may be used.
- when the grayscale distribution image is divided into a grid image including a plurality of divided regions, it can specifically be divided into m rows and n columns to form an m*n grid distribution.
- step S204 may include: steps S401-S403.
- the minimum grayscale fitting value in the grid image is selected as a benchmark, and compared with other fitting grayscale values, the grayscale compensation coefficient can be obtained.
- the gray scale compensation coefficients in the grid image can form a digital mask, after which each projection image is compensated by the digital mask, and a printed image with uniform irradiance value on the exposure surface can be obtained.
- the fitting gray value of the first grid in the grid image is denoted P_11, and so on, with the fitting gray value of the last grid being P_mn, yielding a grayscale array with m*n items.
- the gray value of the required image is the gray value of the image to be projected by the user. When the gray value of the required image is set to a, multiply a by the ratio matrix to obtain the digital mask.
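The ratio-matrix construction of the digital mask described above can be sketched as follows; the function name and the 2x2 grid values are illustrative assumptions, and numpy is assumed available.

```python
import numpy as np

def digital_mask(fitted_gray, required_gray):
    """Digital mask from the fitted gray values of the m*n grid.

    The minimum fitted gray value P_min is the benchmark; the ratio
    matrix P_min / P_ij gives each region's gray compensation
    coefficient, and multiplying by the required image gray value `a`
    yields the mask.
    """
    fitted = np.asarray(fitted_gray, dtype=np.float64)
    ratio = fitted.min() / fitted   # per-region compensation coefficient
    return required_gray * ratio

# Hypothetical 2x2 grid of fitted gray values; the dimmest region (180)
# becomes the benchmark and keeps the full required gray of 255.
fitted = [[200.0, 220.0], [180.0, 240.0]]
mask = digital_mask(fitted, required_gray=255)
```

Brighter regions are attenuated toward the dimmest one, so the compensated projection reaches a uniform irradiance across the exposure surface.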
- a calibration measurement method for 3D printing is also provided.
- the calibration measurement method also includes multiple measurement steps. By measuring multiple parameters of the optical system, it can be judged whether the parameters of the optical system meet the requirements during the 3D printing process, so that a clear image that is completely similar to the object can be produced, thereby achieving more accurate and efficient 3D printing.
- FIG. 7 is a schematic flowchart of a calibration measurement method for 3D printing provided by an embodiment of the present application.
- the calibration measurement method for 3D printing may include: calibrating the exposure surface of the optical system using the exposure surface calibration method 100 according to some embodiments of the present invention; and steps S701 to S705 of the calibration measurement method as follows.
- the exposure surface calibration method 100 of an optical system according to some embodiments of the present invention is described in detail with reference to FIG. 1 to FIG.
- the camera module takes a picture of the resolution test image displayed by the exposure system, uses the CTF and MTF image algorithms to calculate the value of the required spatial resolution, and determines the sharpness of the exposure system.
- for an exposure lens with an electric focus system, it can also provide focus adjustment feedback.
- the camera module uses the above steps to extract the gray level distribution of the exposure surface, and, based on the otherwise continuous distribution of gray levels, if the gray level of an area changes suddenly and falls below the preset threshold, it is determined that the area is dirty;
- the camera module is calibrated at different heights. According to the principle of constant magnification at different object distances, the size and relative distribution of various images on the exposure surface can be tested.
- steps S701 to S705 are only exemplary, and these steps may be performed in different orders to achieve measurement of various parameters, which is not limited in the present application.
- measuring the static contrast and dynamic contrast of the optical system specifically includes the following steps: acquiring the first irradiance value corresponding to the white image and the second irradiance value corresponding to the black image; wherein both the white image and the black image are projected by the calibrated optical system;
- the irradiance value of each area is processed by ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
- the static contrast can be measured based on the exposure surface calibration technique of the present application, which specifically includes the following steps. First, use the light guide film on the exposure surface to receive the full-scale pure white image projected or displayed by the optical system, i.e. the projected image, set to the full-scale white level 2^n − 1 of an n-bit image; taking an 8-bit image as an example, a pure white image at gray level 255 within the 0-255 range can be used. The irradiance value of the above pure white image is measured with the irradiance value measuring device of the optical system to obtain the first irradiance value.
- next, use the light guide film on the exposure surface to receive the full-frame pure black image projected or displayed by the optical system, i.e. the projected image, set to a gray level of 0.
- the irradiance value of the above-mentioned pure black image is measured by using the irradiance value measuring device of the optical system, so as to obtain the second irradiance value.
- the static contrast ratio of the optical system is calculated by the contrast measuring device according to the ratio of the first irradiance value to the second irradiance value.
- the dynamic contrast can be measured based on the exposure surface calibration technique of the present application, which specifically includes the following steps. First, use the light guide film to receive the checkerboard pattern projected or displayed on the exposure surface by the optical system, and use the irradiance value measuring device of the optical system to measure the irradiance value at each point of the checkerboard in sequence, thereby obtaining the irradiance value at each point in the checkerboard. Then, the dynamic contrast of the optical system is calculated by the contrast measuring device using the ANSI contrast calculation method. In this embodiment, the irradiance values are measured by means of machine vision.
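The two contrast figures can be sketched as follows, assuming the checkerboard starts with a bright square at the top-left corner; the function names and all irradiance values are illustrative, and numpy is assumed available.

```python
import numpy as np

def static_contrast(white_irr, black_irr):
    """Static (full on/off) contrast: ratio of the full-white to the
    full-black irradiance value."""
    return white_irr / black_irr

def ansi_contrast(checkerboard):
    """ANSI-style contrast from per-square irradiance values: average of
    the bright squares divided by average of the dark squares, with
    squares alternating starting bright at position (0, 0)."""
    board = np.asarray(checkerboard, dtype=np.float64)
    rows, cols = np.indices(board.shape)
    bright = board[(rows + cols) % 2 == 0]
    dark = board[(rows + cols) % 2 == 1]
    return bright.mean() / dark.mean()

# Hypothetical 4x4 checkerboard irradiance readings (e.g. mW/cm^2):
board = [[4.0, 0.02, 4.2, 0.02],
         [0.02, 4.1, 0.02, 4.1],
         [4.0, 0.02, 4.2, 0.02],
         [0.02, 4.1, 0.02, 4.1]]
```

Averaging over all bright and all dark squares, rather than a single pair, is what makes the dynamic figure robust to spatial non-uniformity.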
- the irradiance value is obtained based on the exposure surface calibration technology provided in this application, which specifically includes the following steps. First, the grayscale calibration of the projected light image is carried out under a fixed exposure to obtain coordinate points mapping image grayscale to irradiance value. After the output irradiance of the irradiation equipment is changed, the irradiance value measuring device measures the different irradiance values while the imaging module obtains the corresponding image gray values. After multiple measurements at different irradiance values, the coordinate points are fitted to generate a grayscale/irradiance curve. Therefore, under the same exposure, the imaging module reads the grayscale of the projected light image, and the grayscale value can be converted into the corresponding irradiance value.
- measuring the clarity of the projected image specifically includes the following steps: controlling the calibrated optical system to project an image to a preset position on the projected light surface, the image including at least one line in the sagittal direction and at least one line in the meridional direction; obtaining the actual gray distribution curve of the projected image, and confirming the CTF value or MTF value corresponding to each preset position according to the actual gray distribution curve and the preset gray distribution curve; and determining the sharpness of the optical system according to the CTF value or MTF value corresponding to each preset position.
- the preset positions may include a center position and four corner positions of the light projection surface.
- CTF: Contrast Transfer Function
- MTF: Modulation Transfer Function
- determining the clarity of the optical system includes: if any CTF value is less than the first set value, or any MTF value is less than the second set value, the clarity of the optical system is determined to be unqualified. That is, if the calculated CTF value at a point is below the set value, that point is judged unclear; the same judgment can likewise be made with the MTF value. If any unclear point exists, the clarity of the lens of the optical-mechanical equipment is considered poor.
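Using the common definition CTF = (I_max − I_min)/(I_max + I_min) applied to the gray profile sampled across the line pattern, the pass/fail check can be sketched as below; the function names and threshold values are illustrative assumptions, and numpy is assumed available.

```python
import numpy as np

def ctf(gray_profile):
    """Contrast transfer function value from the gray profile sampled
    across the projected line pattern at one preset position."""
    profile = np.asarray(gray_profile, dtype=np.float64)
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

def clarity_qualified(profiles, threshold):
    """Clarity is unqualified if the CTF at any preset position falls
    below the set value."""
    return all(ctf(p) >= threshold for p in profiles)
```

For example, a profile alternating between gray 200 and gray 50 gives CTF = 150/250 = 0.6, so it passes a 0.5 threshold but fails a 0.7 threshold.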
- the distance between the lines in the sagittal direction may be N pixels, which is used to determine whether the meridian direction is blurred.
- the distance between the lines in the meridional direction can be N pixels, etc., and is used to determine whether the sagittal direction is blurred.
- the width of the line may be N pixels, where N is a positive integer.
- step S703, detecting whether the optical-mechanical equipment is dirty, specifically includes: if the value of any point on the actual grayscale distribution curve is lower than the lower limit value, and/or the actual grayscale distribution curve has a sudden change, it is determined that dirt is present on the optical system.
- the optical system of the present application performs dirt detection through the following steps.
- the grayscale distribution of the optical-mechanical equipment generally varies continuously across the image. Dirt is considered present if the actual gray level falls below the lower limit value and/or there is a sudden change in the actual grayscale distribution curve of the optical-mechanical equipment.
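The two dirt criteria (a value below the lower limit, or a sudden change between neighbouring samples) can be sketched as follows; the function name and the thresholds are illustrative assumptions, and numpy is assumed available.

```python
import numpy as np

def detect_dirt(gray_profile, lower_limit, max_jump):
    """Flag dirt if any gray value falls below the lower limit, or the
    otherwise continuous gray distribution shows a jump larger than
    `max_jump` between neighbouring samples."""
    profile = np.asarray(gray_profile, dtype=np.float64)
    below = bool((profile < lower_limit).any())
    sudden = bool((np.abs(np.diff(profile)) > max_jump).any())
    return below or sudden
```

A real implementation would run this over the 2D pixel-level gray distribution image rather than a 1D profile, but the criteria are the same.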
- measuring the size of the photographed object specifically includes: calibrating the size corresponding to each pixel on the photographing surface of the camera module, and determining the size of the photographed object according to the number of pixels occupied by the side length of the photographed object; and/or or,
- the dimension measuring device of the optical system of the present application performs dimension measurement by the following method.
- the size corresponding to each pixel on the shooting surface of the camera is pre-calibrated, and according to the number of pixels occupied by the side length of the photographed object in the camera, the size measuring device of the optical system determines the size of the object.
- the size of the photographing surface of the camera is obtained, and the size of the object is determined using a dimension measuring device of the optical system according to the ratio of the side length of the photographed object to the photographing surface.
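Both size-measurement methods described above reduce to simple proportions; a sketch with hypothetical calibration numbers (function names and values are illustrative):

```python
def size_from_pixels(pixel_count, pixel_size_mm):
    """Method 1: pixels occupied by the object's side length times the
    pre-calibrated physical size of one pixel on the shooting surface."""
    return pixel_count * pixel_size_mm

def size_from_ratio(object_pixels, surface_pixels, surface_size_mm):
    """Method 2: ratio of the object's side length to the shooting
    surface, scaled by the known physical size of the surface."""
    return surface_size_mm * object_pixels / surface_pixels
```

With a hypothetical 0.05 mm/pixel calibration, an edge spanning 400 pixels measures 20 mm; the ratio method gives the same answer when the 2000-pixel shooting surface is known to span 100 mm.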
- the optical system of the present application needs to use different camera modules in different detection items.
- the optical system of the present application can select a corresponding camera module for different detection items and execute a corresponding detection process, thereby realizing automatic detection. That is, the current detection item is determined, and the above steps S701-S704 are executed after selecting the corresponding camera module according to the current detection item.
- FIG. 8 is a schematic block diagram of an exposure surface calibration device 800 of an optical system provided in this embodiment, and the device 800 includes:
- the image acquisition unit 802 is configured to acquire a grayscale distribution image generated by photographing the exposure surface of the optical system of the photographing module;
- the fitting unit 803 is configured to divide the grayscale distribution image into a grid image comprising a plurality of divided regions, and calculate the fitted grayscale value of each divided region;
- the selection unit 804 is configured to select the minimum fitting gray value from all the calculated fitting gray values as the reference gray value, and calculate gray compensation coefficients corresponding to other segmented regions according to the reference gray value, to generate a digital mask;
- the mask compensation unit 805 is configured to use a digital mask to perform mask compensation on the projected light image emitted by the optical system, so as to obtain a printed image with a uniform irradiance value on the exposure surface.
- the device 800 further includes a flat-field correction unit 801 configured to perform flat-field correction on the camera module by using a reference light source.
- the flat field correction unit 801 may include:
- the projection unit is configured for the reference light source to project an exposure surface with uniform irradiance value according to the preset gray value;
- the exposure surface photographing unit is configured to be used for the photographing module to photograph the exposure surface of the reference light source to obtain an image of the reference light source;
- the data acquisition unit is configured to acquire the grayscale output value of each pixel unit in the photosensitive chip according to the reference light source image, and to compare the preset grayscale value of the reference light source with the grayscale output value of each pixel unit to obtain the grayscale correction coefficient of each pixel unit;
- the coefficient correction unit is configured to perform flat-field correction on the shooting module according to the gray scale correction coefficient of each pixel unit.
- the exposure surface shooting unit may include:
- the imaging surface division unit is configured to divide the imaging surface of the shooting module into several imaging sub-regions based on the size of the exposure surface of the reference light source;
- the mobile projection unit is configured to move the reference light source, and the reference light source respectively projects on each imaging sub-area to obtain several reference light source sub-images corresponding to each imaging sub-area;
- the image stitching unit is configured to stitch several reference light source sub-images to obtain a reference light source image.
- the exposure surface calibration device 800 of the optical system further includes:
- a limiting unit 901 configured to limit the exposure of the camera module
- the curve generation unit 902 is configured to adjust the irradiance value of the exposure surface of the reference light source, and use the shooting module to obtain the corresponding gray value, so as to fit and generate the relationship curve between the gray value and the irradiance value;
- the grayscale reading unit 903 is configured to use the photographing module to read the grayscale of the projected light image emitted by the optical system based on the relationship curve to obtain the corresponding irradiance value.
- the image acquisition unit 802 may include:
- the adjusting unit is configured to adjust the exposure of the shooting module so that the captured grayscale distribution image is below the maximum grayscale.
- a fitting algorithm is used to calculate the fitting gray value of each segmented area, and the fitting algorithm is a least square method, a polynomial fitting algorithm or a cubic spline fitting algorithm.
- the selecting unit 804 may include:
- the area labeling unit 1001 is configured to label all the fitted gray values as P_11, P_12, ..., P_mn in sequence according to the order of the corresponding segmented areas, obtaining a grayscale array with m*n items;
- the calculation unit 1002 is configured to select the minimum value P_min in the grayscale array as the minimum fitting grayscale value, and to calculate the normalized ratio between the minimum fitting grayscale value and the other data in the grayscale array to obtain a ratio matrix;
- the multiplication unit 1003 is configured to use the ratio contained in the ratio matrix as the corresponding grayscale compensation coefficient, and then multiply the grayscale value of the preset image by each ratio in the ratio matrix to obtain a corresponding digital mask.
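The labeling, ratio, and multiplication steps can be sketched numerically; the grid shape and fitted gray values below are invented for illustration.

```python
import numpy as np

# Hypothetical fitted gray values P11..Pmn for a 3x4 grid of segmented areas
# (brighter at the centre, dimmer at the edges, as a projector typically is).
fitted = np.array([
    [180.0, 200.0, 205.0, 185.0],
    [190.0, 220.0, 228.0, 195.0],
    [178.0, 198.0, 204.0, 182.0],
])

# The minimum fitted gray value is the reference: every other area is scaled
# down to it, so the exposure surface becomes uniform at the dimmest level.
p_min = fitted.min()
ratio_matrix = p_min / fitted            # compensation coefficients, all <= 1

# Apply the coefficients to a uniform preset image (gray value 255).
preset_gray = 255.0
digital_mask = np.round(preset_gray * ratio_matrix).astype(np.uint8)

# After compensation every area ends up at the same effective gray level.
compensated = fitted * ratio_matrix
print(np.allclose(compensated, p_min))   # True
```

Normalizing toward the minimum (rather than the mean or maximum) guarantees no compensation coefficient exceeds 1, so the mask never asks the light engine for more output than it can deliver.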
- the optical system of the present application is an imaging system capable of producing a clear image that faithfully reproduces the object.
- the optical system of the present application may also include one or more additional devices to perform multiple measurement steps. By measuring multiple parameters of the optical system, it can be judged whether each parameter of the optical system meets the requirements during the 3D printing process, making it possible to produce a clear image that faithfully reproduces the object and thereby achieve more accurate and efficient 3D printing.
- FIG. 11 is a schematic block diagram of an optical system provided by an embodiment of the present application.
- the optical system 1100 also includes:
- a contrast measuring device 1101 configured to measure the static contrast and dynamic contrast of the optical system;
- a sharpness measurement device 1102 configured to measure the sharpness of the projected image;
- a dirt measuring device 1103 configured to detect whether there is dirt on the optical-mechanical equipment;
- a size measuring device 1104 configured to measure the size of the photographed object.
- the optical system of the present application can measure the static contrast and dynamic contrast of the optical system through the contrast measurement device 1101 .
- the contrast measurement device 1101 may be configured to: obtain a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, where both the white image and the black image are projected by the calibrated optical system; obtain the static contrast of the optical system according to the first irradiance value and the second irradiance value; obtain the irradiance value of each region in a checkerboard chart, where the checkerboard chart is projected by the calibrated optical system; and process the irradiance values of the regions using the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
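The two contrast figures can be sketched as follows; the irradiance readings, the 4x4 board size, and the convention that even checkerboard cells are white are all assumptions for illustration.

```python
import numpy as np

def static_contrast(white_irr: float, black_irr: float) -> float:
    """Full-on/full-off contrast: white-image irradiance over black-image
    irradiance."""
    return white_irr / black_irr

def ansi_contrast(checker: np.ndarray) -> float:
    """ANSI-style contrast from a checkerboard of measured irradiance
    values: mean of the white patches over mean of the black patches."""
    rows, cols = np.indices(checker.shape)
    white = checker[(rows + cols) % 2 == 0]   # assumed: even cells white
    black = checker[(rows + cols) % 2 == 1]
    return white.mean() / black.mean()

# Hypothetical 4x4 checkerboard readings (mW/cm^2): 2.0 white, 0.02 black.
board = np.where(np.indices((4, 4)).sum(axis=0) % 2 == 0, 2.0, 0.02)

print(static_contrast(2.1, 0.007))   # static contrast, ~300:1
print(ansi_contrast(board))          # dynamic (ANSI) contrast, ~100:1
```

The static figure uses full-field white and black projections, while the checkerboard figure captures in-scene stray light, which is why the two numbers typically differ.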
- the optical system of the present application further includes a sharpness measurement device 1102, and the sharpness measurement device 1102 is used to measure the sharpness of the optical system.
- the sharpness measurement device 1102 may be configured to: control the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction; acquire the actual grayscale distribution curve of the projected image, and confirm the CTF value corresponding to each preset position according to the actual grayscale distribution curve and a preset grayscale distribution curve; and determine the sharpness of the optical system according to the CTF value corresponding to each preset position. It is also configured to determine that the sharpness of the optical system is unqualified if any CTF value is smaller than the set value.
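A minimal sketch of the CTF check at one preset position; the line-pattern gray profiles and the pass threshold `SET_VALUE` are invented, not values from the patent.

```python
import numpy as np

def ctf(gray_curve: np.ndarray) -> float:
    """Contrast transfer function of a projected line pattern: modulation
    (max - min) / (max + min) of the gray distribution curve."""
    g_max, g_min = gray_curve.max(), gray_curve.min()
    return (g_max - g_min) / (g_max + g_min)

# Hypothetical gray profiles sampled across alternating lines: the preset
# (ideal) pattern swings 0..255, the actually projected one only 30..225.
ideal = np.tile([255.0, 0.0], 8)     # preset grayscale distribution curve
actual = np.tile([225.0, 30.0], 8)   # actual grayscale distribution curve

measured = ctf(actual) / ctf(ideal)  # normalise against the preset curve
SET_VALUE = 0.5                      # assumed pass threshold
print("unqualified" if measured < SET_VALUE else "qualified")
```

In practice the same computation is repeated for sagittal and meridional lines at every preset position, and a single sub-threshold CTF value fails the system.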
- the optical system of the present application further includes a dirt measuring device 1103, which can detect whether there is dirt on the optical-mechanical equipment.
- the dirt measuring device 1103 is configured to determine that there is dirt on the optical system when the value of any point on the actual grayscale distribution curve is lower than the lower limit value, and/or the actual grayscale distribution curve contains an abrupt change.
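The two dirt criteria (a point below the lower limit, and/or a sudden jump in the curve) can be sketched as a simple rule; the curves and both thresholds are assumed values.

```python
import numpy as np

def has_dirt(curve: np.ndarray, lower_limit: float, jump_limit: float) -> bool:
    """Flag dirt when any point drops below the lower limit, and/or the
    curve shows a sudden change (large point-to-point jump)."""
    below = bool((curve < lower_limit).any())
    sudden = bool((np.abs(np.diff(curve)) > jump_limit).any())
    return below or sudden

# Hypothetical gray curves sampled along the exposure surface.
clean = np.array([200.0, 202.0, 201.0, 199.0, 200.0, 203.0])
dirty = np.array([200.0, 202.0, 140.0, 199.0, 200.0, 203.0])  # local dip

print(has_dirt(clean, lower_limit=180, jump_limit=30))  # False
print(has_dirt(dirty, lower_limit=180, jump_limit=30))  # True
```

A dust particle or smudge on the light path shows up as exactly such a local dip: it both drops below the limit and produces a sharp discontinuity.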
- the optical system of the present application further includes a size measuring device 1104, which is used to measure the size of the photographed object, so as to realize more accurate 3D printing.
- the size measuring device 1104 is configured to calibrate the size corresponding to each pixel on the shooting surface of the camera module and determine the size of the photographed object according to the number of pixels occupied by the object's side length; and/or to obtain the size of the shooting surface of the camera module and determine the size of the photographed object according to the proportion of the shooting surface occupied by the object's side length.
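Both sizing approaches reduce to simple proportions; every number below (reference lengths, pixel counts, fractions) is an assumed example value.

```python
# Pixel-calibration approach to object sizing, sketched with assumed numbers.

# Calibration: a reference object of known side length spans a known number
# of pixels on the camera module's shooting surface.
REF_LENGTH_MM = 50.0
REF_PIXELS = 1000
mm_per_pixel = REF_LENGTH_MM / REF_PIXELS           # 0.05 mm per pixel

# Method 1: count the pixels the object's side length occupies.
object_pixels = 432
size_by_pixels = object_pixels * mm_per_pixel       # ~21.6 mm

# Method 2: use the side length's proportion of the full shooting surface.
surface_width_mm = 120.0
object_fraction = 0.18                              # side spans 18% of width
size_by_ratio = surface_width_mm * object_fraction  # ~21.6 mm

print(size_by_pixels, size_by_ratio)
```

The pixel-count method needs a one-off per-pixel calibration; the proportion method only needs the physical size of the shooting surface, so the two serve as cross-checks.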
- the optical system of the present application uses the contrast measurement device 1101, the sharpness measurement device 1102, the dirt measurement device 1103 and/or the size measurement device 1104 to perform different detection items.
- different detection items require different camera modules.
- the optical system of the present application can select the corresponding camera module for each detection item and execute the corresponding detection process, thereby realizing automatic detection.
- the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed, the steps provided in the above embodiments can be realized.
- the storage medium may include media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
- the embodiment of the present application also provides a computer device, which may include a memory and a processor.
- a computer program is stored in the memory.
- when the processor invokes the computer program in the memory, the steps provided in the above embodiments can be implemented.
- the computer device may also include components such as network interfaces and power supplies.
- the optical system of the present application is preferably the optical system of a 3D printer, including optical-mechanical equipment based on DLP (Digital Light Processing), LCD (Liquid Crystal Display), LCOS (Liquid Crystal On Silicon), OLED (Organic Light-Emitting Diode), Micro-LED, Mini-LED, liquid crystal projection, etc.
- the application provides an exposure surface calibration method, device, computer equipment, and storage medium for an optical system.
- the method includes: performing flat-field correction on the shooting module using a reference light source; acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing multiple segmented areas and calculating the fitted gray value of each segmented area; selecting the minimum fitted gray value among all calculated fitted gray values as the reference gray value, and calculating the grayscale compensation coefficients corresponding to the other segmented areas according to the reference gray value to generate a digital mask; and using the digital mask to perform mask compensation on the projected light image emitted by the optical system, obtaining a printed image with uniform irradiance values on the exposure surface.
- the calibration accuracy and calibration efficiency of the exposure surface of the optical system can be improved.
- the present application also discloses a calibration measurement method for 3D printing.
- by measuring multiple parameters of the optical system, the calibration measurement method for 3D printing can judge whether the parameters of the optical system meet the requirements during the 3D printing process, so that a clear image faithfully reproducing the object can be produced, thereby realizing more accurate and efficient 3D printing.
- the exposure surface calibration method, calibration measurement method, device, computer equipment and storage medium of the optical system of the present application are reproducible and can be used in various industrial applications.
- the exposure surface calibration method, calibration measurement method, device, computer equipment and storage medium of the optical system of the present application can be used in the technical field of optical systems.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Chemical & Material Sciences (AREA)
- Materials Engineering (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Manufacturing & Machinery (AREA)
- Mechanical Engineering (AREA)
- Optics & Photonics (AREA)
- Health & Medical Sciences (AREA)
- Toxicology (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Exposure And Positioning Against Photoresist Photosensitive Materials (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (27)
- An exposure surface calibration method for an optical system, characterized by comprising: performing flat-field correction on a shooting module using a reference light source; acquiring a grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system; dividing the grayscale distribution image into a grid image containing a plurality of segmented areas, and calculating a fitted gray value of each segmented area; selecting a predetermined fitted gray value from all the calculated fitted gray values as a reference gray value, and calculating grayscale compensation coefficients corresponding to the other segmented areas according to the reference gray value, so as to generate a digital mask; and using the digital mask to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value on the exposure surface.
- The exposure surface calibration method for an optical system according to claim 1, characterized in that selecting a predetermined fitted gray value from all the calculated fitted gray values as the reference gray value comprises: selecting the minimum fitted gray value from all the calculated fitted gray values as the reference gray value.
- The exposure surface calibration method for an optical system according to claim 2, characterized in that acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system comprises: acquiring a first grayscale distribution image and a second grayscale distribution image, wherein the first grayscale distribution image is obtained by photographing a first image projected by the optical system, the second grayscale distribution image is obtained by photographing a second image projected by the optical system, the first grayscale region in the first image corresponds to the second grayscale region in the second image, and the second grayscale region in the first image corresponds to the first grayscale region in the second image; and processing the first grayscale distribution image and the second grayscale distribution image to obtain the grayscale distribution image.
- The exposure surface calibration method for an optical system according to claim 3, characterized in that the first grayscale region in the first image is spaced apart from the second grayscale region in the first image; the first grayscale region in the second image is spaced apart from the second grayscale region in the first image; the first grayscale region in the first image is circular or square; and the first grayscale region in the second image is circular or square.
- The exposure surface calibration method for an optical system according to claim 4, characterized in that the first grayscale region is a white region and the second grayscale region is a black region.
- The exposure surface calibration method for an optical system according to claim 2, characterized in that a filter is arranged on the lens of the shooting module.
- The exposure surface calibration method for an optical system according to any one of claims 1 to 6, characterized in that performing flat-field correction on the shooting module using the reference light source comprises: projecting, by the reference light source, an exposure surface with a uniform irradiance value; photographing, by the shooting module, the exposure surface of the reference light source to obtain a reference light source image; acquiring, according to the reference light source image, the gray output value of each pixel unit in the photosensitive chip, and comparing the preset gray value of the reference light source with the gray output value of each pixel unit to obtain a gray correction coefficient of each pixel unit; and performing flat-field correction on the shooting module according to the gray correction coefficient of each pixel unit.
- The exposure surface calibration method for an optical system according to claim 7, characterized in that photographing, by the shooting module, the exposure surface of the reference light source to obtain the reference light source image comprises: dividing the imaging surface of the shooting module into several imaging sub-areas based on the size of the exposure surface of the reference light source; moving the reference light source so that it projects onto each imaging sub-area respectively, to obtain several reference light source sub-images corresponding to the imaging sub-areas; and stitching the several reference light source sub-images to obtain the reference light source image.
- The exposure surface calibration method for an optical system according to any one of claims 1 to 8, characterized by further comprising: limiting the exposure of the shooting module; adjusting the irradiance value of the exposure surface of the reference light source and using the shooting module to acquire the corresponding gray values, thereby fitting and generating a relationship curve between gray value and irradiance value; and, based on the relationship curve, using the shooting module to read the grayscale of the projected light image emitted by the optical system to obtain the corresponding irradiance value.
- The exposure surface calibration method for an optical system according to any one of claims 1 to 8, characterized by further comprising: acquiring radiation control parameters of the optical system and corresponding image information; acquiring a first relationship between the image information of the optical system and radiation data; and obtaining, based on the first relationship, the radiation control parameters and the corresponding image information, a second relationship between the radiation control parameters and the radiation data of the optical system, the second relationship being used to adjust the radiation data during the 3D printing process.
- The exposure surface calibration method for an optical system according to claim 10, characterized in that acquiring the first relationship between the image information of the optical system and the radiation data comprises: acquiring a third relationship between the image information of the reference light source and the image information of the optical system, and a fourth relationship between the image information of the reference light source and the radiation data of the reference light source; and obtaining the first relationship based at least on the third relationship and the fourth relationship; wherein the third relationship satisfies that the image information of the reference light source is consistent with, or deviates from, the image information of the optical system.
- The exposure surface calibration method for an optical system according to any one of claims 1 to 11, characterized in that acquiring the grayscale distribution image generated by the shooting module photographing the exposure surface of the optical system comprises: adjusting the exposure of the shooting module so that the captured grayscale distribution image is below the maximum gray value.
- The exposure surface calibration method for an optical system according to any one of claims 1 to 10, characterized in that a fitting algorithm is used to calculate the fitted gray value of each segmented area, the fitting algorithm being the least squares method, a polynomial fitting algorithm, or a cubic spline fitting algorithm.
- A calibration measurement method for 3D printing, characterized in that the calibration measurement method comprises: calibrating an optical system using the exposure surface calibration method for an optical system according to any one of claims 1 to 14; acquiring a first irradiance value corresponding to a white image and a second irradiance value corresponding to a black image, wherein both the white image and the black image are projected by the calibrated optical system; obtaining the static contrast of the optical system according to the first irradiance value and the second irradiance value; acquiring the irradiance value of each region in a checkerboard chart, wherein the checkerboard chart is projected by the calibrated optical system; and processing the irradiance values of the regions using the ANSI contrast calculation method to obtain the dynamic contrast of the optical system.
- The calibration measurement method for 3D printing according to claim 15, characterized in that the calibration measurement method further comprises: controlling the calibrated optical system to project an image onto preset positions on the light projection plane, the image including at least one line in the sagittal direction and at least one line in the meridional direction; acquiring the actual grayscale distribution curve of the projected image, and confirming the CTF value or MTF value corresponding to each preset position according to the actual grayscale distribution curve and a preset grayscale distribution curve; and determining the sharpness of the optical system according to the CTF value or MTF value corresponding to each preset position.
- The calibration measurement method for 3D printing according to claim 16, characterized in that determining the sharpness of the optical system according to the CTF value or MTF value corresponding to each preset position comprises: determining that the sharpness of the optical system is unqualified if any CTF value is smaller than a first set value or any MTF value is smaller than a second set value.
- The calibration measurement method for 3D printing according to claim 15 or 16, characterized in that the calibration measurement method further comprises: determining that there is dirt on the optical system if the value of any point on the actual grayscale distribution curve is lower than a lower limit value and/or the actual grayscale distribution curve contains an abrupt change.
- The calibration measurement method for 3D printing according to any one of claims 15 to 18, characterized in that the calibration measurement method further comprises: calibrating the size corresponding to each pixel on the shooting surface of the camera module, and determining the size of the photographed object according to the number of pixels occupied by the side length of the photographed object; and/or acquiring the size of the shooting surface of the camera module, and determining the size of the photographed object according to the proportion of the shooting surface occupied by the side length of the photographed object.
- The calibration measurement method for 3D printing according to any one of claims 15 to 19, characterized by further comprising: determining the current detection item, and selecting a corresponding camera module according to the current detection item.
- An exposure surface calibration method for an optical system, characterized by comprising: acquiring an image information distribution image generated by a shooting module photographing the exposure surface of the optical system; segmenting the image information distribution image, and calculating a mapped image information value of each segmented area; and selecting a reference mapped image information value from the mapped image information values, and calculating compensation parameters corresponding to the other segmented areas according to the reference mapped image information value; wherein the compensation parameters are used to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value on the exposure surface.
- An exposure surface calibration device for an optical system, characterized by comprising: an image acquisition unit configured to acquire a grayscale distribution image generated by a shooting module photographing the exposure surface of the optical system; a fitting unit configured to divide the grayscale distribution image into a grid image containing a plurality of segmented areas and to calculate a fitted gray value of each segmented area; a selecting unit configured to select a predetermined fitted gray value from all the calculated fitted gray values as a reference gray value and to calculate grayscale compensation coefficients corresponding to the other segmented areas according to the reference gray value, so as to generate a digital mask; and a mask compensation unit configured to use the digital mask to perform mask compensation on the projected light image emitted by the optical system, to obtain a printed image with a uniform irradiance value on the exposure surface.
- The exposure surface calibration device for an optical system according to claim 22, characterized in that the selecting unit is further configured to select the minimum fitted gray value from all the calculated fitted gray values as the reference gray value.
- The exposure surface calibration device for an optical system according to claim 22 or 23, characterized by further comprising: a flat-field correction unit configured to perform flat-field correction on the shooting module using a reference light source.
- A computer device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the exposure surface calibration method for an optical system according to any one of claims 1 to 14 and 21 and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
- A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the exposure surface calibration method for an optical system according to any one of claims 1 to 14 and 21 and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
- A computer program product, characterized in that the computer program product comprises a computer program, and the computer program, when executed by a processor, implements the method according to any one of claims 1 to 14 and 21 and the calibration measurement method for 3D printing according to any one of claims 15 to 20.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22845469.0A EP4343682A1 (en) | 2021-07-22 | 2022-07-22 | Exposure surface calibration method and apparatus for optical system, calibration measurement method and apparatus, computer device, and storage medium |
AU2022314858A AU2022314858A1 (en) | 2021-07-22 | 2022-07-22 | Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium |
US18/393,477 US20240131793A1 (en) | 2021-07-22 | 2023-12-21 | Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110832043.6A CN113469918B (zh) | 2021-07-22 | 2021-07-22 | 光学系统的曝光面校准方法、装置、计算机设备及存储介质 |
CN202110832043.6 | 2021-07-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/393,477 Continuation-In-Part US20240131793A1 (en) | 2021-07-22 | 2023-12-21 | Method and Apparatus for Calibrating Exposure Surface of Optical System, Calibration Measurement Method and Apparatus, and Computer Device and Storage Medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023001306A1 true WO2023001306A1 (zh) | 2023-01-26 |
Family
ID=77881983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/107529 WO2023001306A1 (zh) | 2021-07-22 | 2022-07-22 | 光学系统的曝光面校准方法、校准测量方法、装置、计算机设备及存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240131793A1 (zh) |
EP (1) | EP4343682A1 (zh) |
CN (1) | CN113469918B (zh) |
AU (1) | AU2022314858A1 (zh) |
WO (1) | WO2023001306A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116777999A (zh) * | 2023-06-28 | 2023-09-19 | 深圳市度申科技有限公司 | 面阵相机多适应性高级平场校正方法 |
CN117132589A (zh) * | 2023-10-23 | 2023-11-28 | 深圳明锐理想科技有限公司 | 一种条纹图校正方法、光学检测设备及存储介质 |
CN117252141A (zh) * | 2023-11-13 | 2023-12-19 | 西安芯瑞微电子信息技术有限公司 | 一种流体力学求解器电路热仿真方法、装置及存储介质 |
CN117793539A (zh) * | 2024-02-26 | 2024-03-29 | 浙江双元科技股份有限公司 | 一种基于可变周期的图像获取方法及光学传感装置 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469918B (zh) * | 2021-07-22 | 2024-02-02 | 广州黑格智造信息科技有限公司 | 光学系统的曝光面校准方法、装置、计算机设备及存储介质 |
CN114281274A (zh) * | 2021-11-30 | 2022-04-05 | 深圳市纵维立方科技有限公司 | 光亮度均匀性的调节方法、打印方法、打印系统及设备 |
CN114559653B (zh) * | 2022-01-07 | 2024-01-19 | 宁波智造数字科技有限公司 | 利用立方体矩阵的光固化3d打印均匀度调整方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090244329A1 (en) * | 2006-08-11 | 2009-10-01 | Nikkon Corporation | Digital camera and image processing computer program product |
CN209176181U (zh) * | 2018-12-13 | 2019-07-30 | 苏州博理新材料科技有限公司 | 光固化3d打印机投影仪光照校正装置 |
CN113034382A (zh) * | 2021-02-23 | 2021-06-25 | 深圳市创想三维科技有限公司 | 亮度均匀度调节方法、装置、计算机设备和可读存储介质 |
CN113103587A (zh) * | 2021-04-16 | 2021-07-13 | 上海联泰科技股份有限公司 | 3d打印的控制方法、控制系统及3d打印设备 |
CN113469918A (zh) * | 2021-07-22 | 2021-10-01 | 广州黑格智造信息科技有限公司 | 光学系统的曝光面校准方法、装置、计算机设备及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105137720A (zh) * | 2015-09-18 | 2015-12-09 | 中国科学院光电技术研究所 | 基于数字微镜阵列制作不同深度的多台阶光栅的无掩模光刻机 |
CN106127842B (zh) * | 2016-06-15 | 2018-11-02 | 北京工业大学 | 一种结合光源分布与反射特性的面曝光3d打印的方法及系统 |
CN106228598B (zh) * | 2016-07-25 | 2018-11-13 | 北京工业大学 | 一种面向面曝光3d打印的模型自适应光照均匀化方法 |
CN112848301B (zh) * | 2021-01-26 | 2024-02-23 | 深圳市创必得科技有限公司 | Lcd光固化3d打印均光优化补偿方法与装置 |
CN112959662A (zh) * | 2021-01-26 | 2021-06-15 | 深圳市创必得科技有限公司 | Lcd光固化3d打印均光优化补偿装置及方法 |
- 2021
  - 2021-07-22 CN CN202110832043.6A patent/CN113469918B/zh active Active
- 2022
  - 2022-07-22 EP EP22845469.0A patent/EP4343682A1/en active Pending
  - 2022-07-22 WO PCT/CN2022/107529 patent/WO2023001306A1/zh active Application Filing
  - 2022-07-22 AU AU2022314858A patent/AU2022314858A1/en active Pending
- 2023
  - 2023-12-21 US US18/393,477 patent/US20240131793A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090244329A1 (en) * | 2006-08-11 | 2009-10-01 | Nikkon Corporation | Digital camera and image processing computer program product |
CN209176181U (zh) * | 2018-12-13 | 2019-07-30 | 苏州博理新材料科技有限公司 | 光固化3d打印机投影仪光照校正装置 |
CN113034382A (zh) * | 2021-02-23 | 2021-06-25 | 深圳市创想三维科技有限公司 | 亮度均匀度调节方法、装置、计算机设备和可读存储介质 |
CN113103587A (zh) * | 2021-04-16 | 2021-07-13 | 上海联泰科技股份有限公司 | 3d打印的控制方法、控制系统及3d打印设备 |
CN113469918A (zh) * | 2021-07-22 | 2021-10-01 | 广州黑格智造信息科技有限公司 | 光学系统的曝光面校准方法、装置、计算机设备及存储介质 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116777999A (zh) * | 2023-06-28 | 2023-09-19 | 深圳市度申科技有限公司 | 面阵相机多适应性高级平场校正方法 |
CN117132589A (zh) * | 2023-10-23 | 2023-11-28 | 深圳明锐理想科技有限公司 | 一种条纹图校正方法、光学检测设备及存储介质 |
CN117132589B (zh) * | 2023-10-23 | 2024-04-16 | 深圳明锐理想科技股份有限公司 | 一种条纹图校正方法、光学检测设备及存储介质 |
CN117252141A (zh) * | 2023-11-13 | 2023-12-19 | 西安芯瑞微电子信息技术有限公司 | 一种流体力学求解器电路热仿真方法、装置及存储介质 |
CN117252141B (zh) * | 2023-11-13 | 2024-01-30 | 西安芯瑞微电子信息技术有限公司 | 一种流体力学求解器电路热仿真方法、装置及存储介质 |
CN117793539A (zh) * | 2024-02-26 | 2024-03-29 | 浙江双元科技股份有限公司 | 一种基于可变周期的图像获取方法及光学传感装置 |
CN117793539B (zh) * | 2024-02-26 | 2024-05-10 | 浙江双元科技股份有限公司 | 一种基于可变周期的图像获取方法及光学传感装置 |
Also Published As
Publication number | Publication date |
---|---|
US20240131793A1 (en) | 2024-04-25 |
CN113469918A (zh) | 2021-10-01 |
AU2022314858A1 (en) | 2024-01-18 |
CN113469918B (zh) | 2024-02-02 |
EP4343682A1 (en) | 2024-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023001306A1 (zh) | 光学系统的曝光面校准方法、校准测量方法、装置、计算机设备及存储介质 | |
CN102221409B (zh) | 一种近红外标定模板的制备方法 | |
CN107256689B (zh) | Led显示屏亮度校正后的均匀性修复方法 | |
JP2008113416A (ja) | 表示の形状及び色の自動較正及び修正のためのシステム及び方法 | |
US11060848B2 (en) | Measuring device, system, method, and program | |
JP5412757B2 (ja) | 光学系歪補正方法および光学系歪補正装置 | |
CN110533618B (zh) | 一种镜头畸变矫正的方法和照相装置 | |
CN108489423B (zh) | 一种产品表面水平倾斜角度的测量方法及系统 | |
CN103377474A (zh) | 镜头阴影校正系数确定方法、镜头阴影校正方法及装置 | |
CN115265767A (zh) | 照明场非均匀性检测系统的标定和校正方法及装置 | |
CN110108230A (zh) | 基于图像差分与lm迭代的二值光栅投影离焦程度评估方法 | |
CN112929623B (zh) | 一种校正过程中应用于整屏的镜头阴影修复方法及装置 | |
CN113257181B (zh) | Led屏校正图像采集方法、校正方法、采集装置及校正系统 | |
CN108010071B (zh) | 一种利用3d深度测量的亮度分布测量系统及方法 | |
CN113870355A (zh) | 一种相机的平场标定方法、装置及平场标定系统 | |
CN101729739A (zh) | 一种图像纠偏处理方法 | |
CN110300291B (zh) | 确定色彩值的装置和方法、数字相机、应用和计算机设备 | |
CN109813533B (zh) | 一种批量测试doe衍射效率和均匀性的方法及装置 | |
Shafer | Automation and calibration for robot vision systems | |
CN112381896A (zh) | 一种显微图像的亮度校正方法及系统、计算机设备 | |
CN114234846B (zh) | 一种基于双响应曲线拟合的快速非线性补偿方法 | |
CN114071099B (zh) | 一种拖影测量方法、装置、电子设备和可读存储介质 | |
Bedrich et al. | Electroluminescence imaging of PV devices: Uncertainty due to optical and perspective distortion | |
CN117073578B (zh) | 用于条纹投影轮廓术的主动投影非线性Gamma矫正方法 | |
CN111754587B (zh) | 一种基于单焦距聚焦拍摄图像的变焦镜头快速标定方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22845469 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2022845469 Country of ref document: EP Ref document number: 2022314858 Country of ref document: AU Ref document number: AU2022314858 Country of ref document: AU |
ENP | Entry into the national phase |
Ref document number: 2023580988 Country of ref document: JP Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 2022314858 Country of ref document: AU Date of ref document: 20220722 Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 2022845469 Country of ref document: EP Effective date: 20231222 |
NENP | Non-entry into the national phase |
Ref country code: DE |