CN114741812A - Aspheric lens design method based on differential rendering - Google Patents

Aspheric lens design method based on differential rendering

Info

Publication number
CN114741812A
CN114741812A
Authority
CN
China
Prior art keywords
aspheric lens
generated image
rendering
lens
aspheric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210444779.0A
Other languages
Chinese (zh)
Inventor
岳涛 (Yue Tao)
黄志炜 (Huang Zhiwei)
胡雪梅 (Hu Xuemei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202210444779.0A priority Critical patent/CN114741812A/en
Publication of CN114741812A publication Critical patent/CN114741812A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/17 Mechanical parametric or variational design
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012 Optical design, e.g. procedures, algorithms, optimisation routines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30 Polynomial surface description
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The invention discloses a design method for an aspheric lens based on differential rendering. The method comprises the following specific steps: (1) model the aspheric lens with a spherical equation containing a correction factor, then obtain the corresponding three-dimensional model with a Poisson surface reconstruction algorithm; (2) load the three-dimensional model of the aspheric lens into a ray-tracing-based differentiable rendering system and render the generated image of a preset scene after it passes through the aspheric lens; (3) complete the internal computation graph of the differentiable rendering system, establishing a mapping between the generated image and the aspheric lens design parameters; (4) compute a loss function between the generated image and a reference image, and optimize the aspheric lens design parameters by gradient descent. Built on the ideas of ray tracing and gradient-based optimization, the method yields an aspheric lens design approach that does not rely on paraxial optics and is highly extensible.

Description

Aspheric lens design method based on differential rendering
Technical Field
The invention relates to the field of computational photography and computer graphics, in particular to a design method of an aspheric lens based on differential rendering.
Background
In recent years, aspheric lenses have been widely used in products such as camera lenses, spectacles, and optical read/write heads. Their most significant advantage over spherical lenses is that they can correct the spherical aberration a spherical lens introduces in collimating and focusing systems. By adjusting the conic constant and the aspheric coefficients, an aspheric lens can eliminate spherical aberration to the greatest possible extent.
Existing aspheric lens design methods generally rely on optical design software such as ZEMAX and CODE V to optimize the point spread functions corresponding to different regions or depths. Such methods emphasize direct optimization of the point spread function's shape and neglect the lens's application scenario and final imaging quality.
Disclosure of Invention
In view of the above drawbacks of the conventional aspheric lens design method, the present invention provides a design method for an aspheric lens based on differential rendering.
To achieve this aim, the invention adopts the following technical scheme:
a design method of an aspheric lens based on differential rendering comprises the following steps:
step 1, modeling an aspheric lens by using a spherical equation containing a correction factor, and then obtaining a corresponding three-dimensional model by using a Poisson surface reconstruction algorithm, wherein the spherical equation containing the correction factor is used for calculating the spatial coordinates and normal vectors of each sampling point on the aspheric lens surface, and the Poisson surface reconstruction is used for solving the lens surface corresponding to the sampling point;
step 2, loading the three-dimensional model constructed in the step 1 into a differentiable rendering system based on ray tracing, and rendering a generated image of a preset scene after passing through an aspheric lens;
step 3, completing the internal computation graph of the differentiable rendering system, and establishing a mapping relation between the generated image and the design parameters of the aspheric lens;
step 4, calculating a loss function between the generated image and the corresponding reference image, and optimizing the design parameters of the aspheric lens by a gradient descent method.
The method simulates the behavior of light in the real world and its interaction with the imaging system through ray tracing, obtaining the rendering result of a scene after it passes through the aspheric lens; then, exploiting the properties of the differentiable rendering system, it obtains the gradient of the rendering result with respect to the aspheric lens design parameters; finally, it optimizes the design parameters by gradient descent on the mean absolute error between the generated image and the reference image. Compared with existing aspheric lens design methods, this method discards the paraxial approximation and can obtain more realistic and accurate imaging results; it is also highly extensible and can embed a suitable image reconstruction module according to the application scenario.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a training flow diagram of one embodiment.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1 and 2, a method for designing an aspheric lens based on differential rendering according to this embodiment includes the following specific steps:
Step 1, model the aspheric lens with a spherical equation H containing a correction factor, then obtain the corresponding three-dimensional model with a Poisson surface reconstruction algorithm Φ; the former computes the spatial coordinates V and normal vectors N of sampling points on the lens surface, and the latter solves for the lens surface F corresponding to those sampling points.
Given a Cartesian coordinate system (x, y, z) in which the z-axis coincides with the optical axis and (x, y) spans the plane perpendicular to the optical axis, let ρ² = x² + y². The spherical equation with the correction factor is:
z = cρ² / (1 + √(1 − (1 + κ)c²ρ²)) + Σ a_{2i} ρ^{2i}
where c denotes the central curvature of the aspheric lens surface, κ the conic coefficient, and a_{2i} the higher-order coefficients of the correction factor. The Poisson surface reconstruction algorithm is a reconstruction method combining the advantages of global and local matching; its core idea is to convert the discrete sampling-point information of the aspheric lens surface into a continuously integrable surface function, thereby constructing an implicit surface with a high degree of fit.
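As a concrete illustration, the sag computation of the spherical equation with the correction factor can be sketched as follows. This is a minimal NumPy sketch under our own naming assumptions (the function name `sag` and the coefficient list `a` are illustrative choices, not part of the patent):

```python
import numpy as np

def sag(x, y, c, kappa, a):
    """Sag z of the corrected spherical surface at (x, y), with rho^2 = x^2 + y^2.

    c     -- central curvature of the lens surface (1/radius)
    kappa -- conic coefficient
    a     -- iterable of higher-order coefficients a_2, a_4, ... (the correction factor)
    """
    rho2 = x ** 2 + y ** 2
    # conic base term of the spherical equation
    z = c * rho2 / (1.0 + np.sqrt(1.0 - (1.0 + kappa) * c ** 2 * rho2))
    # correction factor: sum of a_{2i} * rho^{2i}
    for i, a2i in enumerate(a, start=1):
        z += a2i * rho2 ** i
    return z
```

For a parabola (κ = −1) the square root becomes 1 and the sag reduces to cρ²/2, a quick sanity check on the formula.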
This embodiment first uniformly samples N_sample points on the entrance pupil plane of diameter D. The spherical equation H with the correction factor then yields the optical-axis coordinate z of each sampling point from its entrance-pupil coordinates (x, y):
z=H(x,y,θ)
Here θ denotes the initial design parameters of the aspheric lens. A normal vector n for each sampling point can then be solved from the implicit equation of the aspheric lens:
f(x,y,z)=H(x,y,θ)-z
n = ∇f / ||∇f||, where ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) = (∂H/∂x, ∂H/∂y, −1)
After these calculations, the spatial coordinates V and normal vectors N of all sampling points are obtained, each of size N_sample × 3.
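The sampling-and-normals procedure above can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions: `H` is any callable sag function, and the surface gradient is taken by central differences rather than analytically:

```python
import numpy as np

def sample_surface(H, D, n_sample, eps=1e-6):
    """Uniformly sample n_sample points on an entrance pupil of diameter D and
    return (V, N): spatial coordinates and unit normals, each of shape (n_sample, 3).

    H(x, y) -> z is the sag function; normals come from the implicit surface
    f(x, y, z) = H(x, y) - z, whose gradient is (dH/dx, dH/dy, -1), approximated
    here by central finite differences.
    """
    rng = np.random.default_rng(0)
    # rejection-sample uniform points inside the pupil disc of radius D/2
    pts = []
    while len(pts) < n_sample:
        x, y = rng.uniform(-D / 2, D / 2, size=2)
        if x * x + y * y <= (D / 2) ** 2:
            pts.append((x, y))
    V, N = [], []
    for x, y in pts:
        z = H(x, y)
        gx = (H(x + eps, y) - H(x - eps, y)) / (2 * eps)
        gy = (H(x, y + eps) - H(x, y - eps)) / (2 * eps)
        n = np.array([gx, gy, -1.0])
        V.append([x, y, z])
        N.append(n / np.linalg.norm(n))
    return np.asarray(V), np.asarray(N)
```

For a flat surface H ≡ 0, every normal comes out as (0, 0, −1), which matches the gradient formula.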
A Poisson surface reconstruction algorithm Φ then solves for the lens surface F, composed of triangular patches, from the computed sampling-point information:
F=Φ(V,N)
The Poisson surface reconstruction algorithm is prior art and is not described in detail here. Finally, the three-dimensional model of the aspheric lens, composed of {V, N, F}, is saved in the OBJ geometry file format for convenient reading and writing by subsequent programs.
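Saving the {V, N, F} model as OBJ is straightforward; a minimal hypothetical writer (names of our own choosing, not the patent's code) might look like:

```python
def write_obj(path, V, N, F):
    """Write vertices V and per-vertex normals N (both sequences of (x, y, z))
    plus triangular faces F (triples of zero-based vertex indices) as Wavefront OBJ.
    OBJ indices are 1-based, hence the +1 below; `v//vn` pairs position with normal."""
    with open(path, "w") as fh:
        for x, y, z in V:
            fh.write(f"v {x:.8f} {y:.8f} {z:.8f}\n")
        for nx, ny, nz in N:
            fh.write(f"vn {nx:.8f} {ny:.8f} {nz:.8f}\n")
        for i, j, k in F:
            fh.write(f"f {i+1}//{i+1} {j+1}//{j+1} {k+1}//{k+1}\n")
```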
Step 2, load the three-dimensional model constructed in step 1 into the ray-tracing-based differentiable rendering system, and render the generated image of a preset scene after it passes through the aspheric lens.
First, several RGB images of size 2048 × 1080 are selected from the DIV2K dataset and randomly cropped into multiple 128 × 128 RGB patches to serve as reference images.
Then a three-dimensional preset scene is built in the differentiable rendering system: a rectangular board textured with the RGB image is placed at the origin of the spatial coordinate system, two area light sources serve as the scene's main lights, and the image formed on the sensor after the board's texture passes through the aspheric lens is the output result.
The differentiable rendering system includes: an aspheric lens module, which reads the three-dimensional model of the aspheric lens and places it between the preset scene and the sensor; and a pixel shading module, which emits rays from the sensor pixels and traces their paths through the preset scene to compute the corresponding pixel colors, rendering the generated image of the preset scene after it passes through the aspheric lens. The system accounts for the reflection and refraction rays may undergo during propagation, so it can render accurate shadows and recursive reflections and refractions. This working principle frees the design method of this embodiment from the paraxial approximation and lets it consider on-axis and off-axis aberrations simultaneously.
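The refraction that the pixel shading module must handle at each lens surface can be illustrated with the vector form of Snell's law. This is a generic sketch, not the patent's actual renderer code:

```python
import numpy as np

def refract(d, n, eta):
    """Vector Snell's law: refract unit direction d at a surface with unit
    normal n (pointing against d), with eta = n1/n2 the ratio of refractive
    indices. Returns the refracted direction, or None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n
```

At normal incidence the transmitted ray continues straight through, independent of eta, which is a convenient correctness check.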
Here, the image information I generated by the sensor may be specifically expressed as:
I(x,y)=∫Q(λ)·[p(x,y,d,λ)*s(x,y,d)]dλ+n(x,y)
the point spread function p (x, y, d, λ) is a function of the spatial location (x, y) on the sensor, d is the depth of the scene, and the incident spectral distribution. Q is the sensor spectral response value, and s (x, y, d) and (x, y) represent the implicit representation of the scene and the metric noise, respectively. Operator denotes convolution.
Step 3, complete the internal computation graph of the differentiable rendering system, establishing a mapping between the rendering result and lens design parameters such as the curvature and the conic coefficient; specifically:
First, analysis of the computation graph built by the aspheric lens module and the pixel shading module in the system shows that the graph only exposes the gradient of the generated image I with respect to each vertex position V on the aspheric lens:
∂I/∂V
Then, from the spherical equation with the correction factor in step 1, a mapping is established between the aspheric lens vertex positions V and the design parameters θ:
V=H(θ)
θ={c,κ}
An indirect mapping between the generated image and the aspheric lens design parameters then follows from the chain rule:
∂I/∂θ = (∂I/∂V) · (∂V/∂θ)
This realizes gradient backpropagation from the generated image to the aspheric lens design parameters, so the lens design parameters can be updated continually across iterations.
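The chain-rule composition ∂I/∂θ = (∂I/∂V)(∂V/∂θ) can be mimicked numerically as a check. In this sketch both Jacobian factors are taken by central finite differences; a true differentiable renderer supplies ∂I/∂V analytically from its computation graph (the function names are our own):

```python
import numpy as np

def grad_image_wrt_params(render, H, theta, eps=1e-6):
    """Chain-rule sketch of dI/dtheta = (dI/dV) @ (dV/dtheta).

    render(V) -> flattened image I; H(theta) -> flattened vertex positions V.
    Returns a (len(I), len(theta)) Jacobian built from central differences.
    """
    V0 = H(theta)
    I0 = render(V0)
    # dI/dV: perturb each vertex coordinate in turn
    dI_dV = np.zeros((I0.size, V0.size))
    for j in range(V0.size):
        Vp, Vm = V0.copy(), V0.copy()
        Vp[j] += eps; Vm[j] -= eps
        dI_dV[:, j] = (render(Vp) - render(Vm)) / (2 * eps)
    # dV/dtheta: perturb each design parameter in turn
    dV_dth = np.zeros((V0.size, theta.size))
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps; tm[j] -= eps
        dV_dth[:, j] = (H(tp) - H(tm)) / (2 * eps)
    return dI_dV @ dV_dth
```

For linear maps V = 2θ and I = 3V the composed Jacobian is 6 times the identity, which the finite-difference version recovers to numerical precision.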
Step 4, compute the loss function between the generated image and the reference image in the image comparison module, then compute the gradient of the loss with respect to the aspheric lens design parameters in the parameter update module, and finally iteratively optimize the design parameters by gradient descent. The loss function used in this embodiment is the L1 loss, i.e. the mean absolute error:
LOSS = ||I − I_ref||₁
where I is the image of the scene formed on the sensor after passing through the lens and I_ref is the corresponding reference image.
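The optimization loop of step 4 can be sketched as follows. Illustrative only: the gradient here is taken by finite differences for self-containedness, whereas the patent's system obtains it from the differentiable renderer via the chain rule; the function names are our own:

```python
import numpy as np

def optimize_lens(render, theta0, I_ref, lr=1e-2, n_iter=100, eps=1e-6):
    """Gradient descent on the L1 loss ||I - I_ref||_1 over design parameters.

    render(theta) -> generated image I (any array shape matching I_ref).
    The gradient is approximated by central differences per parameter; a
    differentiable renderer would supply it analytically instead.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_iter):
        grad = np.zeros_like(theta)
        for j in range(theta.size):
            tp, tm = theta.copy(), theta.copy()
            tp[j] += eps; tm[j] -= eps
            lp = np.abs(render(tp) - I_ref).sum()
            lm = np.abs(render(tm) - I_ref).sum()
            grad[j] = (lp - lm) / (2 * eps)
        theta -= lr * grad  # gradient descent step
    return theta
```

On a toy problem where the "image" is just the parameter itself, the loop walks the parameter to the reference value at a rate set by the learning rate.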

Claims (6)

1. A method for designing an aspheric lens based on differential rendering is characterized by comprising the following steps:
step 1, modeling an aspheric lens by using a spherical equation containing a correction factor, and then obtaining a corresponding three-dimensional model by using a Poisson surface reconstruction algorithm, wherein the spherical equation containing the correction factor is used for calculating the spatial coordinates and normal vectors of each sampling point on the aspheric lens surface, and the Poisson surface reconstruction is used for solving the lens surface corresponding to the sampling point;
step 2, loading the three-dimensional model constructed in the step 1 into a differentiable rendering system based on ray tracing, and rendering a generated image of a preset scene after passing through an aspheric lens;
step 3, completing the internal computation graph of the differentiable rendering system, and establishing a mapping relation between the generated image and the design parameters of the aspheric lens;
step 4, calculating a loss function between the generated image and the corresponding reference image, and optimizing the design parameters of the aspheric lens by a gradient descent method.
2. The aspheric lens design method based on differential rendering of claim 1, wherein in step 1, given a Cartesian coordinate system (x, y, z) in which the z-axis coincides with the optical axis and (x, y) spans the plane perpendicular to the optical axis, and with ρ² = x² + y², the spherical equation with the correction factor is:
z = cρ² / (1 + √(1 − (1 + κ)c²ρ²)) + Σ a_{2i} ρ^{2i}
where c denotes the central curvature of the aspheric lens surface, κ denotes the conic coefficient, and a_{2i} denotes the higher-order coefficients of the correction factor.
3. The method as claimed in claim 1, wherein in step 1, the Poisson surface reconstruction algorithm converts the discrete sampling-point information of the aspheric lens surface into a continuously integrable surface function, so as to construct an implicit surface with a high degree of fit.
4. The method as claimed in claim 1, wherein in step 2, the ray-tracing-based differentiable rendering system comprises:
the aspheric lens module is used for reading the three-dimensional model corresponding to the aspheric lens and placing the three-dimensional model between a preset scene and the sensor;
and the pixel coloring module is used for emitting light rays from the pixels of the sensor, tracking the paths of the light rays passing through the preset scene to calculate the color of the corresponding pixels, and rendering the generated image of the preset scene after passing through the aspheric lens.
5. The method of claim 4, wherein in step 3, the internal computation graph of the differentiable rendering system is completed as follows:
First, analysis of the computation graph built by the aspheric lens module and the pixel shading module in the system shows that the graph only exposes the gradient of the generated image I with respect to each vertex position V on the aspheric lens:
∂I/∂V
Then, from the spherical equation with the correction factor in step 1, a mapping is established between the aspheric lens vertex positions V and the design parameters θ:
V=H(θ)
θ={c,κ}
An indirect mapping between the generated image and the aspheric lens design parameters then follows from the chain rule:
∂I/∂θ = (∂I/∂V) · (∂V/∂θ)
This realizes gradient backpropagation from the generated image to the aspheric lens design parameters, so the lens design parameters can be updated continually across iterations.
6. The method of claim 1, wherein in step 4, the optimization of aspheric lens design parameters is constrained by mean absolute error:
LOSS = ||I − I_ref||₁
where I is the image of the scene formed on the sensor after passing through the aspheric lens and I_ref is the corresponding reference image.
CN202210444779.0A 2022-04-26 2022-04-26 Aspheric lens design method based on differential rendering Pending CN114741812A (en)


Publications (1)

Publication Number Publication Date
CN114741812A true CN114741812A (en) 2022-07-12

Family

ID=82283961



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination