CN113419342A - Free illumination optical design method based on deep learning - Google Patents

Free illumination optical design method based on deep learning

Info

Publication number
CN113419342A
Authority
CN
China
Prior art keywords
data
network
deep learning
illumination optical
light spot
Prior art date
Legal status
Pending
Application number
CN202110746595.5A
Other languages
Chinese (zh)
Inventor
李嫄源
母星宇
李鹏华
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110746595.5A
Publication of CN113419342A
Legal status: Pending


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012 Optical design, e.g. procedures, algorithms, optimisation routines

Abstract

The invention relates to a free illumination optical design method based on deep learning, belonging to the field of deep learning and freeform optics, and comprising the following steps: S1: drawing the required light spot shape with drawing software and saving it as a training sample; S2: constructing a network model based on the Unet network; S3: setting up the environment and setting initial model parameters for debugging; S4: inputting the training samples into the network and tuning the model parameters through continuous optimization to obtain a well-converged network model; S5: inputting the target light spot image into the network, generating a lens-data txt file through multiple iterations of fitting, and verifying the data by optical simulation to obtain the final effect. The invention achieves good results in solving the inverse problem of free illumination optical design.

Description

Free illumination optical design method based on deep learning
Technical Field
The invention belongs to the field of deep learning and free optics, and relates to a free illumination optical design method based on deep learning.
Background
Free-form optics refers to optics whose surface shape lacks translational or rotational symmetry about an axis perpendicular to the mean plane. Constructing optics with free-form surfaces frees designers and engineers from the geometrical constraints of conventional optical surfaces, enabling compact, lightweight and efficient illumination systems with excellent optical performance. As the advantages of freeform optics become more widely recognized and freeform surfaces find broader application in optical systems, design strategies for freeform optics become especially important.
In illumination system design, using a free-form surface effectively realizes secondary light distribution of the light source, produces the required illumination spot, and improves energy utilization. The design task of free illumination optics can be stated as follows: given a light source and a prescribed illumination, find one or more free-form surfaces through which the light rays emitted from the source are redirected to produce that prescribed illumination. This is in fact an inverse problem: the free-form surface must be determined from the desired lighting effect. When the influence of the spatial or angular extent of the light source can be ignored, the source can be treated as an ideal source (a point source or a collimated beam); the inverse problem then becomes a well-defined mathematical problem, and the free-form surface data are obtained through complex solving calculations.
A neural network with sufficient depth and width can, in theory, approximate any function and can therefore handle problems of this complexity. Neural networks are highly data-dependent: they learn adaptively from large amounts of data, and in general the more data, the better the performance. It is therefore worthwhile to investigate a neural network model for solving the inverse problem in free illumination optical design.
Disclosure of Invention
In view of this, the present invention provides a free illumination optical design method based on deep learning, which avoids complex solving calculations and offers better versatility.
In order to achieve the purpose, the invention provides the following technical scheme:
a free illumination optical design method based on deep learning comprises the following steps:
S1: drawing the required light spot shape with drawing software and saving it as a training sample;
S2: constructing a network model based on the Unet network;
S3: setting up the environment and setting initial model parameters for debugging;
S4: inputting the training samples into the network and tuning the model parameters through continuous optimization to obtain a well-converged network model;
S5: inputting the target light spot image into the network, generating a lens-data txt file through multiple iterations of fitting, and verifying the data by optical simulation to obtain the final effect.
Further, in step S1, the light spot image is a grayscale image in which the spot is white and the background is black, with a black margin left around the spot; the image is saved at a size of 256 × 256 pixels in any common image format.
Further, step S2 specifically includes the following steps:
S21: establishing a full convolution Unet network;
The Unet as a whole performs encoding and decoding: the convolution layers extract features to obtain information about each pixel, the overlap-tile strategy allows images of arbitrary size to be segmented seamlessly, and pixels on the image border are predicted by mirroring. Downsampling increases robustness to small disturbances of the input image, such as translation and rotation, reduces the risk of overfitting, reduces the amount of computation, and enlarges the receptive field. The main purpose of upsampling is to decode the abstract features back to the original image size and finally obtain a segmentation result. The shallower, high-resolution layers handle pixel localization, while the deeper layers handle pixel classification.
S22: programming the lens imaging process under certain optical conditions;
S23: introducing a new function as the loss calculation function.
Further, in step S22, the influence of the spatial or angular extent of the light source is ignored, the light source is regarded as an ideal light source, and the NURBS surface is evaluated; that is, the ray data are calculated from the surface data, and the calculation is implemented in code.
Further, the loss function defined in step S23 is:
the correlation function corr2 of two matrices A and B:

$$\mathrm{corr2}(A,B)=\frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}\right)}}$$

where $\bar{A}$ is the mean of matrix A and $\bar{B}$ is the mean of matrix B.
Further, in step S2, the network model consists of the full convolution Unet, the lens imaging model, and the loss function; the data flow is light spot data → lens data → light spot data, and the lens data are saved.
The invention has the beneficial effects that:
A new loss function, corr2, is proposed, and in this model corr2 performs better than conventional loss functions. The lens imaging process is implemented in code and combined with the Unet to form a circulating data flow of spot data → lens data → spot data; under the constraint of the loss function, multiple iterations drive the Unet calculation toward the inverse of the lens imaging process. The free illumination optical design method based on deep learning of the invention fuses the full convolution Unet, NURBS surface imaging, and a custom loss function into a new network model, and achieves good results in solving the inverse problem of free illumination optical design.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a full convolution Unet according to the present invention;
FIG. 2 is a schematic diagram of a network model according to the present invention;
FIG. 3 is a schematic operational flow diagram.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only to illustrate the invention and are not intended to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product. It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and they are therefore not to be construed as limiting the invention. The specific meaning of such terms can be understood by those skilled in the art according to the specific situation.
Referring to fig. 1 to 3, the details of the present invention are as follows:
a free illumination optical design method based on deep learning comprises the following steps:
(1) Drawing the required spot shape with drawing software and storing it as a training sample according to the specific requirements.
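By way of illustration only (this sketch is not part of the original disclosure), a training sample in the assumed format of step S1 — a 256 × 256 grayscale image with a white spot on a black background and a black margin around the spot — could also be rasterized directly; the spot geometry, margin width, and file name below are arbitrary assumptions.

    # Sketch: generate a 256x256 grayscale training sample with a white
    # elliptical spot on a black background and a black margin (assumed format).
    import numpy as np
    from PIL import Image

    H = W = 256
    margin = 20                                 # assumed width of the black margin
    yy, xx = np.mgrid[0:H, 0:W]
    cy, cx, ry, rx = H / 2, W / 2, 70, 100      # arbitrary example ellipse
    spot = (((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2) <= 1.0

    img = np.zeros((H, W), dtype=np.uint8)      # black background
    img[spot] = 255                             # white spot
    img[:margin, :] = 0                         # enforce the black margin
    img[-margin:, :] = 0
    img[:, :margin] = 0
    img[:, -margin:] = 0

    Image.fromarray(img, mode="L").save("sample_spot_0001.png")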
(2) Establishing a full convolution Unet network.
The Unet as a whole performs encoding and decoding: the convolution layers extract features to obtain information about each pixel, the overlap-tile strategy allows images of arbitrary size to be segmented seamlessly, and pixels on the image border are predicted by mirroring. Downsampling increases robustness to small disturbances of the input image, such as translation and rotation, reduces the risk of overfitting, reduces the amount of computation, and enlarges the receptive field. The main purpose of upsampling is to decode the abstract features back to the original image size and finally obtain a segmentation result. The shallower, high-resolution layers handle pixel localization, while the deeper layers handle pixel classification.
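For orientation, a minimal fully convolutional encoder-decoder of the U-Net type is sketched below in PyTorch; its depth, channel counts, and single-channel output head are illustrative assumptions and do not reproduce the exact network of the invention.

    # Sketch of a small fully convolutional U-Net (assumes 1-channel 256x256 input).
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        # two 3x3 convolutions with padding, so the spatial size is preserved
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class MiniUNet(nn.Module):
        def __init__(self, in_ch=1, out_ch=1, base=32):
            super().__init__()
            self.enc1 = conv_block(in_ch, base)
            self.enc2 = conv_block(base, base * 2)
            self.enc3 = conv_block(base * 2, base * 4)
            self.pool = nn.MaxPool2d(2)                           # downsampling
            self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
            self.dec2 = conv_block(base * 4, base * 2)            # after skip concatenation
            self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
            self.dec1 = conv_block(base * 2, base)
            self.head = nn.Conv2d(base, out_ch, 1)                # per-pixel output

        def forward(self, x):
            e1 = self.enc1(x)                                     # high-resolution features
            e2 = self.enc2(self.pool(e1))
            e3 = self.enc3(self.pool(e2))                         # deepest, most abstract features
            d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # upsample + skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)

    # e.g. MiniUNet()(torch.randn(1, 1, 256, 256)).shape -> torch.Size([1, 1, 256, 256])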
(3) Programming the lens imaging process under certain optical conditions.
When the influence of the spatial or angular extent of the light source can be ignored, the light source can be regarded as an ideal light source (a point source or a collimated beam), and the NURBS surface is evaluated; that is, the ray data are calculated from the surface data, and the calculation is implemented in code.
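As a generic illustration of this "surface data → ray data" calculation, under the assumptions of a point source, a single refracting surface, and surface points with unit normals already evaluated from the freeform surface, rays can be traced with the vector form of Snell's law; the refractive index, sample points, and target plane below are arbitrary, and the NURBS evaluation itself is not shown.

    # Sketch: refract rays from a point source at sampled surface points using the
    # vector form of Snell's law (n1 -> n2); total internal reflection is ignored.
    import numpy as np

    def refract(d, n, n1=1.0, n2=1.49):
        # d: incident unit directions (N,3); n: unit normals (N,3), facing the source
        eta = n1 / n2
        cos_i = -np.sum(d * n, axis=1, keepdims=True)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        cos_t = np.sqrt(np.clip(1.0 - sin2_t, 0.0, None))
        return eta * d + (eta * cos_i - cos_t) * n

    source = np.array([0.0, 0.0, 0.0])                        # ideal point source
    pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])      # sampled surface points (example)
    nrm = np.array([[0.0, 0.0, -1.0], [0.1, 0.0, -1.0]])
    nrm = nrm / np.linalg.norm(nrm, axis=1, keepdims=True)

    d_in = pts - source
    d_in = d_in / np.linalg.norm(d_in, axis=1, keepdims=True)
    d_out = refract(d_in, nrm)

    # intersect the refracted rays with a target plane z = 100 to obtain the spot
    t = (100.0 - pts[:, 2]) / d_out[:, 2]
    spot_xy = pts[:, :2] + t[:, None] * d_out[:, :2]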
(4) A new function is introduced as a loss calculation function.
The correlation function corr2 of two matrices A and B is used:

$$\mathrm{corr2}(A,B)=\frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}\right)}}$$

where $\bar{A}$ is the mean of matrix A and $\bar{B}$ is the mean of matrix B.
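One straightforward, differentiable reading of this corr2 criterion is sketched below in PyTorch; it is an assumption about how the formula could be used in training, with 1 − corr2 minimized so that the simulated spot becomes fully correlated with the target spot.

    # Sketch: 2-D correlation coefficient between two spot images, used as a loss.
    import torch

    def corr2(a, b, eps=1e-8):
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / torch.sqrt((a * a).sum() * (b * b).sum() + eps)

    def corr2_loss(pred_spot, target_spot):
        # corr2 = 1 for perfectly correlated spots, so minimize 1 - corr2
        return 1.0 - corr2(pred_spot, target_spot)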
(5) Combining the code of all parts to form an integral network model. The whole network model consists of the full convolution Unet, the lens imaging model, and the loss function; the data flow is light spot data → lens data → light spot data, and the lens data are saved.
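A schematic training iteration realizing this light spot data → lens data → light spot data flow might look as follows; forward_imaging stands for a differentiable implementation of the imaging calculation of step (3) and, together with MiniUNet and corr2_loss from the sketches above, is an assumption rather than the actual code of the invention.

    # Sketch of one training iteration closing the spot -> lens -> spot loop.
    import torch

    def train_step(model, optimizer, forward_imaging, target_spot):
        optimizer.zero_grad()
        lens_data = model(target_spot)                  # spot data -> lens data
        simulated_spot = forward_imaging(lens_data)     # lens data -> spot data
        loss = corr2_loss(simulated_spot, target_spot)  # constrain the round trip
        loss.backward()
        optimizer.step()
        return loss.item(), lens_data.detach()

    # e.g. model = MiniUNet(); optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)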
(6) Inputting the training samples into the network and tuning the model parameters through continuous optimization to obtain a well-converged network model.
(7) Inputting the target light spot image into the network, generating a lens-data txt file through multiple iterations of fitting, and verifying the data by optical simulation to obtain the final effect.
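The lens-data txt file of step (7) could, for example, be written with numpy for import into an optical simulation tool; the function name and the whitespace-separated column layout below are assumptions, since the exact file format is not specified.

    # Sketch: write predicted lens/surface data to a plain-text file.
    import numpy as np
    import torch

    def save_lens_txt(lens_data: torch.Tensor, path: str = "lens_data.txt") -> None:
        grid = lens_data.detach().squeeze().cpu().numpy()  # e.g. a grid of surface values
        np.savetxt(path, grid, fmt="%.6f")                  # assumed whitespace-separated layout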
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications and substitutions should be covered by the claims of the present invention.

Claims (6)

1. A free illumination optical design method based on deep learning, characterized in that the method comprises the following steps:
S1: drawing the required light spot shape with drawing software and saving it as a training sample;
S2: constructing a network model based on the Unet network;
S3: setting up the environment and setting initial model parameters for debugging;
S4: inputting the training samples into the network and tuning the model parameters through continuous optimization to obtain a well-converged network model;
S5: inputting the target light spot image into the network, generating a lens-data txt file through multiple iterations of fitting, and verifying the data by optical simulation to obtain the final effect.
2. The deep learning based free-illumination optical design method according to claim 1, characterized in that: in step S1, the light spot image is a grayscale image in which the spot is white and the background is black, with a black margin left around the spot; the image is saved at a size of 256 × 256 pixels in any common image format.
3. The deep learning based free-illumination optical design method according to claim 1, characterized in that: step S2 specifically includes the following steps:
S21: establishing a full convolution Unet network;
S22: programming the lens imaging process under certain optical conditions;
S23: introducing a new function as the loss calculation function.
4. The deep learning based free-illumination optical design method according to claim 3, wherein: in step S22, the influence of the spatial or angular extent of the light source is ignored, the light source is regarded as an ideal light source, and the NURBS surface is evaluated; that is, the ray data are calculated from the surface data, and the calculation is implemented in code.
5. The deep learning based free-illumination optical design method according to claim 3, wherein: the loss function defined in step S23 is:
the correlation function corr2 of two matrices A and B:

$$\mathrm{corr2}(A,B)=\frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}\right)}}$$

where $\bar{A}$ is the mean of matrix A and $\bar{B}$ is the mean of matrix B.
6. The deep learning based free-illumination optical design method according to claim 3, wherein: in step S2, the network model consists of the full convolution Unet, the lens imaging model, and the loss function; the data flow is light spot data → lens data → light spot data, and the lens data are saved.

Priority Applications (1)

Application Number: CN202110746595.5A (published as CN113419342A) · Priority Date: 2021-07-01 · Filing Date: 2021-07-01 · Title: Free illumination optical design method based on deep learning


Publications (1)

Publication Number: CN113419342A · Publication Date: 2021-09-21

Family

ID=77720071

Family Applications (1)

Application Number: CN202110746595.5A (CN113419342A, pending) · Priority Date: 2021-07-01 · Filing Date: 2021-07-01 · Title: Free illumination optical design method based on deep learning

Country Status (1)

Country: CN · Publication: CN113419342A


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102121665A (en) * 2010-12-31 2011-07-13 北京航空航天大学 Structure design method of free curved surface lens for outdoor LED (light-emitting diode) illumination
CN104317053A (en) * 2014-11-18 2015-01-28 重庆邮电大学 Free-form surface lens construction method based on lighting of LED desk lamp
US20190041634A1 (en) * 2016-02-04 2019-02-07 Digilens, Inc. Holographic Waveguide Optical Tracker
CN108345107A (en) * 2017-01-24 2018-07-31 清华大学 The design method of free form surface lighting system
WO2019094562A1 (en) * 2017-11-08 2019-05-16 Google Llc Neural network based blind source separation
CN109870803A (en) * 2017-12-01 2019-06-11 乐达创意科技股份有限公司 The production method of freeform optics surface structure
US20210063730A1 (en) * 2018-05-11 2021-03-04 Arizona Board Of Regents On Behalf Of The University Of Arizona Efficient optical system design and components
US20200160997A1 (en) * 2018-11-02 2020-05-21 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
CN109712081A (en) * 2018-11-14 2019-05-03 浙江大学 A kind of semantic Style Transfer method and system merging depth characteristic
US20200281460A1 (en) * 2019-03-07 2020-09-10 eyeBrain Medical, Inc. Integrated progressive lens simulator
US20200340901A1 (en) * 2019-04-24 2020-10-29 The Regents Of The University Of California Label-free bio-aerosol sensing using mobile microscopy and deep learning
CN110161682A (en) * 2019-05-31 2019-08-23 北京理工大学 A kind of free form surface off axis reflector system initial configuration generation method
CN110349095A (en) * 2019-06-14 2019-10-18 浙江大学 Learn the adaptive optics wavefront compensation method of prediction wavefront zernike coefficient based on depth migration
CN111487769A (en) * 2020-04-25 2020-08-04 复旦大学 Method for designing total internal reflection lens for customized illumination
CN111814405A (en) * 2020-07-23 2020-10-23 臻准生物科技(上海)有限公司 Deep learning-based lighting system design method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG T et al.: "Direct generation of starting points for freeform off-axis three-mirror imaging system design using neural network based deep-learning", Optics Express *
ZHANG Bin et al.: "Design and implementation of a gesture recognition algorithm based on convolutional neural networks", Microcomputer & Its Applications *


Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-09-21)