CN208547775U - Device for near-eye display of three-dimensional images - Google Patents

Device for near-eye display of three-dimensional images

Info

Publication number
CN208547775U
CN208547775U CN201820848042.4U
Authority
CN
China
Prior art keywords
light
waveguide
image
virtual reality
microlens array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201820848042.4U
Other languages
Chinese (zh)
Inventor
陈林森
乔文
朱鸣
万文强
张云莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
SVG Tech Group Co Ltd
Original Assignee
Suzhou University
SVG Optronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University, SVG Optronics Co Ltd filed Critical Suzhou University
Priority to CN201820848042.4U priority Critical patent/CN208547775U/en
Application granted granted Critical
Publication of CN208547775U publication Critical patent/CN208547775U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The utility model relates to display technology, and in particular to a device for realizing near-eye display of three-dimensional images. According to one aspect of the utility model, the device comprises: a light field reproduction unit configured to reconstruct the light field information of a target object so as to reproduce a virtual scene; and a virtual-real fusion unit configured to output a three-dimensional image in which the virtual scene and the real scene are fused together.

Description

Device for near-eye display of three-dimensional images
Technical field
The present invention relates to display technology, and in particular to a device for realizing near-eye display of three-dimensional images.
Background technique
As an approach to three-dimensional imaging and display, integral imaging technology has received increasing attention. Three-dimensional display technology is divided into two processes: recording and reproduction. Traditional three-dimensional display reconstructs the image of a three-dimensional object, and during reconstruction it is susceptible to stray light from the optical elements. Using a microlens array (microlens units uniformly arranged in the horizontal and vertical directions), the stereoscopic information of a 3D image can be obtained without being affected by stray light. The stereoscopic information of the three-dimensional image is imaged by the microlenses onto the focal plane of the microlens array, where elemental images can be recorded. An image display is then placed behind the microlens array; by the principle of reversibility of light, the rays of the elemental images shown on the display are restored through the microlenses, so that a spatial image of the three-dimensional object can be reproduced near the microlens array.
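As an illustrative aside, the recording step described above can be sketched with a pinhole model of each microlens: an object point projects through each lenslet centre onto the focal plane, so neighbouring elemental images capture the same point from slightly different perspectives. All numeric parameters below (pitch, focal length, object distance) are assumed for illustration and are not taken from the patent.

```python
PITCH = 1.0   # lenslet pitch in mm (assumed)
FOCAL = 3.0   # lenslet focal length in mm (assumed)

def elemental_image_point(obj_x, obj_z, lens_index):
    """x-coordinate (mm) of an object point's image in elemental image
    `lens_index`, using a pinhole model of each lenslet."""
    lens_x = lens_index * PITCH                  # lenslet optical centre
    # a ray from the object through the lenslet centre hits the focal plane:
    return lens_x + (lens_x - obj_x) * FOCAL / obj_z

# An object point 30 mm in front of the array, 0.5 mm off axis, seen
# through three adjacent lenslets; offsets are measured from each
# lenslet's own centre:
offsets = [elemental_image_point(0.5, 30.0, i) - i * PITCH for i in range(-1, 2)]
# the per-lenslet offsets differ, i.e. each elemental image records a
# different perspective of the same point (parallax)
```

Reproducing the scene reverses this mapping: displaying the elemental images behind the lenslets sends the rays back along the same directions.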
The Chinese patent application entitled "An integral imaging 3D display microlens array and its 3D production method" (publication number CN104407442A) discloses an arrangement in which a layer of pinhole apertures is placed between two layers of microlens arrays. Although this improves the displayed depth of the three-dimensional image, it cannot eliminate the influence of stray light on recording and reproduction.
The Chinese patent entitled "Parallax barrier and three-dimensional display device using the parallax barrier" (patent number 200610094535.5) discloses a parallax-barrier 3D display device. Under the action of the parallax barrier, the left-eye and right-eye image display portions deliver images with a certain parallax to the viewer's left and right eyes, and the viewer's brain fuses the two images into a three-dimensional scene. Three-dimensional display based on the binocular parallax principle is simple to implement, but effects such as the vergence-accommodation conflict easily cause dizziness, which greatly detracts from the wearing experience of a near-eye display device.
Summary of the invention
It is an object of the present invention to provide a device for near-eye display of three-dimensional images that has the advantages of low manufacturing cost, easy design, and compact structure.
A device for near-eye display of three-dimensional images according to one aspect of the invention comprises:
a light field reproduction unit configured to reconstruct the light field information of a target object so as to reproduce a virtual scene; and
a virtual-real fusion unit configured to output a three-dimensional image in which the virtual scene and the real scene are fused together.
Preferably, the above device further comprises a projection unit configured to deliver the virtual scene output by the light field reproduction unit to the virtual-real fusion unit.
Preferably, in the above device, the light field reproduction unit comprises:
at least one spatial light modulator; and
a microlens array arranged in the light exit direction of the spatial light modulator,
wherein the spatial light modulator is divided into a plurality of sub-image areas, and spatial information is loaded by refracting the light from each sub-image area through the respective microlens units of the microlens array.
Preferably, in the above device, the light field reproduction unit comprises:
a plurality of spatial light modulators spliced together; and
a plurality of microlens arrays, each arranged in the light exit direction of its associated spatial light modulator,
wherein each spatial light modulator is divided into a plurality of sub-image areas, and spatial information is loaded by refracting the light from each sub-image area through the respective microlens units of the associated microlens array.
Preferably, in the above device, the spatial light modulator is one of the following: a DLP display screen, an LCOS display screen, or a liquid crystal display screen.
Preferably, in the above device, the microlens array uses arc-shaped electrodes to realize the focusing function of the microlens units.
Preferably, in the above device, the size of the microlens units of the microlens array is in the range of 0.01 mm to 10 mm.
Preferably, in the above device, the light field reproduction unit further comprises a Fresnel lens attached to one side of the microlens array.
Preferably, in the above device, the virtual-real fusion unit comprises a waveguide and first and second nanogratings arranged inside the waveguide, wherein the first nanograting diffracts the incoming light, the waveguide totally internally reflects the light diffracted by the first nanograting, and the second nanograting diffracts the totally reflected light so as to guide it out of the waveguide into the viewing area.
Preferably, in the above device, the virtual-real fusion unit comprises a prism, a waveguide, and a nanograting arranged inside the waveguide, wherein the prism refracts the incident light into the waveguide, the waveguide totally internally reflects the refracted light, and the nanograting diffracts the totally reflected light so as to guide it out of the waveguide into the viewing area.
Preferably, in the above device, the virtual-real fusion unit comprises a prism, a waveguide, and a pair of partially reflecting mirrors arranged inside the waveguide, wherein the prism refracts the incident light into the waveguide, the waveguide totally internally reflects the refracted light, and the partially reflecting mirrors reflect the totally reflected light so as to guide it out of the waveguide into the viewing area.
Preferably, in the above device, the virtual-real fusion unit comprises a half-reflecting, half-transmitting prism to guide the incident light into the viewing area.
Preferably, in the above device, the virtual-real fusion unit comprises a free-form curved mirror to guide the incident light into the viewing area.
Preferably, in the above device, the refractive index of the waveguide is greater than the refractive index of the medium through which the incident light previously passed.
Description of the drawings
Fig. 1 is a schematic block diagram of a device for near-eye display of three-dimensional images according to an embodiment of the invention.
Fig. 2a is a schematic diagram of a light field reproduction unit that can be used in the device of Fig. 1, and Fig. 2b illustrates the working principle of the light field reproduction unit shown in Fig. 2a.
Fig. 3 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Fig. 4 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Fig. 5 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Fig. 6 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Figs. 7a and 7b are schematic structural diagrams of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Fig. 8 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Fig. 9 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
Detailed description of the embodiments
The objects of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of a device for near-eye display of three-dimensional images according to an embodiment of the invention.
The device 10 for near-eye display of three-dimensional images shown in Fig. 1 comprises a light field reproduction unit 110, a projection unit 120, and a virtual-real fusion unit 130. In the present embodiment, the light field reproduction unit 110 is configured to reconstruct the light field information of a target object so as to reproduce a virtual scene. The projection unit 120 is optically coupled between the light field reproduction unit 110 and the virtual-real fusion unit 130 and is configured to deliver the virtual scene output by the light field reproduction unit 110 to the virtual-real fusion unit 130, for example by geometric-optical means such as reflection, refraction, or diffraction. The virtual-real fusion unit 130 is configured to output a three-dimensional image in which the virtual scene and the real scene are fused together.
It should be pointed out that the projection unit 120 is an optional component. Alternatively, with a suitable design, the virtual scene reconstructed by the light field reproduction unit 110 can be coupled directly into the virtual-real fusion unit 130.
In the present embodiment, the light field reproduction unit 110 comprises a spatial light modulator and a microlens array to realize light field reconstruction. Preferably, the spatial light modulator may be one of a DLP display screen, an LCOS display screen, or a liquid crystal display screen.
Fig. 2a is a schematic diagram of a light field reproduction unit that can be used in the device of Fig. 1, and Fig. 2b illustrates the working principle of the light field reproduction unit shown in Fig. 2a.
The light field reproduction unit 110 shown in Fig. 2a comprises a spatial light modulator 111 and a microlens array 112. The spatial light modulator 111 preferably uses a liquid crystal display screen. Referring to Figs. 2a and 2b, the microlens array 112 is arranged in the light exit direction of the spatial light modulator 111. Each microlens unit in the microlens array 112 may have a round, square, or hexagonal structure. In general, the microlens units are close-packed; preferably, the size of a microlens unit is in the range of 0.01 mm to 10 mm. The pixels of the liquid crystal display screen are divided in a certain manner into sub-image areas, and spatial information is loaded by refracting the light from each such sub-image area through the respective microlens units of the microlens array. Specifically, the image formed by the light passing through a microlens unit is called an elemental image; microlenses at different locations generate different elemental images, so each elemental image contains different three-dimensional information about the object. After passing through the microlens array 112, the light forms a clear image at multiple viewpoints, thereby realizing multi-view three-dimensional display.
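The way pixels of one sub-image area map to distinct exit directions can be illustrated with a paraxial sketch. The pixel pitch, lenslet pitch, and focal length below are hypothetical values chosen for illustration, not figures stated in the patent.

```python
import math

PIXEL_PITCH = 0.05   # mm, assumed display pixel pitch
LENS_PITCH = 0.5     # mm, assumed lenslet pitch -> 10 pixels per lenslet
FOCAL = 2.0          # mm, assumed lenslet focal length

N_PIX = round(LENS_PITCH / PIXEL_PITCH)   # pixels per sub-image area

def exit_angle_deg(pixel_in_cell):
    """Exit direction (degrees from the lenslet axis) of pixel index
    0..N_PIX-1 inside one sub-image area, paraxial pinhole model."""
    offset = (pixel_in_cell - (N_PIX - 1) / 2) * PIXEL_PITCH  # mm off-centre
    return math.degrees(math.atan2(-offset, FOCAL))

angles = [exit_angle_deg(i) for i in range(N_PIX)]
# a monotonic sweep of view directions across the cell: each pixel of the
# sub-image area feeds one viewpoint of the multi-view display
```

Under this sketch, widening the cell (more pixels per lenslet) adds viewpoints, while shortening the focal length widens the angular sweep.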
Fig. 3 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The device 30 shown in Fig. 3 comprises a light field reproduction unit 310, a projection unit 320, and a virtual-real fusion unit 330.
The light field reproduction unit 310 comprises a liquid crystal display screen 311 and a microlens array 312 located at the light exit surface of the display screen. The light field information of the object reproduced by the light field reproduction unit 310 is coupled into the virtual-real fusion unit 330, for example by the diffractive, refractive, or reflective action of the projection unit 320.
In the present embodiment, the virtual-real fusion unit 330 comprises a waveguide 331, a first nanograting 332a, and a second nanograting 332b. Referring to Fig. 3, the first nanograting 332a is arranged inside the waveguide 331 near the position where the light enters the waveguide, and diffracts the incoming light. The light diffracted by the first nanograting 332a is totally internally reflected inside the waveguide 331. After multiple total internal reflections, the light reaches the second nanograting 332b, is diffracted by it, and exits toward the viewing area outside the waveguide, thereby outputting a three-dimensional image in which the virtual scene and the real scene are fused together.
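The in-coupling condition for a grating-fed waveguide of this kind can be sketched with the grating equation: first-order diffraction must steer the light past the waveguide's critical angle so it propagates by total internal reflection. The wavelength, grating period, and refractive index below are illustrative assumptions, not values from the patent.

```python
import math

WAVELENGTH = 532e-9   # m, green light (assumed)
PERIOD = 400e-9       # m, nanograting period (assumed)
N_WG = 1.7            # waveguide refractive index (assumed)

def diffracted_angle_deg(incidence_deg, order=1):
    """Propagation angle inside the waveguide after diffraction, from the
    grating equation n_wg * sin(theta_d) = sin(theta_i) + m * lambda / period."""
    s = math.sin(math.radians(incidence_deg)) + order * WAVELENGTH / PERIOD
    return math.degrees(math.asin(s / N_WG))

critical = math.degrees(math.asin(1.0 / N_WG))  # TIR threshold against air
theta = diffracted_angle_deg(0.0)               # normal incidence in-coupling
guided = theta > critical                       # True -> the ray is trapped
```

The out-coupling grating applies the same equation in reverse, cancelling the added tangential momentum so the light leaves toward the viewing area.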
Fig. 4 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The device 40 shown in Fig. 4 comprises a light field reproduction unit 410, a projection unit 420, and a virtual-real fusion unit 430.
The light field reproduction unit 410 comprises a liquid crystal display screen 411 and a microlens array 412 located at the light exit surface of the display screen. The light field information of the object reproduced by the light field reproduction unit 410 is coupled into the virtual-real fusion unit 430, for example by the diffractive, refractive, or reflective action of the projection unit 420.
The main difference between the present embodiment and the embodiment shown in Fig. 3 lies in the structure of the virtual-real fusion unit. Specifically, the virtual-real fusion unit 430 of the present embodiment comprises a prism 431, a waveguide 432, and a nanograting 433 inside the waveguide. Referring to Fig. 4, the projection unit 420 projects the virtual scene from the light field reproduction unit 410 onto the prism 431; after refraction by the prism 431, the light enters the waveguide 432. The refracted light is totally internally reflected inside the waveguide 432. After multiple total internal reflections, the light reaches the nanograting 433, is diffracted by it, and exits toward the viewing area outside the waveguide, thereby outputting a three-dimensional image in which the virtual scene and the real scene are fused together.
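A geometric sketch of the prism in-coupler of Fig. 4: the beam refracts at a tilted prism face, and the face's wedge angle tips the refracted ray past the critical angle of the waveguide's parallel surfaces, so it is guided by total internal reflection. The index and angles below are assumed for illustration only.

```python
import math

N_WG = 1.6        # assumed prism/waveguide refractive index
WEDGE_DEG = 30.0  # assumed tilt of the prism entrance face

def internal_angle_deg(incidence_deg):
    """Angle of the ray against the normal of the waveguide's top/bottom
    faces after refracting (Snell's law) at the tilted prism face."""
    refr = math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / N_WG))
    return refr + WEDGE_DEG  # the wedge tips the ray toward grazing incidence

critical = math.degrees(math.asin(1.0 / N_WG))  # TIR threshold against air
guided = internal_angle_deg(30.0) > critical    # True -> TIR inside the slab
```

Refraction alone cannot exceed the critical angle of a parallel-faced slab; it is the prism's tilted face that makes the guided condition reachable, which is why the prism is needed here.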
Fig. 5 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The device 50 shown in Fig. 5 comprises a light field reproduction unit 510, a projection unit 520, and a virtual-real fusion unit 530.
The light field reproduction unit 510 comprises a liquid crystal display screen 511 and a microlens array 512 located at the light exit surface of the display screen. The light field information of the object reproduced by the light field reproduction unit 510 is coupled into the virtual-real fusion unit 530, for example by the diffractive, refractive, or reflective action of the projection unit 520.
The main difference between the present embodiment and the embodiment shown in Fig. 3 lies in the structure of the virtual-real fusion unit. Specifically, the virtual-real fusion unit 530 of the present embodiment comprises a prism 531, a waveguide 532, and a pair of partially reflecting mirrors 533a and 533b inside the waveguide. Referring to Fig. 5, the projection unit 520 projects the virtual scene from the light field reproduction unit 510 onto the prism 531; after refraction by the prism 531, the light enters the waveguide 532. The refracted light is totally internally reflected inside the waveguide 532. After multiple total internal reflections, the light reaches the partially reflecting mirror 533a; part of the light is reflected by the mirror 533a and guided to the viewing area outside the waveguide, while the rest passes through the mirror 533a, reaches the partially reflecting mirror 533b, and is reflected by it to the viewing area outside the waveguide, thereby outputting a three-dimensional image in which the virtual scene and the real scene are fused together.
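The energy budget of the mirror pair can be sketched as follows. For the two-mirror case described above, choosing the first mirror's reflectance as R1 = R2 / (1 + R2) makes both out-coupled beams equally bright; the numbers are illustrative and the patent itself does not specify reflectances.

```python
def first_reflectance(r2):
    """Reflectance the first partial mirror needs so that it and the
    second mirror (reflectance r2) eject equal fractions of the light."""
    return r2 / (1.0 + r2)

r2 = 0.9                      # assumed second-mirror reflectance
r1 = first_reflectance(r2)    # first-mirror reflectance for uniform output
out1 = r1                     # fraction of light leaving at mirror 533a
out2 = (1.0 - r1) * r2        # fraction of light leaving at mirror 533b
leak = (1.0 - r1) * (1.0 - r2)  # fraction transmitted past both mirrors
```

The same balancing generalizes to longer mirror cascades, where each successive mirror must reflect a larger share of the dwindling guided light to keep the eyebox uniform.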
Fig. 6 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The main difference between the present embodiment and the embodiment shown in Fig. 3 lies in the light field reproduction unit. Specifically, in the present embodiment the light field reproduction unit 610 comprises a liquid crystal display screen 611 and a microlens array 612 located at the light exit surface of the display screen, wherein the microlens array 612 can use arc-shaped electrodes: by changing the voltage applied to the electrodes and exploiting the fast response of the liquid crystal, the focusing function of the microlenses is realized, thereby increasing the depth of field of the three-dimensional scene.
In the embodiments shown in Figs. 3-6, the refractive index of the waveguide is preferably greater than the refractive index of the medium through which the incident light previously passed.
Figs. 7a and 7b are schematic structural diagrams of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The device 70 of the present embodiment comprises a light field reproduction unit, a projection unit, and a virtual-real fusion unit. As shown in Fig. 7a, the light field reproduction unit 710 comprises a plurality of spatial light modulators 711a-711c spliced together and a plurality of microlens arrays 712a-712c. Each of the microlens arrays 712a-712c is arranged in the light exit direction of its associated spatial light modulator. Likewise, the pixels of each spatial light modulator are divided in a certain manner into sub-image areas, and spatial information is loaded by refracting the light from each sub-image area through the respective microlens units of the associated microlens array. With the arrangement shown in Fig. 7a, a stereoscopic image display with an enlarged viewing angle can be obtained in the eye observation region. In practical applications, the image output range observed by the eye matches the entrance pupil of the corresponding projection unit and the exit pupil of the virtual-real fusion eyepiece, so that a wide-angle image can be obtained without moving the head. During light field reconstruction, in order to make the light field information converge at the center of the projection unit, a Fresnel lens 713 is attached to each of the microlens arrays 712a-712c, as shown in Fig. 7b, to focus the light passing through the microlens arrays at the center of the system, thereby effectively improving the brightness of the displayed image.
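A back-of-envelope sketch of why tiling several SLM + microlens modules widens the viewable field: the monocular field of view grows with the total width of the tiled exit aperture as seen from the eye point. The eye relief and tile width below are assumed values, not dimensions from the patent.

```python
import math

EYE_RELIEF = 25.0    # mm, assumed distance from the eye to the exit aperture
TILE_WIDTH = 15.0    # mm, assumed width of one module's aperture

def fov_deg(n_tiles):
    """Full monocular field of view (degrees) subtended by n_tiles
    side-by-side module apertures, seen from the eye point."""
    half_width = n_tiles * TILE_WIDTH / 2.0
    return 2.0 * math.degrees(math.atan2(half_width, EYE_RELIEF))

single, tiled = fov_deg(1), fov_deg(3)
# tiled > single: three stitched modules cover a wider angular range,
# matching the enlarged viewing angle described for Fig. 7a
```

The gain is sub-linear (arctangent saturates), which is consistent with pairing the tiled modules with pupil-matched projection optics rather than simply enlarging one panel.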
Fig. 8 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention.
The device 80 of the present embodiment comprises a light field reproduction unit 810, a projection unit 820, and a virtual-real fusion unit 830.
The light field reproduction unit 810 comprises a liquid crystal display screen 811 and a microlens array 812 located at the light exit surface of the display screen. The light field information of the object reproduced by the light field reproduction unit 810 is coupled into the virtual-real fusion unit 830, for example by the diffractive, refractive, or reflective action of the projection unit 820.
The main difference between the present embodiment and the embodiment shown in Fig. 3 lies in the structure of the virtual-real fusion unit. Specifically, the virtual-real fusion unit 830 of the present embodiment is a half-transmitting, half-reflecting mirror that outputs a three-dimensional image in which the virtual scene and the real scene are fused together.
Fig. 9 is a schematic structural diagram of a device for near-eye display of three-dimensional images according to another embodiment of the invention. Compared with Fig. 8, the device 90 shown in Fig. 9 replaces the half-transmitting, half-reflecting mirror 830 of Fig. 8 with a free-form curved mirror 920.
Compared with the prior art, the device of the invention for near-eye display of three-dimensional images has numerous advantages. For example, the near-eye display device of the invention based on a microlens array can automatically generate true three-dimensional images, is easy to operate, has a compact structure, can work under an incoherent light source without special illumination, and provides the observer with continuous parallax and viewpoints.
The principles and preferred embodiments of the present invention are described above. However, the invention should not be construed as limited to the specific embodiments discussed. The preferred embodiments described above are to be considered illustrative rather than restrictive, and it should be understood that those skilled in the art may make variations in these embodiments without departing from the scope of the invention as defined by the following claims.

Claims (14)

1. A device for near-eye display of three-dimensional images, characterized by comprising:
a light field reproduction unit configured to reconstruct the light field information of a target object so as to reproduce a virtual scene; and
a virtual-real fusion unit configured to output a three-dimensional image in which the virtual scene and the real scene are fused together.
2. The device of claim 1, further comprising a projection unit configured to deliver the virtual scene output by the light field reproduction unit to the virtual-real fusion unit.
3. The device of claim 1, wherein the light field reproduction unit comprises:
at least one spatial light modulator; and
a microlens array arranged in the light exit direction of the spatial light modulator,
wherein the spatial light modulator is divided into a plurality of sub-image areas, and spatial information is loaded by refracting the light from each sub-image area through the respective microlens units of the microlens array.
4. The device of claim 1, wherein the light field reproduction unit comprises:
a plurality of spatial light modulators spliced together; and
a plurality of microlens arrays, each arranged in the light exit direction of its associated spatial light modulator,
wherein each spatial light modulator is divided into a plurality of sub-image areas, and spatial information is loaded by refracting the light from each sub-image area through the respective microlens units of the associated microlens array.
5. The device of claim 3 or 4, wherein the spatial light modulator is one of the following: a DLP display screen, an LCOS display screen, or a liquid crystal display screen.
6. The device of claim 3 or 4, wherein the microlens array uses arc-shaped electrodes to realize the focusing function of the microlens units.
7. The device of claim 3 or 4, wherein the size of the microlens units of the microlens array is in the range of 0.01 mm to 10 mm.
8. The device of claim 3 or 4, wherein the light field reproduction unit further comprises a Fresnel lens attached to one side of the microlens array.
9. The device of claim 1, wherein the virtual-real fusion unit comprises a waveguide and first and second nanogratings arranged inside the waveguide, wherein the first nanograting diffracts the incoming light, the waveguide totally internally reflects the light diffracted by the first nanograting, and the second nanograting diffracts the totally reflected light so as to guide it out of the waveguide into the viewing area.
10. The device of claim 1, wherein the virtual-real fusion unit comprises a prism, a waveguide, and a nanograting arranged inside the waveguide, wherein the prism refracts the incident light into the waveguide, the waveguide totally internally reflects the refracted light, and the nanograting diffracts the totally reflected light so as to guide it out of the waveguide into the viewing area.
11. The device of claim 1, wherein the virtual-real fusion unit comprises a prism, a waveguide, and a pair of partially reflecting mirrors arranged inside the waveguide, wherein the prism refracts the incident light into the waveguide, the waveguide totally internally reflects the refracted light, and the partially reflecting mirrors reflect the totally reflected light so as to guide it out of the waveguide into the viewing area.
12. The device of claim 1, wherein the virtual-real fusion unit comprises a half-reflecting, half-transmitting prism to guide the incident light into the viewing area.
13. The device of claim 1, wherein the virtual-real fusion unit comprises a free-form curved mirror to guide the incident light into the viewing area.
14. The device of any one of claims 9-11, wherein the refractive index of the waveguide is greater than the refractive index of the medium through which the incident light previously passed.
CN201820848042.4U 2018-05-25 2018-05-25 Device for near-eye display of three-dimensional images Active CN208547775U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201820848042.4U CN208547775U (en) 2018-05-25 2018-05-25 Device for near-eye display of three-dimensional images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201820848042.4U CN208547775U (en) 2018-05-25 2018-05-25 Device for near-eye display of three-dimensional images

Publications (1)

Publication Number Publication Date
CN208547775U true CN208547775U (en) 2019-02-26

Family

ID=65420651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201820848042.4U Active CN208547775U (en) 2018-05-25 2018-05-25 The device shown for realizing the nearly eye of 3-D image

Country Status (1)

Country Link
CN (1) CN208547775U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111175975A (en) * 2020-01-16 2020-05-19 华东交通大学 Near-to-eye display device for realizing large focal depth imaging


Similar Documents

Publication Publication Date Title
JP6965330B2 (en) Wearable 3D augmented reality display with variable focus and / or object recognition
KR100947366B1 (en) 3D image display method and system thereof
WO2018076661A1 (en) Three-dimensional display apparatus
US20050179868A1 (en) Three-dimensional display using variable focusing lens
CN107247333B (en) Display system capable of switching display modes
CN102123291B (en) Intelligent naked-eye stereoscopic display system and control method thereof
KR101441785B1 (en) A 3-dimensional imaging system based on a stereo hologram
CN107367845A (en) Display system and display methods
WO2015043098A1 (en) Multi-viewing angle naked-eye three-dimensional display system and display method therefor
KR20160120757A (en) Autostereoscopic 3d display device using holographic optical elements
CN104407440A (en) Holographic display device with sight tracking function
JP2016500829A (en) True 3D display with convergence angle slice
CN208805627U (en) The device shown for realizing the nearly eye of 3-D image
CN102520527A (en) Naked eye stereo display system and method
CN102768406B (en) Space partition type naked eye three-dimensional (3D) display
CN102376207B (en) LED three-dimensional display screen and manufacturing method thereof, display system and method
CN208547775U (en) The device shown for realizing the nearly eye of 3-D image
CN110531525A (en) The device shown for realizing the nearly eye of 3-D image
JP2002072135A (en) Three-dimensional image displaying system which serves both as regeneration of ray of light and multieye- parallax of shadow picture-type
CN110908133A (en) Integrated imaging 3D display device based on dihedral corner reflector array
CN112335237A (en) Stereoscopic display system and method for displaying three-dimensional image
JP3756481B2 (en) 3D display device
CN110531524A (en) The device shown for realizing the nearly eye of 3-D image
CN115236872A (en) Three-dimensional display system of pixel level accuse light
KR101093929B1 (en) Method and system for displaying 3-dimensional images using depth map

Legal Events

Date Code Title Description
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 215123 No. 68, Xinchang Road, Suzhou Industrial Park, Jiangsu, China

Co-patentee after: Suzhou University

Patentee after: SUZHOU SUDAVIG SCIENCE AND TECHNOLOGY GROUP Co.,Ltd.

Address before: 215123 No. 68, Xinchang Road, Suzhou Industrial Park, Jiangsu, China

Co-patentee before: Suzhou University

Patentee before: SVG OPTRONICS, Co.,Ltd.