WO2022088887A1 - 镜片及智能眼镜 - Google Patents

镜片及智能眼镜 (Lens and smart glasses)

Info

Publication number
WO2022088887A1
WO2022088887A1 (PCT/CN2021/114458)
Authority
WO
WIPO (PCT)
Prior art keywords
mirror surface
lens
curvature
radius
fusion device
Prior art date
Application number
PCT/CN2021/114458
Other languages
English (en)
French (fr)
Inventor
王玘
谢振霖
朱帅帅
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2022088887A1 publication Critical patent/WO2022088887A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C11/00Non-optical adjuncts; Attachment thereof
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C7/00Optical parts

Definitions

  • the present application belongs to the technical field of near-eye display, and in particular relates to a lens and smart glasses.
  • Augmented reality technology, virtual reality technology and mixed reality technology have been widely used in many industries.
  • the application scenarios of the above technologies all involve near-eye display, and when the above-mentioned technologies are applied to near-eye display scenarios, it is necessary to consider that the user's vision has refractive errors such as myopia, hyperopia, and astigmatism.
  • In the prior art, to cope with refractive errors, devices using the above technologies usually adopt one of the following approaches. The first is to provide multiple functional zones on the lens of the device, so as to accommodate both myopia or hyperopia correction and virtual image projection.
  • Its shortcoming is that the functional zones of the lens are relatively complex, which easily causes distortion of the virtual image or the real image, and the production process is complicated and costly.
  • The second is to arrange an electronically controlled optical phase modulation module in the device, to ensure that users with refractive errors can also see a clear virtual image.
  • Its shortcoming is that the device system is complex, the power consumption is high, and its form factor can hardly approach that of conventional glasses.
  • The third is to add corrective lenses in the device, so that the virtual image and the real scene can be seen clearly at the same time. Its shortcoming is that this is equivalent to wearing two pairs of glasses, which is heavy and gives a poor user experience.
  • the purpose of the embodiments of the present application is to provide a lens and smart glasses to solve at least one of the deficiencies of the above technical means.
  • In a first aspect, a lens is provided, including a lens body and an optical fusion device;
  • the lens body includes an inner mirror surface and an outer mirror surface, wherein the surface parameters of the inner mirror surface are fixed, the surface parameters of the outer mirror surface are determined according to the surface parameters of the inner mirror surface, and the combination of the surface parameters of the inner mirror surface and the outer mirror surface determines the diopter of the lens body;
  • the optical fusion device is arranged on the inner mirror surface of the lens body, or, the optical fusion device is arranged in the area between the inner mirror surface and the outer mirror surface, and the optical fusion device is used to reflect the light beam group to the retina of the eyeball to form a display image.
  • With the lens provided by the embodiments of the present application, when the lens is in use the optical fusion device can reflect the light beam group emitted from an external device to the retina of the eyeball, so that a display image is formed through the human eye.
  • the surface parameters of the inner mirror surface of the lens body are fixed, and the surface parameters of the outer mirror surface of the lens body are determined according to the surface parameters of the inner mirror surface.
  • the combination of the surface parameters of the inner mirror surface and the outer mirror surface determines the diopter of the lens body.
  • the combination of the surface parameters of the inner mirror surface and the outer mirror surface can give the lens a myopia-correcting, hyperopia-correcting, amblyopia-correcting, astigmatism-correcting or plano (zero-power) function.
  • Since the surface parameters of the inner mirror surface are fixed, when the optical fusion device is arranged on the inner mirror surface or in the area between the inner mirror surface and the outer mirror surface, parameters such as the mounting position of the optical fusion device can be determined solely from the surface parameters of the inner mirror surface, without considering the influence of changes of the outer mirror surface. In this way, within a certain range of myopia, hyperopia, amblyopia or astigmatism, parameters such as the mounting position of the optical fusion device do not need to change with the prescription. This reduces the application cost of the lens, simplifies the sample specifications of the lens during manufacturing, makes it easy to mass-produce, makes it easier for smart glasses with the lens to form a minimalist architecture, and allows the lens to be integrated more easily into a conventional spectacle frame when it is commercialized, giving it better commercialization potential.
  • the surface type parameter of the inner mirror surface and the surface type parameter of the outer mirror surface are both curvature radius values.
  • the curvature radius value is used as the processing parameter, and the processing of the inner mirror surface and the outer mirror surface of the lens can be more conveniently realized.
  • the value of the radius of curvature of the inner mirror surface is fixed, the value of the radius of curvature of the outer mirror surface is determined according to the value of the radius of curvature of the inner mirror surface, and the combination of the value of the radius of curvature of the inner mirror surface and the value of the radius of curvature of the outer mirror surface determines the diopter of the lens body.
  • Corresponding to the radius of curvature value of the inner mirror surface, the radius of curvature value of the outer mirror surface has a corresponding numerical range, and each radius of curvature value of the outer mirror surface within this range, combined with the radius of curvature value of the inner mirror surface, determines a corresponding diopter.
  • the value of the radius of curvature of the inner mirror surface is 62.00mm, the value of the radius of curvature of the outer mirror surface is in the range of 191.01mm to 361.53mm, the value of the diopter is in the range of -8.00D to -6.50D, and the virtual image distance of the displayed image is 0.13m.
  • the value of the radius of curvature of the inner mirror surface is 82.00mm, the value of the radius of curvature of the outer mirror surface is in the range of 182.48mm to 566.50mm, the value of the diopter is in the range of -6.25D to -4.00D, and the virtual image distance of the displayed image is 0.16m.
  • the value of the radius of curvature of the inner mirror surface is 118.00mm, the value of the radius of curvature of the outer mirror surface is in the range of 195.71mm to 328.94mm, the value of the diopter is in the range of -3.25D to -2.00D, and the virtual image distance of the displayed image is 0.27m.
  • the value of the radius of curvature of the inner mirror surface is 255.00mm, the value of the radius of curvature of the outer mirror surface is in the range of 255.75mm to 997.23mm, the value of the diopter is in the range of -1.75D to 0.00D, and the virtual image distance of the displayed image is 0.57m.
  • the optical fusion device is in the shape of a film, and is attached to the inner mirror surface.
  • the optical fusion device can be a flat or curved holographic reflective optical fusion device.
  • the optical fusion device is arranged in the area between the inner mirror surface and the outer mirror surface, and is integrally formed with the lens body. In this way, the manufacturing cost of the lens can be reduced, and the mass production of the lens can be facilitated.
  • the lens body includes a first substrate and a second substrate, and the optical fusion device is arranged between the first substrate and the second substrate;
  • the side of the first substrate facing away from the optical fusion device is the outer mirror surface, and the side of the second substrate facing away from the optical fusion device is the inner mirror surface. In this way, the optical fusion device can be arranged in a lens made of glass.
  • In a second aspect, smart glasses are provided, including a frame, a light engine and the above-mentioned lens; the lens is arranged in the rim of the frame, and the light engine is arranged on a temple of the frame and is used to emit a light beam group to the optical fusion device.
  • The smart glasses provided by the embodiments of the present application include the above-mentioned lens. Through the combination of the surface parameters of the inner and outer mirror surfaces of the lens body and the cooperation between the optical fusion device and the lens body, the lens achieves a lower application cost and simpler, more standardized sample specifications, which also gives the smart glasses including the above-mentioned lens a lower production cost and stronger product competitiveness.
  • the light engine includes a light source module, a shaping module and a microelectronic scanning galvanometer;
  • the microelectronic scanning galvanometer is set close to the lens, and the shaping module is located between the microelectronic scanning galvanometer and the light source module;
  • the light source module is used to emit the beam group
  • the shaping module is used to shape the beam group emitted from the light source module
  • the microelectronic scanning galvanometer is used to scan and project the shaped beam group to the optical fusion device.
  • When the light engine is working, its light source module emits a beam group to the shaping module, and the shaping module shapes each beam in the beam group to form beams with different divergence angles. After shaping, the beams with different divergence angles are scanned and reflected by the microelectronic scanning galvanometer to the optical fusion device, and then reflected by the optical fusion device to the retina.
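To make that data flow easier to follow, here is a purely illustrative Python sketch of the pipeline described above; every class name, function name and numeric value (including the three laser wavelengths) is an assumption made for this sketch rather than anything specified by the application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Beam:
    wavelength_nm: float                 # e.g. one beam per laser chip
    divergence_mrad: float               # assigned by the shaping module
    scan_angle_deg: Tuple[float, float]  # (horizontal, vertical), set by the galvanometer

def light_source_module() -> List[Beam]:
    # Emit one beam per (assumed) R/G/B laser chip; shaping and scanning happen downstream.
    return [Beam(w, 0.0, (0.0, 0.0)) for w in (638.0, 520.0, 450.0)]

def shaping_module(beams: List[Beam], divergence_mrad: float) -> List[Beam]:
    # Shape each beam of the group to the requested divergence angle.
    return [Beam(b.wavelength_nm, divergence_mrad, b.scan_angle_deg) for b in beams]

def scanning_galvanometer(beams: List[Beam], h_deg: float, v_deg: float) -> List[Beam]:
    # Deflect the shaped beams toward one position on the optical fusion device,
    # which in turn reflects them toward the retina.
    return [Beam(b.wavelength_nm, b.divergence_mrad, (h_deg, v_deg)) for b in beams]

# One scanned "pixel" of the display image:
pixel = scanning_galvanometer(shaping_module(light_source_module(), 1.5), 3.0, -2.0)
print(pixel)
```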
  • the smart glasses further include a relay optical object, which is arranged on the path of the microelectronic scanning galvanometer projecting the beam group to the optical fusion device, and is used for shaping the beam group projected to the optical fusion device.
  • the beam group is a collimated beam group, a diverging beam group or a convergent beam group.
  • the smart glasses are augmented reality glasses or mixed reality glasses.
  • FIG. 1 is a cross-sectional view 1 of a lens provided by an embodiment of the present application.
  • FIG. 2 is a second cross-sectional view of a lens provided by an embodiment of the application.
  • FIG. 3 is a cross-sectional view 3 of a lens provided by an embodiment of the present application.
  • FIG. 4 is a cross-sectional view 4 of a lens provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of smart glasses provided by an embodiment of the present application.
  • FIG. 6 is another schematic structural diagram of the smart glasses provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of the cooperation of a lens, a light engine, and a relay optical object according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of the cooperation of a lens, a light engine, a relay optical object and a display controller according to an embodiment of the present application.
  • Augmented reality technology (AR, Augmented Reality) is a technology that ingeniously integrates virtual information with the real world.
  • Computer-generated text, images, 3D models, music, videos and other virtual information are simulated and applied to the real world, and the two kinds of information complement each other, thereby realizing the "enhancement" of the real world.
  • Mixed reality technology (MR, Mixed Reality) is a further development based on augmented reality technology and virtual reality technology. It presents virtual scene information in the real scene and sets up an interactive feedback information loop among the real world, the virtual world and the user, so as to enhance the realism of the user experience.
  • Virtual image distance In an augmented reality or mixed reality device, the sum of the distance from the human eye to the lens and the distance from the lens to the virtual image formed by the augmented reality or mixed reality device is called the virtual image distance.
  • Micro-electromechanical scanning galvanometer (MEMS, Micro-Electro-Mechanical System), also known as a micro-system or micro-machine, is a miniature device or system that integrates micro-sensors, micro-actuators, micro-mechanical structures, micro power supplies and micro energy sources, signal processing and control circuits, high-performance electronic integrated devices, interfaces and communications.
  • an embodiment of the present application provides a lens 10 .
  • the lens 10 includes a lens body 11 and an optical fusion device 12 .
  • the lens body 11 includes an inner mirror surface 13 and an outer mirror surface 14, wherein the inner mirror surface 13 is the surface facing the wearer's face, and the outer mirror surface 14 is the surface facing away from the wearer's face.
  • the surface shape parameters of the inner mirror surface 13 are fixed, while the surface shape parameters of the outer mirror surface 14 are valued according to the surface shape parameters of the inner mirror surface 13 of the lens body 11 .
  • the surface shape parameters of the inner mirror surface 13 and the surface shape parameters of the outer mirror surface 14 are combined to determine the diopter of the lens body 11 .
  • In this embodiment, the combination of the surface parameters of the inner mirror surface 13 and the surface parameters of the outer mirror surface 14 determines the diopter of the lens body 11 through the following diopter calculation formula (the thick-lens form of the lensmaker's equation):
  • Φ = (n - 1)·(1/ra - 1/rb) + (n - 1)²·d/(n·ra·rb)
  • where Φ represents the diopter, ra represents the radius of curvature of the outer mirror surface 14, rb represents the radius of curvature of the inner mirror surface 13, n represents the refractive index of the material of the lens body 11, and d is the center thickness of the lens body 11.
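As a quick numerical cross-check of the formula above, the short Python sketch below evaluates it at the band endpoints quoted in this description; the function name and the sign convention (both radii of the meniscus lens taken as positive) are assumptions made here for illustration, and the results agree with the quoted quarter-diopter values to within roughly 0.04 D of rounding.

```python
# Minimal numerical check of the diopter formula
#   phi = (n - 1) * (1/ra - 1/rb) + (n - 1)**2 * d / (n * ra * rb)
# ra: radius of curvature of the outer mirror surface, rb: inner mirror surface,
# n: refractive index (MR-8: 1.597), d: center thickness.

def diopter(ra_mm: float, rb_mm: float, n: float = 1.597, d_mm: float = 2.0) -> float:
    """Diopter of a meniscus lens; both radii are taken as positive."""
    ra, rb, d = ra_mm / 1000.0, rb_mm / 1000.0, d_mm / 1000.0
    return (n - 1) * (1 / ra - 1 / rb) + (n - 1) ** 2 * d / (n * ra * rb)

# Band endpoints quoted in the description (rb fixed per gear, ra variable)
for rb_mm, ra_mm in [(62.0, 191.01), (62.0, 361.53),
                     (82.0, 182.48), (82.0, 566.50),
                     (118.0, 195.71), (118.0, 328.94),
                     (255.0, 255.75), (255.0, 997.23)]:
    print(f"rb={rb_mm:6.2f} mm, ra={ra_mm:6.2f} mm -> {diopter(ra_mm, rb_mm):+.2f} D")
# e.g. rb=62 mm, ra=191.01 mm -> about -6.47 D, i.e. the -6.50 D end of that band
```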
  • Optionally, the surface shapes of the inner mirror surface 13 and the outer mirror surface 14 of the lens 10 can be ordinary spherical surfaces, aspherical surfaces or free-form surfaces, so that the processing of the inner mirror surface 13 and the outer mirror surface 14 is no different from the processing of ordinary spectacle lenses and can follow the mature processing flow for polarized lenses, which reduces the overall manufacturing cost of the lens 10.
  • the optical combiner 12 is disposed on the inner mirror surface 13 of the lens body 11 .
  • the optical fusion device 12 is arranged in the area between the inner mirror surface 13 and the outer mirror surface 14 (this area is shown as 10a in FIG. 3 and FIG. 4 ), and the optical fusion device 12 is used to reflect the light beam group to the retina of the eyeball , to form the display image.
  • the light beam group is emitted by an external light beam emitting device (such as the light engine 22 ).
  • In this way, the optical fusion device 12 can reflect the light beam group onto the retina of the human eye, and after retinal imaging the user forms a virtual display image in the brain, so that when wearing a device having the above lens 10, the user can see the external environment and the virtual display image clearly without wearing additional conventional vision-correction glasses.
  • The lens 10 provided by the embodiments of the present application works as follows: the optical fusion device 12 can reflect the light beam group emitted from an external device to the retina of the eyeball, so that a display image is formed through the human eye.
  • The surface parameters of the inner mirror surface 13 of the lens body 11 are fixed, the surface parameters of the outer mirror surface 14 of the lens body 11 are determined according to the surface parameters of the inner mirror surface 13, and the combination of the surface parameters of the inner mirror surface 13 and the outer mirror surface 14 determines the diopter of the lens body; the combination of these surface parameters can thus give the lens 10 a myopia-correcting, hyperopia-correcting, amblyopia-correcting, astigmatism-correcting or plano function.
  • Since the surface parameters of the inner mirror surface 13 are fixed, when the optical fusion device 12 is arranged on the inner mirror surface 13 or in the area between the inner mirror surface 13 and the outer mirror surface 14, parameters such as the mounting position of the optical fusion device 12 can be determined solely from the surface parameters of the inner mirror surface 13, without considering the influence of changes of the surface parameters of the outer mirror surface 14. In this way, within a certain range of myopia, hyperopia, amblyopia or astigmatism, parameters such as the mounting position of the optical fusion device 12 do not need to change with the prescription, which reduces the application cost of the lens 10, simplifies the sample specifications of the lens 10 during manufacturing, makes it easy to mass-produce, makes it easier for the smart glasses 20 with the lens 10 to form a minimalist architecture, and allows the lens 10 to be integrated more easily into a conventional spectacle frame 21 when commercialized, giving it better commercialization potential.
  • the surface shape parameters of the inner mirror surface 13 and the surface shape parameters of the outer mirror surface 14 are both curvature radius values.
  • The radius of curvature value of the inner mirror surface 13 is fixed, the radius of curvature value of the outer mirror surface 14 is determined according to the radius of curvature value of the inner mirror surface 13, and the combination of the two radius of curvature values determines the diopter of the lens body 11.
  • Specifically, by selecting the radius of curvature value as the surface parameter and using it as the processing parameter, the processing of the inner mirror surface 13 and the outer mirror surface 14 of the lens 10 can be realized conveniently.
  • the surface shape parameters may also be other parameters of the inner mirror surface 13 and the outer mirror surface 14 , which are not limited in this embodiment.
  • the value of the radius of curvature of the outer mirror surface 14 has a corresponding numerical range, and each value of the radius of curvature of the outer mirror surface 14 within the numerical range is respectively equal to The combination of the curvature radius values of the inner mirror surface 13 determines the corresponding diopter.
  • Specifically, regarding the matching relationship between the surface parameters of the inner mirror surface 13 and the outer mirror surface 14 of the lens 10, the surface parameter of the inner mirror surface 13 of the lens body 11 can be set to N discrete gear values, and for each gear value of the surface parameter of the inner mirror surface 13, the surface parameter of the outer mirror surface 14 has a corresponding numerical range.
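For illustration of how such a numerical range can be obtained, the sketch below algebraically inverts the diopter formula given earlier to find the outer-surface radius for a target diopter at a fixed inner radius; the helper name is invented here, and the results differ slightly from the tabulated endpoints below because the tabulated diopters are rounded to 0.25 D steps.

```python
# Sketch (assumption: the diopter formula quoted above, solved for ra):
#   phi + (n - 1)/rb = (1/ra) * (n - 1) * (1 + (n - 1)*d/(n*rb))
def outer_radius_mm(phi: float, rb_mm: float, n: float = 1.597, d_mm: float = 2.0) -> float:
    """Outer-surface radius (mm) that yields diopter `phi` for a fixed inner radius."""
    rb, d = rb_mm / 1000.0, d_mm / 1000.0
    inv_ra = (phi + (n - 1) / rb) / ((n - 1) * (1 + (n - 1) * d / (n * rb)))
    return 1000.0 / inv_ra

# Example: the rb = 118 mm gear, prescriptions from -2.00 D to -3.25 D
for phi in (-2.00, -3.25):
    print(f"phi={phi:+.2f} D -> ra ~ {outer_radius_mm(phi, 118.0):.1f} mm")
```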
  • The matching relationship between the radius of curvature of the inner mirror surface 13 and that of the outer mirror surface 14 of the lens 10 is further explained below. Table 1 lists, for a lens 10 whose outer mirror surface 14 is convex, whose material is MR-8 with a refractive index of 1.597 and an Abbe number of 40, and whose center thickness is 2 mm, the relationship between the radii of curvature of the inner and outer mirror surfaces, the diopter of the lens 10, and the virtual image distance of the displayed image.
  • In Table 1, Φ represents the diopter, ra represents the radius of curvature of the outer mirror surface 14 of the lens 10, rb represents the radius of curvature of the inner mirror surface 13 of the lens 10, and the unit of diopter is D. The virtual image distance is the distance from the retina to the virtual image formed in front of the human eye after the display image is formed on the retina.
  • As can be seen from Table 1, in this embodiment the radius of curvature of the inner mirror surface 13 of the lens 10 is divided into four gears, namely rb = 255 mm, rb = 118 mm, rb = 82 mm and rb = 62 mm, and each of these values corresponds to a numerical range of the radius of curvature of the outer mirror surface 14.
  • In this embodiment, when the radius of curvature rb of the inner mirror surface 13 is 62.00 mm, the radius of curvature ra of the outer mirror surface 14 ranges from 191.01 mm to 361.53 mm, the diopter ranges from -8.00 D to -6.50 D, and the virtual image distance of the displayed image is 0.13 m.
  • When the radius of curvature rb of the inner mirror surface 13 is 82.00 mm, the radius of curvature ra of the outer mirror surface 14 ranges from 182.48 mm to 566.50 mm, the diopter ranges from -6.25 D to -4.00 D, and the virtual image distance of the displayed image is 0.16 m.
  • When the radius of curvature rb of the inner mirror surface 13 is 118.00 mm, the radius of curvature ra of the outer mirror surface 14 ranges from 195.71 mm to 328.94 mm, the diopter ranges from -3.25 D to -2.00 D, and the virtual image distance of the displayed image is 0.27 m.
  • When the radius of curvature rb of the inner mirror surface 13 is 255.00 mm, the radius of curvature ra of the outer mirror surface 14 ranges from 255.75 mm to 997.23 mm, the diopter ranges from 0.00 D to -1.75 D, and the virtual image distance of the displayed image is 0.57 m.
  • The reciprocal of the absolute value of the diopter is the far point distance of the human eye, that is, the farthest distance that the human eye can see clearly. When the diopter is -1.00 D, the far point distance of the human eye is 1 m; when the diopter is -0.50 D, the far point distance of the human eye is 2 m. The degree of myopia is the absolute value of the diopter multiplied by 100.
  • As can be seen from the above, each radius of curvature value of the inner mirror surface 13 corresponds to a number of radius of curvature values of the outer mirror surface 14. Because the optical fusion device 12 is arranged on the inner mirror surface 13 of the lens 10 or in the area between the inner mirror surface 13 and the outer mirror surface 14, for a given radius of curvature value of the inner mirror surface 13 the virtual image distance of the displayed image can be designed to be fixed. This also means that, within that gear of the inner mirror surface 13, specification parameters such as the mounting position of the optical fusion device 12 do not need to change with the degree of myopia, which reduces the application cost of the lens 10, simplifies the sample specifications of the lens 10 during manufacturing, and makes it easy to mass-produce.
  • Taking the rb = 118.00 mm gear as an example, substituting ra = 195.71 mm, rb = 118.00 mm and d = 2 mm into the diopter calculation formula gives a diopter of -2, and the far point distance of the human eye is about 0.50 m. Substituting ra = 289.52 mm, rb = 118.00 mm and d = 2 mm into the above formula gives a diopter of -3.75, and the far point distance of the human eye is about 0.27 m. Therefore, in the diopter interval from -2 to -3.75, the far point distance of the human eye is about 0.27 m to 0.50 m, so when preparing the lens 10 the virtual image distance of the displayed image can be uniformly designed as 0.27 m, so that myopic users from 200 degrees to 375 degrees can all see the displayed image clearly.
  • This is equivalent to uniformly setting, within the 200-degree to 375-degree prescription gear, the virtual image distance of the displayed image to the farthest distance that the worst vision in this gear can still see clearly. In this way, the virtual image distance can be fixed within the gear: on the one hand, there is no need to adjust specification parameters such as the mounting position of the optical fusion device 12, which simplifies the sample specifications of the lens 10 during production and makes it easy to mass-produce.
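The following minimal Python sketch (function names invented here) restates that design rule: the shared virtual image distance of a prescription gear is the far point of the strongest prescription in the gear, which reproduces the 0.27 m figure used above.

```python
def far_point_m(diopter: float) -> float:
    """Far point of a myopic eye in meters: the reciprocal of |diopter|."""
    return 1.0 / abs(diopter)

def gear_virtual_image_distance_m(weakest_d: float, strongest_d: float) -> float:
    """Shared virtual image distance for a prescription gear: the farthest distance
    that the worst (most myopic) eye in the gear can still see clearly."""
    assert abs(strongest_d) >= abs(weakest_d)
    return far_point_m(strongest_d)

# Worked example from the description: -2.00 D (200 degrees) to -3.75 D (375 degrees)
print(gear_virtual_image_distance_m(-2.00, -3.75))   # ~0.267 m, rounded to 0.27 m above
# The other gears quoted for this lens follow the same rule:
# 1/6.25 = 0.16 m, 1/8.00 = 0.125 m (quoted as 0.13 m), 1/1.75 ~ 0.57 m
```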
  • On the other hand, when the user actually uses a device with the above lens 10, the human eye can comfortably and clearly see the displayed image and the real external environment at the same time, without a foreign-body sensation when looking at the external environment through the lens 10. This avoids the situation where the real external environment is blurred when the displayed image is seen clearly, or the displayed image is blurred when the real external environment is seen clearly.
  • the optical fusion device 12 is in the shape of a film, and is attached to the inner mirror surface 13 .
  • Specifically, the optical fusion device 12 can be a flat or curved holographic reflective optical fusion device. The holographic reflective optical fusion device 12 is a layer of transparent polymer film (such as a methyl methacrylate film) that reflects only light beams incident at a specific angle and with a specific wavelength; in principle it is equivalent to a concave reflective mirror, and it has a high transmittance to ambient light.
  • In this way, on the one hand, the optical fusion device 12 can effectively reflect the light beam group emitted by the light engine 22 so as to form a display image on the retina, and on the other hand, it also enables the user's eyes to clearly observe the external environment through the optical fusion device 12.
  • By making the optical fusion device 12 film-shaped, it can be attached to the inner mirror surface 13 of the lens 10, thereby achieving a simple and reliable combination with the lens 10 that is easy to produce. As shown in FIG. 2, when the optical fusion device 12 is attached to the inner mirror surface 13 of the lens 10, a hard coating film 121 or an additional plano lens can also be applied over the optical fusion device 12, so that the optical fusion device 12 is effectively protected from being scratched by particles such as wind-blown sand from the outside.
  • Optionally, the holographic reflective optical fusion device 12 can be prepared by means of experimental holography: specifically, a laser beam emitted by a highly coherent laser source is split into two paths, one of which serves as the reference light; the wavefront is obtained after reflection by a solid concave mirror, or the reflected wavefront is generated directly by a wavefront generator, and it then interferes with the other laser beam to produce an interference pattern; the interference pattern is exposed into a photosensitive holographic film to obtain a transparent polymer wavefront equivalent to a concave mirror, finally forming the holographic reflective optical fusion device 12.
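As a toy model of that recording step, the sketch below computes, in the paraxial on-axis approximation, the interference pattern between a plane reference wave and the wavefront returned by a concave mirror (modelled as a converging spherical wave). The wavelength, focal length and film size are assumed values, and the model ignores the through-thickness Bragg structure of a real reflection hologram.

```python
import numpy as np

# Paraxial, on-axis toy model (illustration only, not the patent's actual process):
# a plane reference wave interferes with the wavefront reflected by a concave mirror,
# and the resulting fringe pattern is what would be exposed into the photosensitive film.
wavelength = 520e-9          # assumed green laser wavelength, m
f = 0.025                    # assumed equivalent focal length of the concave mirror, m
k = 2 * np.pi / wavelength

x = np.linspace(-5e-3, 5e-3, 1000)           # 10 mm film cross-section
y = np.linspace(-5e-3, 5e-3, 1000)
X, Y = np.meshgrid(x, y)

phase_reference = 0.0                        # on-axis plane wave
phase_mirror = -k * (X**2 + Y**2) / (2 * f)  # paraxial converging spherical wave
intensity = np.abs(np.exp(1j * phase_reference) + np.exp(1j * phase_mirror)) ** 2

# 'intensity' is a Fresnel-zone-like fringe pattern; exposing it into the holographic
# film records a structure that later reflects light as an equivalent concave mirror.
print(intensity.shape, intensity.min(), intensity.max())
```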
  • the optical fusion device 12 may also have a single holographic functional layer, and the light engine 22 emits at least one group of light beams onto the holographic functional layer, so as to reflect on the retina through the holographic functional layer. form a display image.
  • Similarly, the optical fusion device 12 may also have multiple holographic functional layers that reflect light beams of different wavelengths; the light engine emits multiple groups of light beams of different wavelengths onto the corresponding holographic functional layers, so that multiple display images are formed on the retina through the reflection of each holographic functional layer.
  • the optical fusion device 12 may be disposed in the area between the inner mirror surface 13 and the outer mirror surface 14 of the lens body 11 , and be integrally formed with the lens body 11 .
  • Specifically, different from the case where the optical fusion device 12 is attached to the inner mirror surface 13 of the lens body 11, the optical fusion device 12 can be placed in advance, during the manufacturing of the lens 10, at the position in the preparation mold of the lens body 11 that corresponds to the area between the inner mirror surface 13 and the outer mirror surface 14, and is integrally formed with the lens body 11 while the resin liquid is poured into the mold.
  • In this way, on the one hand, the manufacturing cost of the lens 10 can be reduced, and the mass production of the lens 10 is facilitated.
  • On the other hand, when the optical fusion device 12 is arranged in the area between the inner mirror surface 13 and the outer mirror surface 14 of the lens body 11, it is protected by the lens body 11 and avoids contact with the external environment, so that the optical fusion device 12 can be prevented from being scratched by particles such as wind-blown sand from the outside.
  • In this embodiment, different from the case where the optical fusion device 12 and the lens body 11 are integrally formed, the lens body 11 includes a first substrate 15 and a second substrate 16; the optical fusion device 12 is sealed between the first substrate 15 and the second substrate 16; the side of the first substrate 15 facing away from the optical fusion device 12 is the outer mirror surface 14, and the side of the second substrate 16 facing away from the optical fusion device 12 is the inner mirror surface 13.
  • the first substrate 15 and the second substrate 16 can be produced first, and then the optical fusion device 12 can be arranged between the first substrate 15 and the second substrate 16, and It is cured between the first substrate 15 and the second substrate 16 by resin.
  • In this way, when a glass lens 10 with higher light transmittance is required and the glass lens 10 cannot be integrally formed with the optical fusion device 12, the above-mentioned method of combining the optical fusion device 12 and the lens 10 can be used to arrange the optical fusion device 12 in a lens 10 made of glass.
  • An embodiment of the present application further provides smart glasses 20, including a frame 21, a light engine 22 and the above-mentioned lens 10; the lens 10 is arranged in the rim 23 of the frame 21, and the light engine 22 is arranged on the temple 24 of the frame 21 and is used to emit a light beam group to the optical fusion device 12.
  • the smart glasses 20 may be augmented reality glasses or mixed reality glasses or the like.
  • More specifically, the number of lenses 10 may be two, and the two lenses 10 may be embedded in the two openings of the rim 23 respectively.
  • Both lenses 10 may have the optical fusion device 12, or only a single lens 10 may have the optical fusion device 12.
  • The number of optical fusion devices 12 is the same as the number of light engines 22, and the optical fusion device 12 may occupy the entire mirror surface of the lens 10 or only part of it.
  • The smart glasses 20 provided by the embodiments of the present application include the above-mentioned lens 10. Through the combination of the surface parameters of the inner and outer mirror surfaces of the lens body 11 and the cooperation between the optical fusion device 12 and the lens body 11, the lens 10 achieves a lower application cost and simpler sample specifications, so the smart glasses 20 including the above-mentioned lens 10 have a lower production cost and stronger product competitiveness.
  • the light engine 22 includes a light source module, a shaping module and a microelectronic scanning galvanometer.
  • the microelectronic scanning galvanometer is arranged close to the lens 10, and the shaping module is located between the microelectronic scanning galvanometer and the light source module.
  • the light source module is used to emit a beam group
  • the shaping module is used to shape the beam group emitted from the light source module.
  • the microelectronic scanning galvanometer is a scanning galvanometer based on a micro-electromechanical system, and is used to scan and project the shaped beam group to the optical fusion device 12.
  • When the light engine 22 is working, its light source module emits a beam group to the shaping module, and the shaping module shapes each beam in the beam group to form beams with different divergence angles. After shaping, the beams with different divergence angles are scanned and reflected by the microelectronic scanning galvanometer to the optical fusion device 12, and then reflected by the optical fusion device 12 to the retina.
  • the shaping module may be an adjustable-focus beam shaping component, which may specifically be various components such as a liquid lens, a micromechanical electronic mirror array, and a variable focus lens group.
  • The light source module can be a laser light source. Using a laser light source can reduce the power consumption of the light engine 22 and improve the contrast of the displayed image. Compared with using an LCOS (Liquid Crystal on Silicon), LED (Light Emitting Diode) or OLED (Organic Light-Emitting Diode) light source, a laser light source can dispense with the illumination components and collimating lens group, thereby achieving a minimalist architecture.
  • the beam group is a laser beam group
  • the light source module may be a laser generator, which includes at least one laser chip, and a single laser chip is used to generate a laser beam of one wavelength.
  • When multiple display images need to be displayed, the laser chips can be placed in one-to-one correspondence with multiple shaping modules, and each shaping module can time-division shape the corresponding laser beam into laser beams having at least two divergence angles.
  • After the time-division shaping by the shaping modules, the laser beams with different divergence angles can be reflected to the retina through the multiple holographic functional layers to form multiple display images.
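A minimal sketch of what such a time-division schedule could look like is given below; the frame rate, the two divergence values and the idea of one slot per holographic layer are all assumptions made for illustration.

```python
# Two divergence states (mrad) time-multiplexed across frame slots -- illustrative only.
divergence_states_mrad = [1.0, 2.5]        # one state per holographic functional layer
frame_rate_hz = 60
slots_per_frame = len(divergence_states_mrad)

def slot_schedule(num_frames: int):
    """Yield (time_s, divergence_mrad) pairs: within every frame, the shaping module
    switches the beam between the divergence angles of the two depth planes."""
    slot_time = 1.0 / (frame_rate_hz * slots_per_frame)
    for frame in range(num_frames):
        for s, div in enumerate(divergence_states_mrad):
            yield ((frame * slots_per_frame + s) * slot_time, div)

for t, div in slot_schedule(2):
    print(f"t = {t * 1e3:6.3f} ms -> shape beams to {div} mrad")
```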
  • Optionally, the laser chip can be at least a red-green-blue (RGB, Red, Green, Blue) three-color chip; the three-color chip can be an integrated chip integrating red, green and blue, or can be composed of three monochromatic chips. The laser chip adopts the red-green-blue three-color mode, and various colors are obtained by varying and superimposing the above three color channels.
  • Optionally, the microelectronic scanning galvanometer can be a piezoelectrically driven two-dimensional scanning galvanometer, which drives the mirror to twist through two mutually perpendicular torsion beams, so as to realize high-throughput two-dimensional scanning reflection of the laser beam; the microelectronic scanning galvanometer can also be formed by combining two one-dimensional scanning galvanometers to realize high-throughput two-dimensional scanning reflection of the laser beam.
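As a generic illustration of two-axis scanning (the drive frequencies and mechanical angles below are arbitrary values, not taken from the application), two orthogonal sinusoidal deflections already produce a two-dimensional scan trajectory:

```python
import numpy as np

# Generic two-axis scan: fast horizontal (line) axis + slow vertical (frame) axis.
f_fast_hz, f_slow_hz = 20_000.0, 60.0       # assumed drive frequencies
theta_fast_deg, theta_slow_deg = 10.0, 6.0  # assumed mechanical half-angles

t = np.linspace(0.0, 1.0 / f_slow_hz, 5000)              # one slow-axis period
h = theta_fast_deg * np.sin(2 * np.pi * f_fast_hz * t)   # fast (line) axis
v = theta_slow_deg * np.sin(2 * np.pi * f_slow_hz * t)   # slow (frame) axis

# (h[i], v[i]) is the instantaneous 2-D deflection sent toward the optical fusion device.
print(h[:3], v[:3])
```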
  • the smart glasses 20 further include a relay optical object 25, which is arranged on the path along which the microelectronic scanning galvanometer projects the beam group to the optical fusion device 12, and is used to realize relay shaping of the beam group projected to the optical fusion device 12.
  • the relay optics 25 can correct and shape the beam group projected to the optical fusion device 12 through the microelectronic scanning galvanometer, and can also time-division and shape each beam of the beam group into beams with at least two divergence angles.
  • Optionally, the relay optical component may be an optical lens group or a binary optical element, and the relay optical component may be an integral part of the light engine 22 or may exist independently of the light engine 22; in this embodiment, for ease of illustration, the relay optical component is drawn outside the light engine 22.
  • the beam group projected to the optical combiner 12 is a collimated beam group, a diverging beam group or a convergent beam group.
  • the beam group may be any one of a collimated beam group, a diverging beam group, or a convergent beam group.
  • the beam group may be specifically selected as a collimated beam group.
  • the smart glasses 20 may further include a display controller 26 .
  • the display controller 26 is electrically connected to the light engine 22 and is used to send configuration information of the display image to the light source module, the shaping module and the microelectronic scanning galvanometer.
  • the display controller 26 can decode and render the display image to generate configuration information, and the light engine 22 can modulate the display image according to the configuration information.
  • Optionally, the display controller 26 can also be electrically connected with external devices; the electrical connection includes a wired or wireless connection, and the wireless connection includes a WIFI connection or a Bluetooth connection. In this way, the display controller 26 can obtain relevant information from other external devices to enrich the configuration information, thereby providing more accurate configuration information for the light engine 22.
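Purely as an illustration of the kind of configuration information such a controller might pass to the light engine, here is a hypothetical sketch; none of the field names or values come from the application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FrameConfig:
    """Illustrative (invented) per-frame configuration the display controller could
    send to the light engine after decoding and rendering the display image."""
    rgb_intensities: List[float]   # laser drive levels for the red/green/blue chips
    divergence_mrad: float         # shaping-module setting for this frame slot
    scan_line: int                 # galvanometer line index

def modulate(config: FrameConfig) -> None:
    # The light engine would drive the source, shaping module and galvanometer
    # according to the received configuration information.
    print(f"line {config.scan_line}: divergence {config.divergence_mrad} mrad")

modulate(FrameConfig(rgb_intensities=[0.2, 0.5, 0.1], divergence_mrad=1.0, scan_line=0))
```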

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)

Abstract

A lens (10) and smart glasses (20). The lens (10) includes a lens body (11) and an optical fusion device (12). The lens body (11) includes an inner mirror surface (13) and an outer mirror surface (14); the surface parameters of the inner mirror surface (13) are fixed, the surface parameters of the outer mirror surface (14) take their values according to the surface parameters of the inner mirror surface (13), and the combination of the surface parameters of the inner mirror surface (13) and the outer mirror surface (14) determines the diopter of the lens body (11). The optical fusion device (12) is arranged on the inner mirror surface (13), or the optical fusion device (12) is arranged in the area between the inner mirror surface (13) and the outer mirror surface (14); the optical fusion device (12) is used to reflect a light beam group. Parameters such as the mounting position of the optical fusion device (12) are determined solely from the surface parameters of the inner mirror surface (13), so that these parameters do not need to change with the prescription, which reduces the application cost of the lens (10), simplifies the sample specifications of the lens (10) during manufacturing, facilitates mass production, and also makes it easy for smart glasses (20) having the lens (10) to form a minimalist architecture.

Description

镜片及智能眼镜
本申请要求于2020年10月29日提交国家知识产权局、申请号为202022459665.2、申请名称为“镜片及智能眼镜”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于近眼显示技术领域,尤其涉及一种镜片及智能眼镜。
背景技术
增强现实技术、虚拟现实技术和混合现实技术在众多行业领域得到了广泛应用。上述技术的应用场景均会涉及到近眼显示,而上述技术在应用于近眼显示场景时,需要考虑到用户视力存在近视、远视以及散光等屈光不正的情形。
现有技术中,为应对屈光不正的情形,采用上述技术的设备通常会通过以下几种技术手段来解决:第一种是在设备的镜片设置多个功能片区,以兼顾近视或远视以及虚像投射。其存在的不足在于,镜片功能片区较为复杂,容易造成虚像或真实像变形,且生产工艺复杂,生产成本偏高。第二种是在设备内设置电控光学相位调制模组,以保证屈光不正的用户也能看到清晰的虚像,其存在的不足在于,设备系统较为复杂,功耗高,其构型难以实现趋近于常规的眼镜造型,第三种是在设备内添加矫正镜片,以同时看清虚像和真实景象,其存在的不足在于,等效于佩戴两幅眼镜,重量大,用户体验感不佳。
发明内容
本申请实施例的目的在于提供一种镜片及智能眼镜,旨在解决上述技术手段的至少一种不足之处。
为实现上述目的,本申请实施例采用的技术方案是:
第一方面:提供一种镜片,包括镜片主体和光融合器;
镜片主体包括内侧镜面和外侧镜面,其中,内侧镜面的面型参数固定,外侧镜面的面型参数根据内侧镜面的面型参数取值,内侧镜面的面型参数和外侧镜面的面型参数组合确定镜片主体的屈光度;
光融合器设置于镜片主体的内侧镜面,或者,光融合器设置于内侧镜面和外侧镜面之间的区域内,光融合器用于将光束组反射至眼球的视网膜,以形成显示图像。
本申请实施例提供的镜片,工作时,光融合器可以将自外界设备发射的光束组反射至眼球的视网膜上,以通过人眼形成显示图像。镜片主体的内侧镜面的面型参数固定,镜片主体的外侧镜面的面型参数根据内侧镜面的面型参数来决定,内侧镜面的面型参数和外侧镜面的面型参数组合确定镜片主体的屈光度,进而内侧镜面和外侧镜面的面型参数的组合可使得镜片具有近视矫正、远视矫正、弱视矫正、散光矫正或平光功能。由于内侧镜面的面型参数固定不变,那么当光融合器设置于内侧镜面上,或是内侧镜面和外侧镜面之间的区域中时,光融合器的设置位置等参数即可仅根据内侧镜 面的面型参数来决定,无需考虑外侧镜面的面型参数改变所带来的影响,这样可使得在某一近视、远视、弱视或散光度数范围内,光融合器的设置位置等参数无需随同度数的变化而做出改变,如此便更佳地降低了镜片的应用成本,简化了镜片在生产制造时的备样规格,使其易于批量化生产,也使得具有镜片的智能眼镜易于形成极简架构,且镜片在产品化应用时,更易于整合于常规的眼镜镜架,具有更佳地产品化潜力。
可选地,内侧镜面的面型参数和外侧镜面的面型参数均为曲率半径值。通过将面型参数选定为曲率半径值,这样以曲率半径值作为加工参数,可较为便利地实现对镜片的内侧镜面和外侧镜面的加工。其中,内侧镜面的曲率半径值固定,外侧镜面的曲率半径值根据内侧镜面的曲率半径值取值,内侧镜面的曲率半径值和外侧镜面的曲率半径值组合确定镜片主体的屈光度。
可选地,对应于内侧镜面的曲率半径值,外侧镜面的曲率半径值具有相应的数值范围,在该数值范围内的外侧镜面的每一曲率半径值分别和内侧镜面的曲率半径值组合确定出相应的屈光度。
可选地,内侧镜面的曲率半径值为62.00mm,外侧镜面的曲率半径值的取值范围为191.01mm至361.53mm,屈光度的数值范围为-8.00D至-6.50D,显示图像的虚像距为0.13m。
可选地,内侧镜面的曲率半径值为82.00mm,外侧镜面的曲率半径值的取值范围为182.48mm至566.50mm,屈光度的数值范围为-6.25D至-4.00D,显示图像的虚像距为0.16m。
可选地,内侧镜面的曲率半径值为118.00mm,外侧镜面的曲率半径值的取值范围为195.71mm至328.94mm,屈光度的数值范围为-3.25D至-2.00D,显示图像的虚像距为0.27m。
可选地,内侧镜面的曲率半径值为255.00mm,外侧镜面的曲率半径值的取值范围为255.75mm至997.23mm,屈光度的数值范围为-1.75D至0.00D,显示图像的虚像距为0.57m。
可选地,光融合器呈膜状,且贴设于内侧镜面。光融合器可以是呈平面或曲面的全息反射光融合器。
可选地,光融合器设置于内侧镜面和外侧镜面之间的区域内,且和镜片主体一体成型。如此可降低镜片的制造成本,也便于镜片的批量制造生产。
可选地,镜片主体包括第一基片和第二基片,光融合器设置于第一基片和第二基片之间,第一基片背向光融合器的一面为外侧镜面,第二基片背向光融合器的一面为内侧镜面。这样可实现将光融合器设置于玻璃材质的镜片中。
第二方面:提供一种智能眼镜,包括镜架、光引擎和上述的镜片,镜片设置于镜架的镜框内,光引擎设置于镜架的镜腿上,并用于向光融合器发射光束组。
本申请实施例提供的智能眼镜,由于包括有上述的镜片,而镜片通过其内外侧镜片主体的面型参数的组合以及光融合器与镜片主体的配合,实现了更低的应用成本和更便利化的备样规格,如此也使得包括有上述的镜片的智能眼镜具有了更低的生产成本和更强的产品力。
可选地,光引擎包括光源模组、整形模组和微电子扫描振镜;
微电子扫描振镜靠近镜片设置,整形模组位于微电子扫描振镜和光源模组之间;
光源模组用于发射光束组;
整形模组用于对自光源模组射出的光束组实现整形;
微电子扫描振镜用于将完成整形的光束组扫描投射至光融合器。
光引擎在工作时,其光源模组向整形模组发射光束组,整形模组则对光束组中各个光束进行整形,形成具有不同发散角的光束,完成整形后,具有不同发散角的光束通过微电子扫描振镜扫描反射至光融合器,再由光融合器反射至视网膜。
可选地,智能眼镜还包括中继光学物件,中继光学物件设置于微电子扫描振镜向光融合器投射光束组的路径上,并用于对投射至光融合器的光束组实现整形。
可选地,光束组为准直光束组、发散光束组或汇聚光束组。
可选地,所述智能眼镜为增强现实眼镜或混合现实眼镜。
附图说明
图1为本申请实施例提供的镜片的横截面剖切视图一;
图2为本申请实施例提供的镜片的横截面剖切视图二;
图3为本申请实施例提供的镜片的横截面剖切视图三;
图4为本申请实施例提供的镜片的横截面剖切视图四;
图5为本申请实施例提供的智能眼镜的结构示意图;
图6为本申请实施例提供的智能眼镜的另一结构示意图;
图7为本申请实施例提供的镜片、光引擎及中继光学物件的配合示意图;
图8为本申请实施例提供的镜片、光引擎、中继光学物件及显示控制器的配合示意图。
其中,图中各附图标记:
10—镜片                 11—镜片主体             12—光融合器
13—内侧镜面             14—外侧镜面             15—第一基片
16—第二基片             20—智能眼镜             21—镜架
22—光引擎               23—镜框                 24—镜腿
25—中继光学物件         26—显示控制器           121—加硬膜。
具体实施方式
以下对本申请实施例中出现的专有名词进行解释说明:
增强现实技术:(AR,Augmented Reality)是一种将虚拟信息与真实世界巧妙融合的技术,广泛运用了多媒体、三维建模、实时跟踪及注册、智能交互、传感等多种技术手段,将计算机生成的文字、图像、三维模型、音乐、视频等虚拟信息模拟仿真后,应用到真实世界中,两种信息互为补充,从而实现对真实世界的“增强”。
混合现实技术:(MR,Mixed Reality)基于增强现实技术和虚拟现实技术进一步发展而成,该技术通过在现实场景呈现虚拟场景信息,在现实世界、虚拟世界和用户之间搭起一个交互反馈的信息回路,以增强用户体验的真实感。
虚像距:在增强现实或混合现实设备中,人眼到镜片的距离以及镜片到增强现实或混合现实设备所形成的虚像的距离之和称为虚像距。
微电子机械扫描振镜:(MEMS,Micro-Electro-Mechanical System),也称微系 统、微机械等,是集微传感器、微执行器、微机械结构、微电源微能源、信号处理和控制电路、高性能电子集成器件、接口、通信等于一体的微型器件或系统。
请参考图1~图3,本申请实施例提供了一种镜片10。其中,镜片10包括镜片主体11和光融合器12。具体地,镜片主体11包括内侧镜面13和外侧镜面14,其中,内侧镜面13为面向佩戴者面部的表面,外侧镜面14为背离佩戴者面部的表面。内侧镜面13的面型参数固定,而外侧镜面14的面型参数则根据镜片主体11的内侧镜面13的面型参数取值。内侧镜面13的面型参数和外侧镜面14的面型参数组合确定镜片主体11的屈光度。
本实施例中,内侧镜面13的面型参数和外侧镜面14的面型参数组合确定镜片主体11的屈光度通过以下屈光度计算公式实现:
Φ = (n - 1)·(1/ra - 1/rb) + (n - 1)²·d/(n·ra·rb)
其中,Φ表示屈光度,ra表示外侧镜面14的曲率半径,rb表示内侧镜面13的曲率半径,n表示镜片主体11的材料折射率,d为镜片主体11的中心厚度。
可选地,镜片10的内侧镜面13和外侧镜面14的面型均可以是普通球面、非球面或是自由曲面等,这样可使得内侧镜面13和外侧镜面14的加工与普通眼镜的镜片10加工无异,按照成熟的偏振镜片10加工流程加工即可,如此可降低镜片10的整体加工制造成本。
更具体地,光融合器12设置于镜片主体11的内侧镜面13。或者,光融合器12设置于内侧镜面13和外侧镜面14之间的区域内(该区域如图3和图4中的10a所示),光融合器12用于将光束组反射至眼球的视网膜,以形成显示图像。其中,光束组是由外界的光束发射装置(比如光引擎22)所发出。
这样,光融合器12便可将光束组反射于人眼的视网膜,经过视网膜成像后,使得使用者在人脑内形成虚拟的显示图像,如此使用者在佩戴具有上述镜片10的设备时,无需再额外佩戴常规视力矫正眼镜,即可看清外界环境和虚拟的显示图像。
请参考图1,以下对本申请实施例提供的镜片10作进一步说明:本申请实施例提供的镜片10,工作时,光融合器12可以将自外界设备发射的光束组反射至眼球的视网膜上,以通过人眼形成显示图像。镜片主体11的内侧镜面13的面型参数固定,镜片主体11的外侧镜面14的面型参数根据内侧镜面13的面型参数来决定,内侧镜面13的面型参数和外侧镜面14的面型参数组合确定镜片主体的屈光度,进而内侧镜面13和外侧镜面14的面型参数的组合可使得镜片10具有近视矫正、远视矫正、弱视矫正、散光矫正或平光功能。
由于内侧镜面13的面型参数固定不变,那么当光融合器12设置于内侧镜面13上,或是内侧镜面13和外侧镜面14之间的区域中时,光融合器12的设置位置等参数即可仅根据内侧镜面13的面型参数来决定,无需考虑外侧镜面14的面型参数改变所带来的影响,这样可使得在某一近视、远视、弱视或散光度数范围内,光融合器12的设置位置等参数无需随同度数的变化而做出改变,如此便更佳地降低了镜片10的应用成本,简化了镜片10在生产制造时的备样规格,使其易于批量化生产,也使得具有 镜片10的智能眼镜20易于形成极简架构,且镜片10在产品化应用时,更易于整合于常规的眼镜镜架21,具有更佳地产品化潜力。
在本申请的另一些实施例中,内侧镜面13的面型参数和外侧镜面14的面型参数均为曲率半径值。其中,内侧镜面13的曲率半径值固定,外侧镜面14的曲率半径值根据内侧镜面13的曲率半径值取值,内侧镜面13的曲率半径值和外侧镜面14的曲率半径值组合确定镜片主体11的屈光度。具体地,通过将面型参数选定为曲率半径值,这样以曲率半径值作为加工参数,可较为便利地实现对镜片10的内侧镜面13和外侧镜面14的加工。当然,面型参数也可以为内侧镜面13和外侧镜面14的其他参数,本实施例对此不做限定。
在本申请的另一些实施例中,对应于内侧镜面13的曲率半径值,外侧镜面14的曲率半径值具有相应的数值范围,在该数值范围内的外侧镜面14的每一曲率半径值分别和内侧镜面13的曲率半径值组合确定出相应的屈光度。
具体地,对于镜片10的内侧镜面13和外侧镜面14的面型参数的配合关系而言,可以将镜片主体11的内侧镜面13的面型参数的取值设定为N档,而对应于内侧镜面13的面型参数的每一档取值,外侧镜面14的面型参数的取值均具有相应的数值范围。
以下对镜片10的内侧镜面13的曲率半径值和外侧镜面14的曲率半径值的配合关系进行进一步说明:
表1列举了外侧镜面14为凸面,材料为MR-8,折射率为1.597,阿贝系数为40,中心厚度为2mm的镜片10的内外侧镜面14的曲率半径对应的镜片10屈光度及显示图像的虚像距的关系。
表1中,Φ表示屈光度,ra表示镜片10的外侧镜面14的曲率半径,rb表示镜片10的内侧镜面13的曲率半径,屈光度单位为D,虚像距为:显示图像形成于视网膜后在人眼前方形成的虚像距离视网膜的间距。
从表1来看,镜片10的内侧镜面13的曲率半径值在本实施例中,被分为了四个档位,分别是rb=255mm、rb=118mm、rb=82mm和rb=62mm。对应于上述的四个内侧镜面13的曲率半径值,均存在外侧镜面14的曲率半径值的数值范围与之对应。
在本实施例中,当内侧镜面13的曲率半径值rb为62.00mm时,外侧镜面14的曲率半径值ra的取值范围为191.01mm至361.53mm,屈光度的数值范围为-8.00D至-6.50D,显示图像的虚像距为0.13m。
而当内侧镜面13的曲率半径值rb为82.00mm时,外侧镜面14的曲率半径值ra的取值范围为182.48mm至566.50mm,屈光度的数值范围为-6.25D至-4.00D,显示图像的虚像距为0.16m。
当内侧镜面13的曲率半径值rb为118.00mm时,外侧镜面14的曲率半径值ra的取值范围为195.71mm至328.94mm,屈光度的数值范围为-3.25D至-2.00D,显示图像的虚像距为0.27m。
当内侧镜面13的曲率半径值rb为255.00mm时,外侧镜面14的曲率半径值ra的取值范围为255.75mm至997.23mm,屈光度的数值范围为0.00D至-1.75D,显示图像的虚像距为0.57m。
其中,屈光度绝对值的倒数是人眼的远点距离,也就是人眼能够看清的最远距离, 当屈光度为-1.00D时,人眼的远点距离为1m,当屈光度为-0.50D时,人眼的远点距离则为2m,近视度数则为屈光度的绝对值乘以100。
由上述示例可知,每个内侧镜面13的曲率半径值均对应有若干外侧镜面14的曲率半径值的取值,而由于光融合器12是设置于镜片10的内侧镜面13或是内侧镜面13和外侧镜面14之间的区域内,这样对应于某一个内侧镜面13的曲率半径值,显示图像的虚像距可以设计为固定不变,这也意味着在该内侧镜面13的曲率半径值,光融合器12的设置位置等规格参数无需随同近视度数的变化而做出改变,这样便降低了镜片10的应用成本,简化了镜片10在生产制造时的备样规格,使其易于批量化生产。
以rb=118.00这一档位为例,将ra=195.71mm,rb=118.00,d=2mm代入屈光度计算公式中,则得出屈光度为-2,人眼的远点距离约为0.50m。这样,将ra=289.52mm,rb=118.00,d=2mm代入上述公式中,则得出屈光度为-3.75,人眼的远点距离约为0.27m,那么在屈光度为-2~-3.75这一区间内,人眼的远点距离约为0.27m~0.50m,那么在制备镜片10时,即可将显示图像的虚像距统一设计为0.27m,以满足200度~375度的近视使用者均能够看清显示图像。
表1
| rb（内侧镜面曲率半径） | ra（外侧镜面曲率半径） | 屈光度Φ | 虚像距 |
| --- | --- | --- | --- |
| 62.00mm | 191.01mm~361.53mm | -8.00D~-6.50D | 0.13m |
| 82.00mm | 182.48mm~566.50mm | -6.25D~-4.00D | 0.16m |
| 118.00mm | 195.71mm~328.94mm | -3.25D~-2.00D | 0.27m |
| 255.00mm | 255.75mm~997.23mm | -1.75D~0.00D | 0.57m |
如此便相当于在200度~375度的度数档位中,显示图像的虚像距统一设定为在该档位中,人眼视力最差能够看清的最远距离上,这样虚像距在该档位能够固定,一方面无需再调整光融合器12的设置位置等规格参数,简化了镜片10在生产制造时的备样规格,使其易于批量化生产。另一方面,用户在实际使用具有上述镜片10的设备时,其人眼能够同时较为舒适地清楚看到显示图像和外界真实环镜,通过镜片10看外界环境时不存在异物感,避免了看清显示图像时,外界真实环镜产生虚化,或者看清外界真实环镜时,显示图像产生虚化的情形发生。
在本申请的另一些实施例中,如图2~图4所示,光融合器12呈膜状,且贴设于内侧镜面13。具体地,光融合器12可以是呈平面或曲面的全息反射光融合器12,全息反射光融合器12为一层透明的聚合物薄膜(比如甲基丙酸甲酯薄膜),仅对特定角度、特定波长入射的光束起反射作用,其在原理上可以等效为一面凹面反射镜,对环境光有很高的透过率。这样光融合器12一方面能够实现对光引擎22所发出的光束组的有效反射,使其在视网膜上形成显示图像,另一方面也能够使得用户的眼镜透过光融合器12清楚地观察到外界环境。
而通过将光融合器12设置为膜状,这样其可贴设于镜片10的内侧镜面13,进而实现和镜片10简单可靠地结合,易于生产。如图2所示,而当光融合器12贴设于镜片10的内侧镜面13时,还可在其上覆设加硬膜121或加设一片平光镜片10,这样可实现对光融合器12的有效保护,避免其被外界飞沙等颗粒物所划伤。
可选地,全息反射光融合器12可采用实验全息的方式完成制备:具体是,将强相干激光光源射出的激光束分为两路,一路作为参考光,经过实体凹面反射镜反射后获得波前,或通过波前发生器直接生成反射后的波前,然后与另一路激光光束进行干涉,产生干涉图样,并将该干涉图样曝光到光敏全息膜中,获得等效于凹面反射镜的透明聚合物波前,最终形成全息反射光融合器12。
示例性地,光融合器12在实际应用时,也可以具有单一的全息功能层,光引擎22将至少一组光束组发射至该全息功能层上,从而通过该全息功能层的反射在视网膜上形成一个显示图像。同理地,光融合器12也可以具有多层反射不同波长的光束的全息功能层,引擎将多组不同波长的光束组发射至对应的全息功能层上,从而通过各全息功能层的反射在视网膜上形成多个显示图像。
在本申请的另一些实施例中,如图3和图4所示,光融合器12可设置于镜片主体 11的内侧镜面13和外侧镜面14之间的区域内,且和镜片主体11一体成型。具体地,有别于光融合器12贴设于镜片主体11的内侧镜面13的情形,光融合器12可在镜片10制作过程中,预先设置于镜片主体11的制备模具中的对应镜片主体11的内侧镜面13和外侧镜面14之间的区域的位置,在树脂液浇注于模具的过程中和镜片主体11一体成型。如此一方面可降低镜片10的制造成本,也便于镜片10的批量制造生产。另一方面,光融合器12设置于镜片主体11的内侧镜面13和外侧镜面14之间的区域,便可受到镜片主体11的保护,避免和外界环境相接触,从而能够避免光融合器12被外界飞沙等颗粒物所划伤。
在本申请的另一些实施例中,如图4所示,有别于光融合器12和镜片主体11一体成型的情形,在本实施例中,镜片主体11包括第一基片15和第二基片16,光融合器12则封设于第一基片15和第二基片16之间,第一基片15背向光融合器12的一面为外侧镜面14,第二基片16背向光融合器12的一面为内侧镜面13。
具体地,在镜片10的生产制造中,可先生产出第一基片15和第二基片16,再将光融合器12设置于第一基片15和第二基片16之间,并通过树脂固化于第一基片15和第二基片16之间。这样当需要采用透光度更高的玻璃镜片10,而玻璃镜片10无法实现和光融合器12一体成型时,采用上述的光融合器12和镜片10的结合方法,便可实现将光融合器12设置于玻璃材质的镜片10中。
如图5和图6所示,本申请实施例还提供了一种智能眼镜20,包括镜架21、光引擎22和上述的镜片10,镜片10设置于镜架21的镜框23内,光引擎22设置于镜架21的镜腿24上,并用于向光融合器12发射光束组。
具体地,智能眼镜20可以是增强现实眼镜或是混合现实眼镜等。
更具体地,镜片10的数量可以是两片,两片镜片10可分别嵌设于镜框23的两缺口内。而两片镜片10可以均具有光融合器12,也可以是单一镜片10具有光融合器12,光融合器12的数量与光引擎22的数量一致,且光融合器12可以占据镜片10的全部镜面或是部分镜面。
本申请实施例提供的智能眼镜20,由于包括有上述的镜片10,而镜片10通过其内外侧镜片主体11的面型参数的组合以及光融合器12与镜片主体11的配合,实现了更低的应用成本和更便利化的备样规格,如此也使得包括有上述的镜片10的智能眼镜20具有了更低的生产成本和更强的产品力。
在本申请的另一些实施例中,光引擎22包括光源模组、整形模组和微电子扫描振镜。其中,微电子扫描振镜靠近镜片10设置,整形模组位于微电子扫描振镜和光源模组之间。具体地,光源模组用于发射光束组,而整形模组则用于对自光源模组射出的光束组实现整形,微电子扫描振镜是基于微电子机械系统的扫描振镜,其用于将完成整形的光束组扫描投射至光融合器12。
光引擎22在工作时,其光源模组向整形模组发射光束组,整形模组则对光束组中各个光束进行整形,形成具有不同发散角的光束,完成整形后,具有不同发散角的光束通过微电子扫描振镜扫描反射至光融合器12,再由光融合器12反射至视网膜。
可选地,整形模组可以为可调焦光束整形组件,其可具体为液体透镜、微机械电子反射镜阵列以及可变焦透镜组等多种组件。而光源模组则可以是激光光源,采用激 光光源可降低光引擎22功耗,并提升显示图像的对比度,相较于采用LCOS(Liquid Crystal on Silicon,硅基液晶)、LED(Light Emitting Diode,发光二极管)或OLED(OrganicLight-Emitting Diode,有机发光二极管)的光源,能够省去照明组件和准直镜组,从而实现极简架构。相应地,光束组为激光光束组,光源模组可以是激光生成器,其包括有至少一个激光芯片,单个激光芯片用于产生一个波长的激光光束。当需要显示多个显示图像时,各个激光芯片可与多个整形模组实现一一对应的关系,而各个整形模组可以将与之对应的激光光束分时整形为具有至少两种发散角的激光光束。不同发散角的激光光束经过整形模组的分时整形处理,可通过多层全息功能层反射至视网膜,以形成多个显示图像。
可选地,激光芯片可以至少为红绿蓝(RGB,Red、Green、Blue)三色芯片,三色芯片可以是集成了红绿蓝三色的一体芯片,也可以是由三种单色芯片组合而成,激光芯片采用红绿蓝三色模式,通过对上述三色通道的变化和叠加来获得各种颜色。
可选地,微电子扫描振镜可为压电驱动的二维扫描振镜,其通过相互垂直的两扭力梁来带动反射镜扭动,以实现对激光光束的高通二维扫描反射,微电子扫描振镜也可以通过两个一维扫描振镜组合形成,并实现对激光光束的高通二维扫描反射。
在本申请的另一些实施例中,如图7和图8所示,智能眼镜20还包括中继光学物件25,中继光学物件25设置于微电子扫描振镜向光融合器12投射光束组的路径上,并用于对投射至光融合器12的光束组实现中继整形。具体地,中继光学物件25可对经过微电子扫描振镜投射至光融合器12的光束组实现矫正整形,也可将光束组的各光束分时整形为至少两种发散角的光束。
可选地,中继光学组件可以是光学透镜组或二元光学元件,中继光学组件可以为光引擎22的组成部分,也可独立于光引擎22存在,在本实施例中,为便于说明,将该中继光学组件绘制在光引擎22的外部。
在本申请的另一些实施例中,投射至光融合器12的光束组为准直光束组、发散光束组或汇聚光束组。具体地,光束组可以是准直光束组、发散光束组或汇聚光束组中的任一种。在本实施例中,考虑到对准直光束组进行矫正整形和分时整形较为容易,故光束组可具体选择为准直光束组。
在本申请的另一些实施例中,如图8所示,智能眼镜20还可包括显示控制器26,显示控制器26与光引擎22电连接,并用于向光源模组、整形模组和微电子扫描振镜发送显示图像的配置信息。具体是,显示控制器26可对显示图像进行解码与渲染,进而生成配置信息,光引擎22根据配置信息实现对显示图像的调制。
可选地,显示控制器26还可以与外界设备电连接,电连接方式包括有线或无线连接,无线连接包括WIFI连接或蓝牙连接等,这样显示控制器26可以从其他外界设备获取相关信息,以丰富配置信息,进而为光引擎22提供更准确的配置信息。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (15)

  1. 一种镜片,其特征在于:包括镜片主体和光融合器;
    所述镜片主体包括内侧镜面和外侧镜面,所述内侧镜面的面型参数固定,所述外侧镜面的面型参数根据所述内侧镜面的面型参数取值,所述内侧镜面的面型参数和所述外侧镜面的面型参数组合确定所述镜片主体的屈光度;
    所述光融合器设置于所述镜片主体的内侧镜面,或者,所述光融合器设置于所述内侧镜面和所述外侧镜面之间的区域内,所述光融合器用于将光束组反射至眼球的视网膜,以形成显示图像。
  2. 根据权利要求1所述的镜片,其特征在于:所述内侧镜面的面型参数和所述外侧镜面的面型参数均为曲率半径值,所述内侧镜面的曲率半径值固定,所述外侧镜面的曲率半径值根据所述内侧镜面的曲率半径值取值,所述内侧镜面的曲率半径值和所述外侧镜面的曲率半径值组合确定所述镜片主体的屈光度。
  3. 根据权利要求2所述的镜片,其特征在于:对应于所述内侧镜面的曲率半径值,所述外侧镜面的曲率半径值具有相应的数值范围,在该数值范围内的所述外侧镜面的每一曲率半径值分别和所述内侧镜面的曲率半径值组合确定出相应的屈光度。
  4. 根据权利要求2所述的镜片,其特征在于:所述内侧镜面的曲率半径值为62.00mm,所述外侧镜面的曲率半径值的取值范围为191.01mm至361.53mm,所述屈光度的数值范围为-8.00D至-6.50D,所述显示图像的虚像距为0.13m。
  5. 根据权利要求2所述的镜片,其特征在于:所述内侧镜面的曲率半径值为82.00mm,所述外侧镜面的曲率半径值的取值范围为182.48mm至566.50mm,所述屈光度的数值范围为-6.25D至-4.00D,所述显示图像的虚像距为0.16m。
  6. 根据权利要求2所述的镜片,其特征在于:所述内侧镜面的曲率半径值为
    118.00mm,所述外侧镜面的曲率半径值的取值范围为195.71mm至328.94mm,所述屈光度的数值范围为-3.25D至-2.00D,所述显示图像的虚像距为0.27m。
  7. 根据权利要求2所述的镜片,其特征在于:所述内侧镜面的曲率半径值为255.00mm,所述外侧镜面的曲率半径值的取值范围为255.75mm至997.23mm,所述屈光度的数值范围为-1.75D至0.00D,所述显示图像的虚像距为0.57m。
  8. 根据权利要求1~7任一项所述的镜片,其特征在于:所述光融合器呈膜状,且贴设于所述内侧镜面。
  9. 根据权利要求1~7任一项所述的镜片,其特征在于:所述光融合器设置于所述内侧镜面和所述外侧镜面之间的区域内,且和所述镜片主体一体成型。
  10. 根据权利要求1~7任一项所述的镜片,其特征在于:所述镜片主体包括第一基片和第二基片,所述光融合器设置于所述第一基片和所述第二基片之间,所述第一基片背向所述光融合器的一面为所述外侧镜面,所述第二基片背向所述光融合器的一面为所述内侧镜面。
  11. 一种智能眼镜,其特征在于:包括镜架、光引擎和权利要求1~10任一项所述的镜片,所述镜片设置于所述镜架的镜框内,所述光引擎设置于所述镜架的镜腿上,并用于向所述光融合器发射光束组。
  12. 根据权利要求11所述的智能眼镜,其特征在于:所述光引擎包括光源模组、整形模组和微电子扫描振镜;
    所述微电子扫描振镜靠近所述镜片设置,所述整形模组位于所述微电子扫描振镜和所述光源模组之间;
    所述光源模组用于发射所述光束组;
    所述整形模组用于对自所述光源模组射出的所述光束组实现整形;
    所述微电子扫描振镜用于将完成整形的所述光束组扫描投射至所述光融合器。
  13. 根据权利要求12所述的智能眼镜,其特征在于:所述智能眼镜还包括中继光学物件,所述中继光学物件设置于所述微电子扫描振镜向所述光融合器投射所述光束组的路径上,并用于对投射至所述光融合器的所述光束组实现整形。
  14. 根据权利要求11~13任一项所述的智能眼镜,其特征在于:所述光束组为准直光束组、发散光束组或汇聚光束组。
  15. 根据权利要求11~13任一项所述的智能眼镜,其特征在于:所述智能眼镜为增强现实眼镜或混合现实眼镜。
PCT/CN2021/114458 2020-10-29 2021-08-25 镜片及智能眼镜 WO2022088887A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202022459665.2 2020-10-29
CN202022459665.2U CN214011669U (zh) 2020-10-29 2020-10-29 镜片及智能眼镜

Publications (1)

Publication Number Publication Date
WO2022088887A1 true WO2022088887A1 (zh) 2022-05-05

Family

ID=77302021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114458 WO2022088887A1 (zh) 2020-10-29 2021-08-25 镜片及智能眼镜

Country Status (2)

Country Link
CN (1) CN214011669U (zh)
WO (1) WO2022088887A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN214011669U (zh) * 2020-10-29 2021-08-20 华为技术有限公司 镜片及智能眼镜

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014209431A1 (en) * 2013-06-27 2014-12-31 Fusao Ishii Wearable display
CN106170729A (zh) * 2013-03-25 2016-11-30 英特尔公司 用于具有多个出射光瞳的头戴显示器的方法和设备
CN107466375A (zh) * 2015-04-03 2017-12-12 依视路国际集团(光学总公司) 用于增强现实的方法和系统
CN109073897A (zh) * 2016-07-12 2018-12-21 依视路国际公司 用于为电子信息装置提供显示装置的方法
CN109459859A (zh) * 2018-12-21 2019-03-12 舒伟 一种近眼显示系统及眼镜式虚拟显示器
CN109633905A (zh) * 2018-12-29 2019-04-16 华为技术有限公司 多焦平面显示系统以及设备
CN214011669U (zh) * 2020-10-29 2021-08-20 华为技术有限公司 镜片及智能眼镜

Also Published As

Publication number Publication date
CN214011669U (zh) 2021-08-20

Similar Documents

Publication Publication Date Title
US11378803B2 (en) Methods and systems for augmented reality
CN107771297B (zh) 用于虚拟和增强现实近眼显示器的自由曲面型纳米结构表面
US10416456B2 (en) Methods and systems for augmented reality
CN105572877B (zh) 一种头戴式增强现实智能显示装置
CN205139487U (zh) 相机
US10429650B2 (en) Head-mounted display
US8384999B1 (en) Optical modules
US10746994B2 (en) Spherical mirror having a decoupled aspheric
JP2022544895A (ja) 体積ブラッグ格子ベースの導波路ディスプレイにおける分散補償
KR101883221B1 (ko) 픽셀 렌즈를 갖춘 콜리메이팅 디스플레이
US10429648B2 (en) Augmented reality head worn device
US9632312B1 (en) Optical combiner with curved diffractive optical element
KR20180045864A (ko) 증강 현실을 위한 방법 및 시스템
WO2018103551A1 (zh) 一种自由曲面棱镜组及使用其的近眼显示装置
WO2019062480A1 (zh) 近眼光学成像系统、近眼显示装置及头戴式显示装置
CN107850781A (zh) 具有处方整合的自由空间光学组合器
JP2002287077A (ja) 映像表示装置
TWI789404B (zh) 導光板及圖像顯示裝置
CN217484607U (zh) 一种头戴设备
WO2022088887A1 (zh) 镜片及智能眼镜
CN116974082A (zh) 一种屈光度可调的近眼显示光学装置
JP7183611B2 (ja) 虚像表示装置
US20230393399A1 (en) Zonal lenses for a head-mounted display (hmd) device
CN218917805U (zh) 一种近眼显示系统及近眼显示装置
CN115793245A (zh) 一种近眼显示系统及近眼显示装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884619

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884619

Country of ref document: EP

Kind code of ref document: A1