WO2018179018A1 - Method and device for viewing augmented reality images - Google Patents


Info

Publication number
WO2018179018A1
WO2018179018A1 (PCT/IT2018/000048)
Authority
WO
WIPO (PCT)
Prior art keywords
digital image
screen
processing unit
user
viewing
Prior art date
Application number
PCT/IT2018/000048
Other languages
French (fr)
Inventor
Fabio MASCI
Original Assignee
THE EDGE COMPANY S.r.l.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by THE EDGE COMPANY S.r.l. filed Critical THE EDGE COMPANY S.r.l.
Publication of WO2018179018A1 publication Critical patent/WO2018179018A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 - Image reproducers
    • H04N13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0132 - Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134 - Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G02B2027/0178 - Eyeglass type
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00 - Details of stereoscopic systems
    • H04N2213/001 - Constructional or mechanical details

Definitions

  • the position of the first digital image is adjusted relative to the second digital image, in such a way that the two digital images are perfectly overlaid one on top of the other and are clear when viewed.
  • what is projected onto the viewing surface could be two digital images deriving from different filming of a pre-selected virtual object, independently of the image capturing of the real object and of the processing of the image of the object.
  • the adjustment that allows the two digital images, left and right, to be overlaid could even occur without having the real object as a reference.
  • the position of the resulting virtual image is then adjusted relative to the real object in such a way that it is overlaid on top of the real object.
  • one virtual camera 16 is shifted vertically, and therefore frames the virtual object from a different position. This results in a vertical shifting of the virtual object which is projected onto the respective screen 14, compensating for the offset between eye and screen. Therefore, the correction occurs during processing of the images by means of the virtual cameras 12, 16.
  • the virtual cameras 12, 16 frame the virtual object in such a way as to generate two digital images (step (a)).
  • the centre of each image, on the respective screen, is at the same height (step (b)).
  • the centre of the digital image on one of the two screens is shifted vertically (step (c)).
  • the method according to this second embodiment may be implemented both at application and operating system level.
  • the application will supply correctly calibrated images for the left and right eye.
  • the application will supply the images perfectly aligned on the horizontal axis and it will be the operating system, or another application, which will manage the vertical position of the images.
  • adjusting means 17 are connected to the supporting element 5 of the visor 2.
  • the adjusting means 17 may be constituted of a button or a wheel.
  • the method therefore allows vertical translation of one digital image relative to the other in such a way as to compensate for the difference in the height of the eyes relative to the screens.
  • the digital images are therefore viewed perfectly overlaid one on top of the other, therefore as a single, clear and in focus three-dimensional image.
  • the horizontal translation of one of the images allows the virtual image to be precisely overlaid on top of the real object, compensating for any differences between the device design distance and the actual distance between the user and the real object.
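Steps (a) to (c) above can be sketched as follows. This is purely illustrative: the function name, pixel units and the choice of shifting the right-hand image are assumptions, not details taken from the patent.

```python
# Sketch of the vertical calibration in steps (a)-(c): two digital images
# are generated with their centres at the same height (steps (a)-(b)),
# then the centre of one image is shifted vertically to compensate for a
# height asymmetry of the eyes relative to the screens (step (c)).

def calibrate_vertical(left_centre, right_centre, dy_right_px):
    """Return the calibrated image centres.

    left_centre / right_centre: (x, y) centres of the two digital
    images on their respective screens, initially at the same height.
    dy_right_px: vertical shift, in pixels, applied to the right image.
    """
    assert left_centre[1] == right_centre[1], "centres must start level"
    lx, ly = left_centre
    rx, ry = right_centre
    return (lx, ly), (rx, ry + dy_right_px)

left, right = calibrate_vertical((640, 360), (640, 360), dy_right_px=-4)
# left stays at (640, 360); right becomes (640, 356)
```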

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for viewing augmented reality images by means of a device (1) comprising a visor (2) designed to be positioned by a user at the eyes and a processing unit (3) connectable to said visor (2), wherein said visor (2) comprises at least one screen (10), at least one projector (9) of at least one digital image towards said screen (10) and at least one optical system (11) for viewing said digital image on a viewing area located at a real object at a predetermined distance from the user, and wherein said processing unit (3) is designed to generate at least one digital image to be sent to said projector (9), comprises a step of adjusting the position of said digital image in which said digital image, preferably the centre of said digital image, is shifted in such a way that it is overlaid on top of the real object.

Description

METHOD AND DEVICE FOR VIEWING AUGMENTED REALITY IMAGES
Technical Field
This invention relates to a method and a device for viewing augmented reality images. This invention also relates to a computer program.
Background Art
Augmented Reality (AR) is a technology that allows the combination of digital data generated by an electronic device with the real environment. By means of augmented reality engines, information linked to a particular real object becomes interactive, and human sensory perception is enhanced through real-time overlaying and modification of the flow of information arriving from an image detector, for example a camera. This is achieved by means of algorithms for the recognition of real objects, which allow alignment of the computer-generated image of the object with the actual object.
At present, there are two methods for viewing augmented reality images. A first method involves framing the surrounding environment with the camera of a portable device, for example a smartphone or a tablet, and, by means of a program installed on the portable device, recognising the framed image and, based on it, generating digital content which is overlaid on the real environment.
A second method involves the use of visors, also called smart glasses, which consist of a sort of spectacles or a helmet equipped with one or more transparent screens positioned in front of each eye of the user, on which the "virtual" images are reproduced. The virtual images are viewed at the real object, at a predetermined distance from the user, so as to overlay them on the real object. Overlaying is usually calculated based on a distance of approximately 8 metres from the real object.
In the most recent visors the virtual images can be stereoscopic, that is to say, give a three-dimensional effect similar to that created by the human binocular vision system. In order to create real stereoscopic images, the scene is filmed using two cameras which are spaced at a distance of around 65 mm, corresponding to the normal distance between two eyes. In the case of virtual images generated by the computer in three-dimensional graphics, the stereoscopic effect is obtained using "virtual" cameras, that is to say, special programs suitable for filming a digital image from two different viewpoints, "right" and "left", in such a way as to generate two images that are slightly different from each other depending on the lateral shifting of one virtual camera relative to the other.
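The two-virtual-camera arrangement can be sketched as follows. The roughly 65 mm baseline comes from the text above; the function and variable names are illustrative assumptions, not taken from the patent.

```python
# Sketch of the stereo "virtual camera" setup: two cameras separated
# laterally by approximately the interpupillary distance (~65 mm), so
# that the two renderings of the virtual object differ only by the
# lateral shift of one camera relative to the other.

IPD_MM = 65.0  # typical distance between the two eyes, in millimetres

def stereo_camera_positions(centre_xyz, ipd_mm=IPD_MM):
    """Return (left, right) virtual-camera positions, each shifted by
    half the baseline along the horizontal axis from a common centre."""
    cx, cy, cz = centre_xyz
    half = ipd_mm / 2.0
    return (cx - half, cy, cz), (cx + half, cy, cz)

left, right = stereo_camera_positions((0.0, 0.0, 0.0))
# left at x = -32.5 mm, right at x = +32.5 mm
```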
Viewing augmented reality images by means of visors is not always optimum. In fact, it is not always possible to have perfect overlaying of the real object and the digital image.
Moreover, in the case of three-dimensional images, the stereoscopic effect may be lost, or the virtual images viewed may not be clear and in focus. The virtual image may even be seen as a double image.
Such effects make the use of visors unsatisfactory or even troublesome.
Disclosure of the Invention
The aim of this invention is to improve the viewing of augmented reality images, allowing them to be viewed clear and in focus. Another aim of this invention is to allow optimum overlaying of augmented reality images on the real object. The invention achieves those aims thanks to a method for viewing augmented reality images with the features defined in claim 1.
The invention is based on the fact that the eye of the user may not correspond to the point of the screen onto which the virtual image is projected in such a way as to correctly view the image in the viewing area located at a predetermined distance from the user. The eye may be at a different height or laterally shifted relative to the screen. It should be considered that when using augmented reality visors, the eye of the user tends to focus on the real objects, but at the same time it has to view the virtual images that are projected onto the screen. If the reproduction of the digital image is not perfectly calibrated relative to the eye of the user, the user will view an image higher or lower than the real object or at a different depth.
In the case of viewing stereoscopic images, the calibration must take into account both eyes and their relative distance and height. An asymmetry of the eyes may mean separately viewing one virtual image relative to the other and, therefore, not just absence of the stereoscopic effect, but also doubling of the image to be viewed.
With the method according to this invention it is possible to easily calibrate the visor in such a way as to shift the virtual image in order to compensate for the difference in the position between the eye and the screen and/or the asymmetry between the two eyes.
Advantageously, the method comprises shifting the image at least in a vertical direction. This allows compensation of the height difference between the eye and the centre of the screen and/or the height difference between the eyes of the user and centres the virtual image relative to the real object.
Preferably, the method comprises shifting the image at least in a horizontal direction. This allows compensation of the offset on the horizontal axis between the eye and the screen, for correctly rendering the depth of the virtual image.
According to one advantageous embodiment, the digital image is shifted on the screen. In devices equipped with a virtual camera, the image is preferably shifted by adjusting the position of the virtual camera. Alternatively, the image is shifted by adjusting the position of the screen and/or of the projector. Therefore, the adjustment is quick and easy, allowing the user to clearly focus on and view the virtual image.
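As a rough sketch of this per-screen adjustment (the class name, the pixel units and the sign conventions are assumptions, not details from the patent), the calibration amounts to storing a horizontal and a vertical offset and applying them to the centre of the digital image before display:

```python
# Sketch: per-screen calibration as a stored (dx, dy) offset applied to
# the centre of the digital image. A vertical offset compensates the
# height difference between eye and screen centre; a horizontal offset
# compensates the lateral shift, which affects perceived depth.

class ScreenCalibration:
    def __init__(self, dx_px=0, dy_px=0):
        self.dx_px = dx_px  # horizontal offset in pixels (depth correction)
        self.dy_px = dy_px  # vertical offset in pixels (height correction)

    def apply(self, centre_xy):
        """Return the shifted centre of the digital image on the screen."""
        x, y = centre_xy
        return (x + self.dx_px, y + self.dy_px)

cal = ScreenCalibration(dx_px=3, dy_px=-5)
print(cal.apply((640, 360)))  # -> (643, 355)
```

The same offsets could equally be realised mechanically, by moving the screen or the projector, as the text above notes.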
In devices in which three-dimensional images are viewed, an adjustment is made to the position of a first image relative to a second image. Preferably, the adjustment is made by shifting one virtual camera relative to the other. In this way, it is possible to vary the alignment between the two images which will form the three-dimensional image, to compensate for an asymmetry of the eyes relative to the screens of the visor.
The invention also relates to a device for viewing augmented reality images with the features defined in claim 9.
Brief Description of the Drawings
Further advantages and features of this invention are more apparent in the detailed description which follows, with reference to the accompanying drawings, which illustrate an example of it without limiting the scope of the invention, in which: Figure 1 illustrates an example embodiment of a device for viewing augmented reality images;
Figure 2 schematically illustrates a step of a first embodiment of the method for viewing augmented reality images;
Figure 3 schematically illustrates a step of a second embodiment of the method for viewing augmented reality images;
Figure 4 schematically illustrates a step of a third embodiment of the method for viewing augmented reality images.
Detailed Description of Preferred Embodiment of the Invention
In Figure 1 the numeral 1 denotes a device for viewing augmented reality images. That device comprises a visor 2, preferably spectacle-shaped, and a processing unit 3, connectable to the visor 2, for example by means of a cable 4. Alternatively, the processing unit 3 may be integrated in the visor 2 or wirelessly connected to the visor 2.
The visor 2 comprises at least one supporting element 5 for at least one substantially transparent or semi-transparent surface 6, configured like a spectacle lens, positioned at the eye of a user. In Figure 1, the supporting element 5 comprises a first side arm 51 and a second side arm 52, each connected to a respective substantially transparent or semi-transparent surface 6, 7. In an alternative embodiment, not illustrated, the supporting element comprises a substantially circular support suitable for being positioned on the head of a user. The device 1 also comprises an image capturing unit 8 for capturing an image of a real object, for example a stills or video camera, preferably located at a side arm 51. Alternatively, the image capturing unit 8 may be located in a central position relative to the side arms 51, 52.
Connected to the supporting element 5, the device comprises at least one projector 9 for projecting at least one image onto a respective screen 10 located at one of the transparent surfaces 6. Advantageously, the screen 10 is substantially transparent or semi-transparent, allowing the user to see the surrounding environment. The screen 10 may preferably be constituted of a prism or a translucent lens that is reflective or transparent depending on the viewing angle, allowing a view of the real environment and, simultaneously, of the virtual content, as on an LCD screen.
The device 1 comprises at least one optical system 11 for viewing, at a predetermined distance from the user, the images projected onto the screen 10 positioned near the eye of the user.
The processing unit 3 is designed to generate digital images starting from the images of a real object captured by the image capturing unit 8, by means of special programs. Those digital images are sent to the projector 9 and then to the screen 10.
The processing unit 3 comprises at least one virtual camera 12 (Figures 2 and 3), that is to say, a program for filming the image generated on the computer of an object, that is to say, a virtual object. The virtual camera 12 has the same functionality as a real camera, but acts on a virtual object and generates a digital image of the virtual object which depends on the parameters used for the filming, for example the angle, the zoom, the focal length. Those parameters are managed by the processing unit 3.
In the embodiment in Figure 1 the device 1 preferably comprises a first projector 9 connected to the first side arm 51 of the supporting element 5 and a second projector 13 connected to the second side arm 52 of the supporting element 5. The digital images are projected onto a respective first screen 10 and second screen 14 and viewed at a distance by means of a first optical system 11 and a second optical system 15. Since the projectors 9, 13 are positioned at a predetermined distance from each other, it is possible to obtain a stereoscopic effect and to view the images at a distance in three dimensions. The device 1 also comprises two virtual cameras 12, 16 (Figures 2 and 3), which are positioned at different distances by the processing unit 3 in such a way as to film a virtual object from different viewpoints and to generate two similar images of the same virtual object which are then combined to obtain the stereoscopic effect.
The device 1 is designed in such a way that each screen 10, 14 is located in front of one eye of the user. Therefore, starting with a real object framed by the image capturing unit 8, by means of the processing unit 3 a virtual object is generated which is filmed by the two virtual cameras 12, 16 in such a way as to supply two similar digital images of the virtual object which are located at the same height but are not completely overlaid one on top of the other. Each image is projected towards the respective screen 10, 14 and, thanks to the optical systems 11, 15, the user perceives only a single three-dimensional image which will be perfectly overlaid over the real image that the user sees through the substantially transparent surfaces 6, 7 of the device.
However, if the screens 10, 14 are at a different height relative to the eyes of the user, the user will perceive an image located higher or lower than the real object, or two digital images located at different heights which will, therefore, make the final virtual image unclear and out of focus. If the screens are laterally shifted relative to the eyes of the user, the user will perceive an image depth that does not correspond to the real object.
The method according to this invention comprises shifting the digital image in such a way that the digital image is overlaid on top of the real object. Preferably, the method comprises shifting the centre of the digital image. Advantageously, the image is shifted at least in a vertical direction and/or in a horizontal direction. According to this invention the digital image is shifted directly relative to the real object, and not relative to the image of the object captured and processed by the device. In other words, the digital image is aligned with the real object independently of the image capturing unit. The alignment with the real object could be performed by projecting a pre-selected image onto the viewing surface, independently of the capturing of the real object and processing of the image of the object.
In contrast, the image capturing unit may be used for calculating the distance between the user and the real object, and for performing the corresponding calibration.
If the projection of the digital image is set to a predetermined distance between user and real object, for example 8 metres, when the user moves towards or away from the object, the digital image could no longer coincide with the real object. In this case, adjustment of the position should occur each time during the movement of the user.
According to an advantageous embodiment of the method, there is a first adjustment of the position of the image at a first distance from the real object, in particular at a near distance; a second adjustment of the position of the image at a second distance from the real object which is different from the first, in particular at a distance greater than the first; and a calculation of the adjustment of the position of the image corresponding to an intermediate distance. The adjustment corresponding to the intermediate distance is preferably preset in the device. In this way, the user can obtain a clear image that is overlaid on top of the real object even when he or she moves to different distances from the object.
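The calculation for the intermediate distance described above can be sketched as follows. This is a minimal illustration only: it assumes a simple linear model between the two calibrated adjustments, and all names (`interpolate_offset`, offsets expressed in pixels) are hypothetical and do not appear in the patent.

```python
def interpolate_offset(near_d, near_offset, far_d, far_offset, distance):
    """Estimate the image-position offset (e.g. in pixels) at an
    intermediate user-object distance, given the offsets found during
    the first (near) and second (far) adjustments."""
    if distance <= near_d:
        return near_offset
    if distance >= far_d:
        return far_offset
    # Linear blend between the two calibrated adjustments.
    t = (distance - near_d) / (far_d - near_d)
    return near_offset + t * (far_offset - near_offset)
```

For example, with an offset of 10 px calibrated at 1 m and 2 px at 8 m, a user standing at 4.5 m would receive an interpolated offset of 6 px.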
The method may be used for viewing both two-dimensional and three-dimensional images. In the case of two-dimensional images, the virtual image can be shifted in such a way that it is always overlaid on top of the real object, without appearing shifted vertically or at a different depth. This may occur if the eye of the user is shifted relative to the screen. Again in this case, the digital image is not aligned relative to an image of the real object, but relative to the real object itself, so as to be overlaid on top of the object.
The method is advantageously applied in the case of three-dimensional images, in which two different images are generated and projected, which exacerbates the focusing problem if each eye is not aligned with its respective screen.
In the case of stereoscopic digital images, the problem may concern not only alignment with the real object, but also the virtual image itself, which may not be viewed clearly or may be seen as a double image.
According to an advantageous embodiment of the method, the position of the first digital image is adjusted relative to the second digital image, in such a way that the two digital images are perfectly overlaid one on top of the other and are clear when viewed. Again in this case, what is projected onto the viewing surface could be two digital images deriving from different filming of a pre-selected virtual object, independently of the image capturing of the real object and of the processing of the image of the object. In other words, the adjustment that allows the two digital images, left and right, to be overlaid could even occur without having the real object as a reference. Once the two digital images have been aligned with each other in such a way as to have optimum viewing as a single three-dimensional digital image, the position of the latter is adjusted relative to the real object in such a way that it is overlaid on top of the real object.
The figures illustrate different preferred embodiments of the method according to this invention, which are applied to three-dimensional images obtained using stereoscopic techniques, in which the device requires vertical calibration.
In a first embodiment of the method according to this invention, illustrated in Figure 2, the centre of the digital image is shifted in a vertical direction by acting on a virtual camera 16.
In the processing unit 3, and in particular in the instructions for framing the virtual object by means of the virtual cameras 12, 16, one virtual camera 16 is shifted vertically, and therefore frames the virtual object from a different position. This results in a vertical shifting of the virtual object which is projected onto the respective screen 14, compensating for the offset between eye and screen. Therefore, the correction occurs during processing of the images by means of the virtual cameras 12, 16.
In a second embodiment of the method according to this invention, illustrated in Figure 3, the virtual cameras 12, 16 frame the virtual object in such a way as to generate two digital images (step (a)). The centre of each image, on the respective screen, is at the same height (step (b)). In order to compensate for the offset between eye and screen, the centre of the digital image on one of the two screens is shifted vertically (step (c)).
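Steps (a) to (c) of this second embodiment can be sketched as follows. This is an illustrative assumption, not the patent's implementation: image centres are represented as pixel coordinates, only the image on one of the two screens is corrected, and the function and parameter names are hypothetical.

```python
def calibrate_second_image(centre_1, centre_2, dy):
    """Steps (a)-(b): the two digital images start with their centres at
    the same height on the respective screens. Step (c): the centre of
    the image on one of the two screens is shifted vertically by dy
    pixels to compensate for the offset between eye and screen."""
    x1, y1 = centre_1
    x2, y2 = centre_2
    assert y1 == y2, "step (b): centres must start at the same height"
    return (x1, y1), (x2, y2 + dy)
```

For instance, with both centres initially at (320, 240) and a measured eye-screen offset of 12 px, only the second centre moves, to (320, 228).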
The method according to this second embodiment may be implemented both at application and operating system level. In the former case the application will supply correctly calibrated images for the left and right eye. In the latter case the application will supply the images perfectly aligned on the horizontal axis, and it will be the operating system, or another application, which will manage the vertical position of the images.
In a third embodiment of the method according to this invention, illustrated in Figure 4, action is taken directly on the position of one screen 14 and/or of one projector 13, using adjusting means 17 for vertically translating the screen 14 and/or the projector 13 so that the digital image viewed is clear and is overlaid on top of the real object. Advantageously, the adjusting means 17 are connected to the supporting element 5 of the visor 2. For example, they may consist of a button or a wheel.
In the case of three-dimensional images, the method therefore allows vertical translation of one digital image relative to the other in such a way as to compensate for the difference in the height of the eyes relative to the screens. The digital images are therefore viewed perfectly overlaid one on top of the other, therefore as a single, clear and in focus three-dimensional image. The horizontal translation of one of the images allows the virtual image to be precisely overlaid on top of the real object, compensating for any differences between the device design distance and the actual distance between the user and the real object.
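The horizontal translation mentioned above depends on the mismatch between the design distance and the actual user-object distance. One way to estimate it, which is not given in the patent and is stated here only as an assumption, is the standard pinhole-camera disparity relation (interocular distance × focal length / distance); all names and parameters below are hypothetical.

```python
def horizontal_correction_px(ipd_m, focal_px, design_dist_m, actual_dist_m):
    """Horizontal shift (in pixels) to apply to one of the two digital
    images when the user stands at actual_dist_m instead of the design
    distance the device was calibrated for."""
    # Disparity of a point at distance d under a pinhole model: ipd * focal / d.
    design_disparity = ipd_m * focal_px / design_dist_m
    actual_disparity = ipd_m * focal_px / actual_dist_m
    return actual_disparity - design_disparity
```

Under these assumptions, with an interocular distance of 0.065 m and a focal length of 1000 px, a user moving from the 8-metre design distance to 4 metres would call for a shift of about 8.1 px.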

Claims

1. A method for viewing augmented reality images by means of a device (1) comprising a visor (2) designed to be positioned by a user at the eyes and a processing unit (3) connectable to said visor (2), wherein said visor (2) comprises at least one screen (10), at least one projector (9) of at least one digital image towards said screen (10) and at least one optical system (11) for viewing said digital image on a viewing area located at a real object at a predetermined distance from the user, and wherein said processing unit (3) is designed to generate at least one digital image to be sent to said projector (9), wherein the method comprises a step of adjusting the position of said digital image in which said digital image, preferably the centre of said digital image, is shifted in such a way that it is overlaid on top of the real object.
2. The method according to claim 1, characterised in that said digital image is shifted at least in a vertical direction and/or at least in a horizontal direction.
3. The method according to claim 1 or 2, characterised in that the position of said digital image on said screen (10) is shifted.
4. The method according to any of the preceding claims, wherein said processing unit (3) comprises at least one virtual camera (12) for filming a virtual object generated by said processing unit (3) and generating a digital image of said virtual object to be sent to the screen (10), characterised in that said digital image is shifted by adjusting the position of said virtual camera (12).
5. The method according to any of the preceding claims, characterised in that said digital image is shifted by adjusting the position of said screen (10) and/or of said projector (9).
6. The method according to any of the preceding claims, wherein said device comprises at least one first screen (10) designed to be positioned substantially in front of one eye of a user; at least one second screen (14) designed to be positioned substantially in front of the other eye of the user; at least one first projector (9) of a first digital image towards said first screen (10), at least one second projector (13) of a second digital image substantially corresponding to said first digital image towards said second screen (14), at least one first optical system (11) for viewing said first digital image on a viewing area located at a predetermined distance from the user and at least one second optical system (15) for viewing said second digital image on the viewing area located at a predetermined distance from the user, characterised in that the position of said first digital image relative to said second digital image is adjusted in such a way that said digital images are overlaid one on top of the other in the viewing area.
7. The method according to claim 6, wherein said processing unit (3) comprises a first virtual camera (12) for filming a virtual object generated by said processing unit (3) and generating said first digital image of said virtual object to be sent to said first screen (10), and a second virtual camera (16) for filming a virtual object generated by said processing unit (3) and generating said second digital image of said virtual object to be sent to said second screen (14), characterised in that one of said digital images is shifted by adjusting the position of one virtual camera (12) relative to the other (16).
8. A computer program, characterised in that it comprises instructions for carrying out the step of adjusting the position of said digital image in the method according to any of the preceding claims.
9. A device for viewing augmented reality images comprising a visor (2) designed to be positioned by a user at the eyes, wherein said visor (2) comprises at least one screen (10), preferably transparent or semi-transparent, at least one projector (9) of at least one digital image towards said screen (10) and at least one optical system (11) for viewing said digital image on a viewing area located at a real object at a predetermined distance from the user, wherein said device (1) comprises a processing unit (3) connectable to said visor (2) and designed to generate at least one digital image to be sent to said projector (9), characterised in that said processing unit (3) is designed to shift said digital image, preferably the centre of said digital image, in such a way that said digital image is overlaid on top of the real object.
10. The device according to claim 9, characterised in that said processing unit (3) is designed to shift said digital image at least in a vertical direction and/or at least in a horizontal direction.
11. The device according to claim 9 or 10, characterised in that said processing unit (3) comprises at least one virtual camera (12) for filming a virtual object generated by said processing unit (3) and generating a digital image of said virtual object to be sent to the screen (10), and said processing unit (3) is designed to adjust the position of said virtual camera (12).
12. The device according to any of claims 9 to 11, characterised in that it comprises adjusting means (17) for adjusting the position of said screen (10) and/or of said projector (9), said means (17) preferably being connected to said visor (2).
13. The device according to any of claims 9 to 12, wherein the visor (2) comprises at least one first screen (10) designed to be positioned substantially in front of one eye of a user; at least one second screen (14) designed to be positioned substantially in front of the other eye of the user; at least one first projector (9) of a first digital image towards said first screen (10), at least one second projector (13) of a second digital image substantially corresponding to said first digital image towards said second screen (14), at least one first optical system (11) for viewing said first digital image on a viewing area located at a predetermined distance from the user and at least one second optical system (15) for viewing said second digital image on the viewing area located at a predetermined distance from the user, characterised in that said processing unit (3) is designed to adjust the position of said first digital image relative to said second digital image in such a way that said digital images are overlaid one on top of the other in the viewing area.
14. The device according to claim 13, wherein said processing unit (3) comprises a first virtual camera (12) for filming a virtual object generated by said processing unit (3) and generating said first digital image of said virtual object to be sent to said first screen (10), and a second virtual camera (16) for filming a virtual object generated by said processing unit (3) and generating said second digital image of said virtual object to be sent to said second screen (14), characterised in that said processing unit (3) is designed to adjust the position of one virtual camera (16) relative to the other (12).
PCT/IT2018/000048 2017-03-30 2018-03-29 Method and device for viewing augmented reality images WO2018179018A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102017000035014A IT201700035014A1 (en) 2017-03-30 2017-03-30 METHOD AND DEVICE FOR THE VISION OF INCREASED IMAGES
IT102017000035014 2017-03-30

Publications (1)

Publication Number Publication Date
WO2018179018A1 true WO2018179018A1 (en) 2018-10-04

Family

ID=59811728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IT2018/000048 WO2018179018A1 (en) 2017-03-30 2018-03-29 Method and device for viewing augmented reality images

Country Status (2)

Country Link
IT (1) IT201700035014A1 (en)
WO (1) WO2018179018A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120120103A1 (en) * 2010-02-28 2012-05-17 Osterhout Group, Inc. Alignment control in an augmented reality headpiece
US20140268360A1 (en) * 2013-03-14 2014-09-18 Valve Corporation Head-mounted display
US20150103096A1 (en) * 2012-05-30 2015-04-16 Pioneer Corporation Display device, head mount display, calibration method, calibration program and recording medium
EP3009915A1 (en) * 2014-10-15 2016-04-20 Samsung Electronics Co., Ltd. Method and apparatus for processing screen using device
US20160140773A1 (en) * 2014-11-17 2016-05-19 Seiko Epson Corporation Head-mounted display device, method of controlling head-mounted display device, and computer program
US20160225191A1 (en) * 2015-02-02 2016-08-04 Daqri, Llc Head mounted display calibration
US9599825B1 (en) * 2015-09-30 2017-03-21 Daqri, Llc Visual indicator for transparent display alignment

Also Published As

Publication number Publication date
IT201700035014A1 (en) 2018-09-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18727060

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.01.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18727060

Country of ref document: EP

Kind code of ref document: A1