US11956415B2 - Head mounted display apparatus - Google Patents

Head mounted display apparatus

Info

Publication number
US11956415B2
Authority
US
United States
Prior art keywords
image
lens
camera
information
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/150,431
Other versions
US20230156176A1 (en)
Inventor
Osamu Kawamae
Kenichi Shimada
Manabu Katsuki
Megumi Kurachi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxell Ltd
Original Assignee
Maxell Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxell Ltd filed Critical Maxell Ltd
Priority to US18/150,431
Publication of US20230156176A1
Assigned to MAXELL, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHIMADA, KENICHI; KAWAMAE, OSAMU; KATSUKI, MANABU; KURACHI, MEGUMI
Application granted
Publication of US11956415B2
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0129Head-up displays characterised by optical features comprising devices for correcting parallax
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Abstract

The occlusion is faithfully expressed even in the binocular vision in the AR display by a head mounted display apparatus or the like. A head mounted display apparatus 10 includes a lens 12, a lens 13, a camera 14, a camera 15, and a control processor 16. A CG image for a right eye is displayed on the lens 12. A CG image for a left eye is displayed on the lens 13. The camera 14 captures an image for the right eye. The camera 15 captures an image for the left eye. The control processor 16 generates the CG image for the right eye in which occlusion at the time of seeing by the right eye is expressed and the CG image for the left eye in which occlusion at the time of seeing by the left eye is expressed, based on the images captured by the cameras 14 and 15, and projects the generated CG image for the right eye and CG image for the left eye onto the lenses 12 and 13, respectively. A center of a lens of the camera 14 is provided at the same position as a center of the lens 12. A center of a lens of the camera 15 is provided at the same position as a center of the lens 13.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a Continuation of U.S. patent application Ser. No. 17/299,114, filed Jun. 2, 2021, which is the U.S. National Phase under 35 U.S.C. § 371 of International Application No. PCT/JP2018/044569, filed on Dec. 4, 2018, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
The present invention relates to a head mounted display apparatus, and particularly relates to a technology effective for expressing the faithful occlusion in the AR (Augmented Reality) display.
BACKGROUND ART
In recent years, various services using AR have been proposed. The AR is a technology for displaying an image created by a computer, a portable computer, or the like so as to be superimposed on an entire screen or a part of a real image.
In a three-dimensional space, there is a front-back relationship in addition to an up-down relationship and a left-right relationship. Therefore, a state in which an object on the far side is hidden behind an object on the near side to be invisible, that is, the occlusion occurs.
In the AR display, in order to provide the user with an AR image that does not give a sense of discomfort, it is important to faithfully express the above-mentioned occlusion by processing a three-dimensional video or the like.
As an AR image processing technology that takes this kind of occlusion into consideration, for example, a technology in which a CG (Computer Graphics) image region hidden behind a real object when expressing the occlusion is cut out in an elliptical shape to perform occlusion processing has been known (see, for example, Patent Document 1).
RELATED ART DOCUMENTS Patent Documents
Patent Document 1: US Patent Application Publication No. 2012/206452
SUMMARY OF THE INVENTION Problems to be Solved by the Invention
In the occlusion processing described above, for example, in the case of single vision that displays the AR on a smartphone or tablet, it is only necessary to simply express the occlusion in accordance with the distance relationship between the real object and the CG image.
However, in the case of a head mounted display apparatus using binocular vision such as AR glasses, in order to faithfully express the occlusion in AR display, the parallax and convergence angle of both eyes need to be taken into consideration, and there is a problem that the AR image becomes unnatural if only the occlusion processing for single vision is simply applied.
Further, in the processing technology for the occlusion in Patent Document 1 described above, since the processing in which the difference in vision due to the binocular parallax, convergence angle, and the like of the user is taken into consideration is not performed and the CG image region is cut out in an elliptical shape instead of cutting out the CG image region hidden behind the real object along the outline of the real object, there is a problem that the display in the border between the real object and the CG image becomes unnatural.
An object of the present invention is to provide a technology capable of faithfully expressing the occlusion even in the binocular vision in the AR display by a head mounted display apparatus or the like.
The above and other objects and novel features of the present invention will become apparent from the description of this specification and the accompanying drawings.
Means for Solving the Problems
The outline of the typical invention disclosed in this application will be briefly described as follows.
Namely, a typical head mounted display apparatus includes a first lens, a second lens, a first camera, a second camera, and an information processor. The first lens displays a CG image for a right eye. The second lens displays a CG image for a left eye. The first camera captures an image for the right eye. The second camera captures an image for the left eye.
The information processor generates the CG image for the right eye in which occlusion at the time of seeing by the right eye is expressed and the CG image for the left eye in which occlusion at the time of seeing by the left eye is expressed, based on the images captured by the first camera and the second camera, projects the generated CG image for the right eye onto the first lens, and projects the generated CG image for the left eye onto the second lens.
Further, a center of a lens of the first camera is provided at the same position as a center of the first lens (center of a pupil of a wearer). A center of a lens of the second camera is provided at the same position as a center of the second lens (center of a pupil of a wearer).
In particular, the information processor includes a first information generator, a second information generator, a first shielded region calculator, a second shielded region calculator, an image generator, and a display unit.
The first information generator generates occlusion information indicating a shielding relationship between the CG image to be displayed on the first lens and a real environment based on the images captured by the first camera and the second camera. The second information generator generates occlusion information indicating a shielding relationship between the CG image to be displayed on the second lens and the real environment based on the images captured by the first camera and the second camera.
The first shielded region calculator calculates a shielded region in which the CG image displayed on the first lens is shielded by an object when the CG image is seen by the right eye, based on the occlusion information generated by the first information generator. The second shielded region calculator calculates a shielded region in which the CG image displayed on the second lens is shielded by the object when the CG image is seen by the left eye, based on the occlusion information generated by the second information generator.
The image generator generates the CG image for the right eye in which the shielded region calculated by the first shielded region calculator is not displayed and the CG image for the left eye in which the shielded region calculated by the second shielded region calculator is not displayed, based on CG image generation data for generating the CG image. The display unit projects the CG image for the right eye and the CG image for the left eye generated by the image generator onto the first lens and the second lens, respectively.
Effects of the Invention
The effect obtained by the typical invention disclosed in this application will be briefly described as follows.
It is possible to faithfully express the occlusion in the AR display of the binocular vision.
BRIEF DESCRIPTIONS OF THE DRAWINGS
FIG. 1 is an explanatory diagram showing an example of the configuration in a head mounted display apparatus according to an embodiment;
FIG. 2 is an explanatory diagram showing an example of the configuration of a control processor in the head mounted display apparatus in FIG. 1 ;
FIG. 3 is an explanatory diagram showing an example of the configuration of an information processor in the control processor in FIG. 2 ;
FIG. 4 is an explanatory diagram showing an example of an AR display by the head mounted display apparatus in FIG. 1 ;
FIG. 5 is an explanatory diagram showing an example of image processing in which the binocular vision is taken into consideration;
FIG. 6 is an explanatory diagram showing an example of the convergence angle; and
FIG. 7 is an explanatory diagram showing an example of the difference in vision between the right eye and the left eye.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
The same members are denoted by the same reference signs in principle throughout the drawings for describing the embodiment, and the repetitive description thereof will be omitted.
The embodiment will be described in detail below.
<Configuration Example of Head Mounted Display Apparatus>
FIG. 1 is an explanatory diagram showing an example of the configuration in a head mounted display apparatus 10 according to the embodiment.
As shown in FIG. 1 , the head mounted display apparatus 10 includes a chassis 11, lenses 12, 13, cameras 14, 15, and a control processor 16. The chassis 11 constitutes a spectacle frame.
The lens 12 which is the first lens and the lens 13 which is the second lens are each fixed to a rim of the chassis 11. The lenses 12 and 13 fixed to the rim are provided so as to respectively correspond to the left and right eyes of the user who uses the head mounted display apparatus 10.
The camera 14 which is the first camera is provided on the upper part of the rim which fixes the lens 12, and the camera 15 which is the second camera is provided on the upper part of the rim which fixes the lens 13. These cameras 14 and 15 are, for example, stereo cameras. The stereo cameras capture an object by using the parallax between the two cameras.
The camera 14 is a camera corresponding to the right eye of the user, and the camera 15 is a camera corresponding to the left eye of the user. These cameras 14 and 15 take images, respectively.
Also, the center of the lens of the camera 14 is provided at a position substantially the same as the center of the lens 12. In other words, the center of the lens of the camera 14 is substantially the same as the center of the pupil of the right eye of the user.
Similarly, the center of the lens of the camera 15 is provided at a position substantially the same as the center of the lens 13. In other words, the center of the lens of the camera 15 is substantially the same as the center of the pupil of the left eye of the user.
FIG. 2 is an explanatory diagram showing an example of the configuration of the control processor 16 in the head mounted display apparatus 10 in FIG. 1 .
As shown in FIG. 2 , the control processor 16 includes an operation input unit 17, a controller 18, an information processor 19, and a communication interface 30. The operation input unit 17, the controller 18, the information processor 19, the communication interface 30, and the cameras 14 and 15 in FIG. 1 are connected to each other by a bus 31.
The operation input unit 17 is a user interface including, for example, a touch sensor. The touch sensor is composed of, for example, a capacitive touch panel that electrostatically detects the position of a user's finger or the like in contact with the touch sensor.
The controller 18 is responsible for the control in the head mounted display apparatus 10. The information processor 19 generates a CG image and displays it on the lenses 12 and 13. The information processor 19 is provided in, for example, a bridge portion of the chassis 11. The arrangement of the information processor 19 is not particularly limited, and the information processor 19 may be provided in, for example, a temple portion of the chassis 11.
The CG image generated by the information processor 19 is projected onto the lenses 12 and 13. The projected CG images are magnified and displayed by the lenses 12 and 13, respectively. Then, the CG images displayed on the lenses 12 and 13 are superimposed on the real image to form an AR image.
The communication interface 30 performs wireless communication of the information or the like by, for example, Bluetooth (registered trademark) or an Internet line.
<Configuration Example of Information Processor>
FIG. 3 is an explanatory diagram showing an example of the configuration of the information processor 19 in the control processor 16 in FIG. 2 .
As shown in FIG. 3 , the information processor 19 includes a depth information generator 20, a shielded region calculator 21, an AR image generator 22, and an AR display unit 23.
The depth information generator 20 that constitutes a first information generator and a second information generator calculates and stores the distance to the object, the convergence angle, and the like. The depth information generator 20 includes a parallax matching unit 25 and a depth information storage 26.
The parallax matching unit 25 calculates the object distance and the convergence angle of all the objects in the images taken by the cameras 14 and 15 in FIG. 1 . The images taken by the cameras 14 and 15 are input via the bus 31.
The object distance is the distance from the cameras 14 and 15 to the object, in other words, the depth information of the object. The convergence angle is an angle formed by the sight lines from both eyes at the object to be seen. This convergence angle can be calculated from, for example, the object distance and the position information of the object on the image.
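As an illustrative sketch of that relationship (not part of the patent text; the planar geometry, the roughly 6 cm baseline mentioned later in the description, and the function names are assumptions), the convergence angle for an object in front of the viewer can be approximated from the baseline, the object distance, and the object's lateral offset:

```python
import math

def convergence_angle_deg(object_distance_m: float,
                          lateral_offset_m: float = 0.0,
                          baseline_m: float = 0.06) -> float:
    """Approximate convergence angle (degrees) formed by the sight lines of
    the two eyes at an object object_distance_m ahead of the viewer, offset
    laterally by lateral_offset_m from the midpoint between the eyes."""
    left_eye = math.atan2(lateral_offset_m + baseline_m / 2, object_distance_m)
    right_eye = math.atan2(baseline_m / 2 - lateral_offset_m, object_distance_m)
    return math.degrees(left_eye + right_eye)

# A near object subtends a larger convergence angle than a distant one.
print(convergence_angle_deg(0.5))  # ~6.9 degrees
print(convergence_angle_deg(5.0))  # ~0.7 degrees
```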
The depth information storage 26 is composed of a semiconductor memory exemplified by, for example, a flash memory, and stores depth information for a right eye (right-eye depth information) and depth information for a left eye (left-eye depth information). The right-eye depth information is the information obtained by adding the object distance and the convergence angle calculated by the parallax matching unit 25 to the image taken by the camera 14. The left-eye depth information is the information obtained by adding the object distance and the convergence angle calculated by the parallax matching unit 25 to the image taken by the camera 15.
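One way to picture the stored per-eye records is the following sketch; the field names and array layouts are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EyeDepthInfo:
    """Depth information stored for one eye: the captured camera frame plus
    the object distance and convergence angle computed for each object."""
    image: np.ndarray                   # frame from camera 14 (right) or camera 15 (left)
    object_distance_m: np.ndarray       # distance per object, in meters
    convergence_angle_deg: np.ndarray   # convergence angle per object, in degrees

# The depth information storage 26 would then hold one such record per eye,
# e.g. {"right": EyeDepthInfo(...), "left": EyeDepthInfo(...)}.
```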
The shielded region calculator 21 that constitutes a first shielded region calculator and a second shielded region calculator calculates a shielded region of the CG image for a right eye (right-eye CG image) and a shielded region of the CG image for a left eye (left-eye CG image) based on the right-eye depth information and the left-eye depth information stored in the depth information storage 26 and the distance information of the CG image to be displayed. The shielded region is the region in which a part of the CG image is not displayed in order to express the occlusion.
The AR image generator 22 which is an image generator generates the right-eye CG image and the left-eye CG image based on the shielded region of the right-eye CG image and the shielded region of the left-eye CG image calculated by the shielded region calculator 21 and the image generation data.
The distance information of the CG image described above is the information indicating the distance and position of the CG image to be displayed. The image generation data is the data necessary for generating the CG image to be displayed, for example, data such as the shape and color scheme of the CG image.
The distance information and image generation data are provided by, for example, an application input from the outside. The application is acquired through, for example, the communication interface 30 in the control processor 16. Alternatively, it is also possible to provide a memory (not shown) in the information processor 19 and store the application input from the outside in the memory in advance.
The AR display unit 23 which is the display is an optical system that projects and draws the right-eye CG image and the left-eye CG image generated by the AR image generator 22 on the lenses 12 and 13, respectively.
<Operation Example of Head Mounted Display Apparatus>
Next, the operation of the head mounted display apparatus 10 will be described.
First, the cameras 14 and 15 take images. The images taken by the cameras 14 and 15 are input to the parallax matching unit 25 in the depth information generator 20. The parallax matching unit 25 calculates the object distance and the convergence angle from the images taken by the cameras 14 and 15.
The object distance is obtained by calculating the difference in position between the images taken by the cameras 14 and 15 by means of triangulation or the like by the parallax matching unit 25. Alternatively, instead of calculating the distance to the object based on the images of the cameras 14 and 15 constituting the stereo camera, for example, it is also possible to newly provide a distance sensor (not shown) or the like in the information processor 19, and measure the distance to the object by the distance sensor.
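A minimal sketch of the triangulation step, assuming rectified stereo frames, a focal length expressed in pixels, and a camera baseline equal to the spacing of the cameras 14 and 15 (the concrete numbers below are invented for illustration):

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float,
                         baseline_m: float) -> np.ndarray:
    """Convert a per-pixel disparity map from a rectified stereo pair into an
    object-distance map (meters): depth = focal_length * baseline / disparity."""
    depth = np.full(disparity_px.shape, np.inf, dtype=np.float64)
    valid = disparity_px > 0            # zero disparity: no match / at infinity
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Example: a 4 px disparity with a 700 px focal length and a 6 cm baseline
# corresponds to an object roughly 10.5 m away.
print(depth_from_disparity(np.array([[4.0]]), 700.0, 0.06))
```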
The convergence angle is calculated by the parallax matching unit 25 based on the calculated object distance and the position of the object. These calculations are performed for each object in the images taken by the cameras 14 and 15.
Thereafter, the parallax matching unit 25 adds the object distance and the convergence angle calculated from the image of the camera 14 to the image taken by the camera 14. Similarly, the parallax matching unit 25 adds the object distance and the convergence angle calculated from the image of the camera 15 to the image taken by the camera 15.
Then, the parallax matching unit 25 stores the image of the camera 14 to which the distance and the convergence angle have been added in the depth information storage 26 as the right-eye depth information described above, and stores the image of the camera 15 to which the distance and the convergence angle have been added in the depth information storage 26 as the left-eye depth information described above.
The shielded region calculator 21 acquires the distance to the object and the convergence angle from the right-eye depth information and the left-eye depth information stored in the depth information storage 26. Thereafter, based on the acquired distance to the object, the convergence angle, and the distance information of the CG image provided from the application, the shielded region calculator 21 calculates the region where the CG image displayed for the right eye is shielded by the real object and the region where the CG image displayed for the left eye is shielded by the real object.
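The per-eye comparison can be pictured with the following sketch, assuming the real-scene depth map and the CG image footprint are already expressed in the same image coordinates (the function and array names are illustrative, not from the patent):

```python
import numpy as np

def shielded_region(real_depth_m: np.ndarray,
                    cg_mask: np.ndarray,
                    cg_distance_m: float) -> np.ndarray:
    """Pixels where the CG image would be drawn (cg_mask) but a real object
    lies closer to the viewer than the CG image's display distance; these
    pixels must not be rendered so that the real object stays in front."""
    return cg_mask & (real_depth_m < cg_distance_m)

def apply_occlusion(cg_rgba: np.ndarray, shielded: np.ndarray) -> np.ndarray:
    """Blank out the shielded pixels (alpha = 0) before projection, leaving
    the see-through view of the real object visible in front of the CG image."""
    out = cg_rgba.copy()
    out[shielded, 3] = 0
    return out

# Run once with the right-eye depth map (camera 14) and once with the
# left-eye depth map (camera 15) to obtain the two per-eye CG images.
```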
The information of the shielded region calculated by the shielded region calculator 21 is output to the AR image generator 22. The AR image generator 22 generates the CG image to be superimposed on a real scene, based on the image generation data provided from the application.
At that time, the AR image generator 22 generates the CG image in which the region expressing the occlusion is not displayed, based on the information of the shielded region calculated by the shielded region calculator 21.
The right-eye CG image and the left-eye CG image generated by the AR image generator 22 are output to the AR display unit 23. The AR display unit 23 projects the input right-eye CG image onto the lens 12 and the input left-eye CG image onto the lens 13, respectively.
Consequently, the right-eye CG image in which the occlusion optimum for the field of view of the right eye is expressed and the left-eye CG image in which the occlusion optimum for the field of view of the left eye is expressed are displayed on the lenses 12 and 13, respectively. As a result, it is possible to provide the AR image in which faithful occlusion is expressed.
<Display of CG Image in Consideration of Binocular Vision>
FIG. 4 is an explanatory diagram showing an example of an AR display by the head mounted display apparatus 10 in FIG. 1 . FIG. 5 is an explanatory diagram showing an example of image processing in which the binocular vision is taken into consideration. FIG. 6 is an explanatory diagram showing an example of the convergence angle. FIG. 7 is an explanatory diagram showing an example of the difference in vision between the right eye and the left eye.
FIG. 4 shows an example of the AR display in the state where there is a real object 201 behind a real object 200 and a CG image 50 is placed on the real object 201.
The right eye and the left eye of a human are separated by about 6 cm, and thus the scenes seen by the left and right eyes are slightly different from each other. The amount of this deviation differs depending on the distance of the object. Therefore, when performing the AR display in consideration of binocular vision, it is necessary to grasp how the real object appears to each of the left and right eyes of the user.
Further, as shown in FIG. 6 , with respect to the convergence angle, the convergence angle θ2 when an object 203 at a close position is seen is larger than the convergence angle θ1 when the object 203 at a distant position is seen. As shown in FIG. 7 , as the convergence angle changes, the region of the object 203 that each of the right eye and the left eye sees also changes, and thus the change of the shielded region of the CG image caused by the difference in the visible region due to the convergence angle also needs to be taken into consideration.
Thus, as described above, the images are taken by the camera 14 provided such that the center of the lens is substantially the same as the center of the user's right eye and the camera 15 provided such that the center of the lens is substantially the same as the center of the user's left eye. Consequently, it is possible to capture the images that are almost the same as the visions of the real object by the left and right eyes of the user.
As described above, when the user sees the real objects 200 and 201 in FIG. 4, the way the real object 201 is hidden differs between the left and right eyes. Therefore, the cameras 14 and 15 capture the real objects 200 and 201 in FIG. 4 such that images substantially similar to those captured by the left and right eyes are acquired.
For example, when the right eye of the human captures the real objects 200 and 201 in FIG. 4 , the real object 201 is seen such that the left side of the real object 201 is mainly hidden by the real object 200. When the left eye captures it, the real object 201 is seen such that the right side of the real object 201 is mainly hidden by the real object 200.
The images taken by the cameras 14 and 15 are also the same as the images captured by the left and right eyes of the human, and as shown in the upper part of FIG. 5 , the left side of the real object 201 is mainly hidden by the real object 200 in the image captured by the camera 14 for the right eye, and the right side of the real object 201 is mainly hidden by the real object 200 in the image captured by the camera 15 for the left eye.
Then, as shown in the lower part of FIG. 5 , the shielded region of the CG image 50 to be superimposed is calculated from the image acquired by the camera 14 and the shielded region of the CG image 50 to be superimposed is calculated from the image captured by the camera 15, and the right-eye CG image and the left-eye CG image are individually generated and displayed.
In this way, by respectively generating and displaying the right-eye CG image and the left-eye CG image in consideration of the shielding relationship in the left and right visual fields, the user can view the AR image in which the faithful occlusion is expressed.
As described above, it is possible to provide a natural AR image that does not give a sense of discomfort to the user.
<Another Example of CG Image Generation>
Further, the CG images may be displayed on the lenses 12 and 13 after the resolution of the CG image, in other words, its blur amount, is adjusted in accordance with the distance information of the CG image provided from the application. This makes it possible to display a more natural CG image.
In this case, the depth information storage 26 stores the blur amount information including the distance of the CG image and the blur amount associated with the distance. The AR image generator 22 acquires the distance of the CG image based on the distance information of the CG image provided from the application. Then, the AR image generator 22 searches for the blur amount information stored in the depth information storage 26, and extracts the blur amount that matches or is close to the blur amount corresponding to the acquired distance.
Thereafter, the AR image generator 22 performs the blurring process to the CG image based on the extracted blur amount. The blur information indicates, for example, the degree of blur of the outline of the CG image, and the process of blurring the outline of the CG image is performed based on the blur information.
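A minimal sketch of that lookup-and-blur step, assuming the blur amount is interpreted as a Gaussian sigma applied to the CG image's alpha channel so that only the outline is softened; the table values and that interpretation are assumptions, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical blur-amount information: (display distance in meters, blur sigma in pixels).
BLUR_TABLE = [(0.5, 0.0), (2.0, 0.5), (5.0, 1.5), (20.0, 3.0)]

def blur_amount_for_distance(distance_m: float) -> float:
    """Pick the blur amount whose stored distance is closest to the display
    distance of the CG image provided from the application."""
    return min(BLUR_TABLE, key=lambda entry: abs(entry[0] - distance_m))[1]

def blur_outline(cg_rgba: np.ndarray, distance_m: float) -> np.ndarray:
    """Soften the CG image outline by blurring its alpha channel with the
    sigma looked up for the display distance."""
    sigma = blur_amount_for_distance(distance_m)
    out = cg_rgba.astype(np.float32).copy()
    if sigma > 0:
        out[..., 3] = gaussian_filter(out[..., 3], sigma=sigma)
    return out
```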
In this manner, since the CG image can be blurred in the same way as a distant view, the sense of discomfort caused by a CG image that is displayed at a distance yet appears sharply rendered can be eliminated, so that a more natural AR image can be provided to the user.
Further, by generating the CG image using the blur amount information in which the size of the CG image and the like are taken into consideration instead of determining the blur amount using only the distance information of the CG image, it is possible to provide an AR image having a natural blur that fits in a distant view as compared with the case of using only the distance information.
In this case, the blur amount information is the information having the display distance and size of the CG image and the blur amount associated with the distance and size of the CG image. The AR image generator 22 acquires the distance of the CG image based on the distance information of the CG image provided from the application, and similarly acquires the size information of the CG image based on the image generation data of the CG image provided from the application.
Then, the AR image generator 22 searches for the blur amount information stored in the depth information storage 26, and extracts the blur amount that matches or is close to the blur amount corresponding to the acquired distance and size. Then, based on the extracted blur information, the AR image generator 22 performs a process of blurring the outline of the CG image.
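The same lookup can be sketched with a key of both distance and size; the table entries and the equal weighting used to pick the nearest entry are assumptions for illustration:

```python
# Hypothetical table keyed by (display distance in meters, apparent size in pixels).
BLUR_TABLE_2D = [((0.5, 400), 0.0), ((2.0, 200), 0.5),
                 ((5.0, 100), 1.5), ((20.0, 40), 3.0)]

def blur_amount_for(distance_m: float, size_px: float) -> float:
    """Pick the blur amount of the entry closest to the requested distance
    and size, using an arbitrary equal weighting of the relative differences."""
    def score(entry):
        (d, s), _ = entry
        return (abs(d - distance_m) / max(distance_m, 1e-6)
                + abs(s - size_px) / max(size_px, 1e-6))
    return min(BLUR_TABLE_2D, key=score)[1]
```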
In the foregoing, the present invention has been specifically described based on the embodiment, but it is needless to say that the present invention is not limited to the embodiment described above and can be variously modified within the range not departing from the gist thereof.
REFERENCE SIGNS LIST
    • 10 head mounted display apparatus
    • 11 chassis
    • 12 lens
    • 13 lens
    • 14 camera
    • 15 camera
    • 16 control processor
    • 17 operation input unit
    • 18 controller
    • 19 information processor
    • 20 depth information generator
    • 21 shielded region calculator
    • 22 AR image generator
    • 23 AR display unit
    • 25 parallax matching unit
    • 26 depth information storage
    • 30 communication interface
    • 50 CG image

Claims (5)

The invention claimed is:
1. A head mounted display apparatus comprising:
a first lens configured to display a CG image for a right eye;
a second lens configured to display a CG image for a left eye;
a first camera configured to capture an image for the right eye;
a second camera configured to capture an image for the left eye;
an information processor configured to generate the CG image for the right eye in which occlusion at a time of seeing by the right eye is expressed and the CG image for the left eye in which occlusion at a time of seeing by the left eye is expressed, based on the images captured by the first camera and the second camera, project the generated CG image for the right eye onto the first lens, and project the generated CG image for the left eye onto the second lens; and
a communication interface configured to acquire an application which provides image generation data used for generation of the CG image,
wherein the information processor includes:
a first information generator configured to generate occlusion information indicating a shielding relationship between the CG image to be displayed on the first lens and a real environment based on the images captured by the first camera and the second camera;
a second information generator configured to generate occlusion information indicating a shielding relationship between the CG image to be displayed on the second lens and the real environment based on the images captured by the first camera and the second camera;
a first shielded region calculator configured to calculate a shielded region in which the CG image displayed on the first lens is shielded by an object when the CG image is seen by the right eye, based on the occlusion information generated by the first information generator;
a second shielded region calculator configured to calculate a shielded region in which the CG image displayed on the second lens is shielded by the object when the CG image is seen by the left eye, based on the occlusion information generated by the second information generator;
an image generator configured to generate the CG image for the right eye in which the shielded region calculated by the first shielded region calculator is not displayed and the CG image for the left eye in which the shielded region calculated by the second shielded region calculator is not displayed, based on the CG image generation data provided from the application; and
a display configured to project the CG image for the right eye and the CG image for the left eye generated by the image generator onto the first lens and the second lens, respectively,
wherein the occlusion information generated by the first information generator and the second information generator includes a blur amount set in accordance with distance information provided from the application, and
wherein the image generator performs a process of blurring an outline of the CG image for the right eye in which the shielded region is not displayed and an outline of the CG image for the left eye in which the shielded region is not displayed, based on the blur amount acquired from the occlusion information.
2. The head mounted display apparatus according to claim 1,
wherein the image generation data is data indicating a shape, color or size of the CG image.
3. The head mounted display apparatus according to claim 1,
wherein the occlusion information generated by the first information generator and the second information generator includes an object distance indicating a distance from the first camera and the second camera to the object and a convergence angle which is an angle at which sight lines from both eyes intersect at the object when the object is seen by both eyes.
4. The head mounted display apparatus according to claim 3,
wherein the image generator calculates the shielded regions for the CG image for the right eye and the CG image for the left eye based on the object distance and the convergence angle included in the occlusion information.
5. The head mounted display apparatus according to claim 1,
wherein a center of a lens of the first camera is provided at the same position as a center of the first lens, and
wherein a center of a lens of the second camera is provided at the same position as a center of the second lens.
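As an illustration of the occlusion handling recited in claims 1, 3, and 4, the Python sketch below computes a convergence angle from the object distance and an assumed interpupillary distance, and hides the CG pixels that a nearer real object would shield from one eye. The function names, the fixed interpupillary distance of 0.064 m, and the simple per-pixel depth comparison are assumptions made for the example; they are not taken from the patent text.

import math
import numpy as np

def convergence_angle_rad(object_distance_m: float, ipd_m: float = 0.064) -> float:
    """Angle at which the lines of sight from both eyes intersect at the object."""
    return 2.0 * math.atan2(ipd_m / 2.0, object_distance_m)

def mask_shielded_region(cg_rgba: np.ndarray,
                         real_depth_m: np.ndarray,
                         cg_depth_m: float) -> np.ndarray:
    """Hide CG pixels that a nearer real object would occlude for this eye."""
    out = cg_rgba.copy()
    shielded = real_depth_m < cg_depth_m  # real object is in front of the CG object
    out[shielded, 3] = 0.0                # the shielded region is not displayed
    return out

# Example: a real object at 1.2 m covers the left third of a CG object placed at 2.0 m.
depth = np.full((8, 8), 5.0)
depth[:, :3] = 1.2
cg = np.ones((8, 8, 4), dtype=np.float32)
right_eye_cg = mask_shielded_region(cg, depth, cg_depth_m=2.0)
print(f"convergence angle at 2 m: {math.degrees(convergence_angle_rad(2.0)):.2f} degrees")

Repeating the masking step once per eye, each with its own camera image and depth estimate, yields the separate right-eye and left-eye CG images described in claim 1.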
US18/150,431 2018-12-04 2023-01-05 Head mounted display apparatus Active US11956415B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/150,431 US11956415B2 (en) 2018-12-04 2023-01-05 Head mounted display apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/JP2018/044569 WO2020115815A1 (en) 2018-12-04 2018-12-04 Head-mounted display device
US202117299114A 2021-06-02 2021-06-02
US18/150,431 US11956415B2 (en) 2018-12-04 2023-01-05 Head mounted display apparatus

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2018/044569 Continuation WO2020115815A1 (en) 2018-12-04 2018-12-04 Head-mounted display device
US17/299,114 Continuation US11582441B2 (en) 2018-12-04 2018-12-04 Head mounted display apparatus

Publications (2)

Publication Number Publication Date
US20230156176A1 (en) 2023-05-18
US11956415B2 (en) 2024-04-09

Family

ID=70975335

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/299,114 Active US11582441B2 (en) 2018-12-04 2018-12-04 Head mounted display apparatus
US18/150,431 Active US11956415B2 (en) 2018-12-04 2023-01-05 Head mounted display apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/299,114 Active US11582441B2 (en) 2018-12-04 2018-12-04 Head mounted display apparatus

Country Status (4)

Country Link
US (2) US11582441B2 (en)
JP (2) JP7148634B2 (en)
CN (1) CN113170090A (en)
WO (1) WO2020115815A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020115815A1 (en) * 2018-12-04 2020-06-11 Maxell, Ltd. Head-mounted display device
CN116583763A (en) * 2020-12-10 2023-08-11 麦克赛尔株式会社 Portable terminal and electronic glasses
US20240062483A1 (en) * 2022-08-22 2024-02-22 Samsung Electronics Co., Ltd. Method and device for direct passthrough in video see-through (vst) augmented reality (ar)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10112831A (en) 1996-10-07 1998-04-28 Minolta Co Ltd Display method and display device for real space image and virtual space image
JPH10327433A (en) 1997-05-23 1998-12-08 Minolta Co Ltd Display device for composted image
US20060170652A1 (en) 2005-01-31 2006-08-03 Canon Kabushiki Kaisha System, image processing apparatus, and information processing method
JP2006209664A (en) 2005-01-31 2006-08-10 Canon Inc System, image processor and image processing method
US20120206452A1 (en) 2010-10-15 2012-08-16 Geisner Kevin A Realistic occlusion for a head mounted augmented reality display
US20130011045A1 (en) * 2011-07-07 2013-01-10 Samsung Electronics Co., Ltd. Apparatus and method for generating three-dimensional (3d) zoom image of stereo camera
US20130141419A1 (en) 2011-12-01 2013-06-06 Brian Mount Augmented reality with realistic occlusion
US20140184496A1 (en) * 2013-01-03 2014-07-03 Meta Company Extramissive spatial imaging digital eye glass apparatuses, methods and systems for virtual or augmediated vision, manipulation, creation, or interaction with objects, materials, or other entities
US20140333665A1 (en) * 2013-05-10 2014-11-13 Roger Sebastian Sylvan Calibration of eye location
US20150156481A1 (en) * 2009-03-19 2015-06-04 Real Time Companies Heads up display (hud) sensor system
JP2015158862A (en) 2014-02-25 2015-09-03 株式会社ソニー・コンピュータエンタテインメント Image generation device, image generation method, program, and computer readable information storage medium
US20150356785A1 (en) * 2014-06-06 2015-12-10 Canon Kabushiki Kaisha Image synthesis method and image synthesis apparatus
US20180275410A1 (en) 2017-03-22 2018-09-27 Magic Leap, Inc. Depth based foveated rendering for display systems
JP2018173789A (en) 2017-03-31 2018-11-08 富士通株式会社 Information processing device
US10306217B2 (en) * 2015-04-17 2019-05-28 Seiko Epson Corporation Display device, control method for display device, and computer program
US10623628B2 (en) * 2016-09-27 2020-04-14 Snap Inc. Eyewear device input mechanism
US20200368616A1 (en) * 2017-06-09 2020-11-26 Dean Lindsay DELAMONT Mixed reality gaming system
US11582441B2 (en) * 2018-12-04 2023-02-14 Maxell, Ltd. Head mounted display apparatus

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6045229A (en) 1996-10-07 2000-04-04 Minolta Co., Ltd. Method and apparatus for displaying real space and virtual space images
JPH10112831A (en) 1996-10-07 1998-04-28 Minolta Co Ltd Display method and display device for real space image and virtual space image
JPH10327433A (en) 1997-05-23 1998-12-08 Minolta Co Ltd Display device for composite image
US6084557A (en) * 1997-05-23 2000-07-04 Minolta Co., Ltd. System for displaying combined imagery
US20060170652A1 (en) 2005-01-31 2006-08-03 Canon Kabushiki Kaisha System, image processing apparatus, and information processing method
JP2006209664A (en) 2005-01-31 2006-08-10 Canon Inc System, image processor and image processing method
US20150156481A1 (en) * 2009-03-19 2015-06-04 Real Time Companies Heads up display (hud) sensor system
US20120206452A1 (en) 2010-10-15 2012-08-16 Geisner Kevin A Realistic occlusion for a head mounted augmented reality display
US20130011045A1 (en) * 2011-07-07 2013-01-10 Samsung Electronics Co., Ltd. Apparatus and method for generating three-dimensional (3d) zoom image of stereo camera
US20130141419A1 (en) 2011-12-01 2013-06-06 Brian Mount Augmented reality with realistic occlusion
US20140184496A1 (en) * 2013-01-03 2014-07-03 Meta Company Extramissive spatial imaging digital eye glass apparatuses, methods and systems for virtual or augmediated vision, manipulation, creation, or interaction with objects, materials, or other entities
US20140333665A1 (en) * 2013-05-10 2014-11-13 Roger Sebastian Sylvan Calibration of eye location
JP2015158862A (en) 2014-02-25 2015-09-03 株式会社ソニー・コンピュータエンタテインメント Image generation device, image generation method, program, and computer readable information storage medium
US9563982B2 (en) 2014-02-25 2017-02-07 Sony Corporation Image generating device, image generating method, program, and computer-readable information storage medium
US20150356785A1 (en) * 2014-06-06 2015-12-10 Canon Kabushiki Kaisha Image synthesis method and image synthesis apparatus
JP2015230695A (en) 2014-06-06 2015-12-21 キヤノン株式会社 Information processing device and information processing method
US10306217B2 (en) * 2015-04-17 2019-05-28 Seiko Epson Corporation Display device, control method for display device, and computer program
US10623628B2 (en) * 2016-09-27 2020-04-14 Snap Inc. Eyewear device input mechanism
US20180275410A1 (en) 2017-03-22 2018-09-27 Magic Leap, Inc. Depth based foveated rendering for display systems
JP2018173789A (en) 2017-03-31 2018-11-08 富士通株式会社 Information processing device
US20200368616A1 (en) * 2017-06-09 2020-11-26 Dean Lindsay DELAMONT Mixed reality gaming system
US11582441B2 (en) * 2018-12-04 2023-02-14 Maxell, Ltd. Head mounted display apparatus

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
International Search Report issued in corresponding International Patent Application No. PCT/JP2018/044569, dated Feb. 26, 2019, with English translation.
Notice of Allowance dated Oct. 27, 2022 in U.S. Appl. No. 17/299,114.
Notice of Reasons for Refusal dated Nov. 14, 2023 issued in the corresponding Japanese Patent Application No. 2022-151752, with English machine translation.
Office Action dated Aug. 2, 2022 in U.S. Appl. No. 17/299,114.
U.S. PTO Notice of Allowance issued in related parent U.S. Appl. No. 17/299,114 dated Oct. 27, 2022.
U.S. PTO Office Action issued in related parent U.S. Appl. No. 17/299,114 dated Aug. 2, 2022.

Also Published As

Publication number Publication date
JP2022183177A (en) 2022-12-08
WO2020115815A1 (en) 2020-06-11
US20230156176A1 (en) 2023-05-18
JP7148634B2 (en) 2022-10-05
CN113170090A (en) 2021-07-23
JPWO2020115815A1 (en) 2021-09-30
US11582441B2 (en) 2023-02-14
US20220060680A1 (en) 2022-02-24

Similar Documents

Publication Publication Date Title
US11956415B2 (en) Head mounted display apparatus
US20230269358A1 (en) Methods and systems for multiple access to a single hardware data stream
EP3712840A1 (en) Method and system for generating an image of a subject in a scene
US8780178B2 (en) Device and method for displaying three-dimensional images using head tracking
KR101661991B1 (en) Hmd device and method for supporting a 3d drawing with a mobility in the mixed space
CN114945853A (en) Compensation for distortion in head mounted display systems
JP2006262397A (en) Stereoscopic vision image displaying device and method, and computer program
CN108259883B (en) Image processing method, head-mounted display, and readable storage medium
WO2014128748A1 (en) Calibration device, calibration program, and calibration method
KR20150088355A (en) Apparatus and method for stereo light-field input/ouput supporting eye-ball movement
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
JP4580678B2 (en) Gaze point display device
WO2014128750A1 (en) Input/output device, input/output program, and input/output method
JP6509101B2 (en) Image display apparatus, program and method for displaying an object on a spectacle-like optical see-through type binocular display
JP6446465B2 (en) I / O device, I / O program, and I / O method
US11627303B2 (en) System and method for corrected video-see-through for head mounted displays
CN109255838A (en) Augmented reality is avoided to show the method and apparatus of equipment viewing ghost image
KR101172507B1 (en) Apparatus and Method for Providing 3D Image Adjusted by Viewpoint
CN115202475A (en) Display method, display device, electronic equipment and computer-readable storage medium
JP5402070B2 (en) Image presentation system, image processing apparatus, program, and image presentation method
JP6479835B2 (en) I / O device, I / O program, and I / O method
JPWO2017191703A1 (en) Image processing device
US20190114838A1 (en) Augmented reality system and method for providing augmented reality
TW202332267A (en) Display system with machine learning (ml) based stereoscopic view synthesis over a wide field of view
JP6479836B2 (en) I / O device, I / O program, and I / O method

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: MAXELL, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAWAMAE, OSAMU;SHIMADA, KENICHI;KATSUKI, MANABU;AND OTHERS;SIGNING DATES FROM 20210518 TO 20210616;REEL/FRAME:063919/0739

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE