WO2020215960A1 - Method and apparatus for determining a gaze area, and wearable device - Google Patents
Method and apparatus for determining a gaze area, and wearable device
- Publication number
- WO2020215960A1 (PCT/CN2020/080961)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- area
- eye
- virtual
- display screen
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Definitions
- The embodiments of the present disclosure relate to a method, an apparatus, and a wearable device for determining a gaze area.
- Virtual reality (VR) technology presents a three-dimensional environment (i.e., a virtual scene) and provides users with a sense of immersion through that environment.
- Wearable devices using VR technology can process the images displayed on their screens so that the part of the image the user is gazing at is presented in high definition, while the other parts are presented in non-high definition.
- The related art provides a method for determining the gaze area, which can be used to determine the part of the image that the user is gazing at.
- In that method, the left-eye gaze area and the right-eye gaze area are determined separately according to the gaze point information of the user's left eye and the gaze point information of the user's right eye.
- The wearable device includes a first display component and a second display component.
- The first display component includes a first display screen and a first lens located on the light exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light exit side of the second display screen.
- The method includes: obtaining the gaze point of a first target eye on the first display screen; determining a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being the area of the three-dimensional environment presented by the wearable device that is located within the visible range of the first target eye; determining a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye; and determining the target virtual area as the area of the three-dimensional environment that is located within the visible range of the second target eye.
- Determining the target virtual area according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determining the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- Determining the second target virtual image according to the target virtual area and the position of the second target eye includes: determining the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and determining the part of the second virtual image within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
- Determining the first target virtual image according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determining the part of the first virtual image within the visible range of the first target eye as the first target virtual image.
- Determining, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen includes: acquiring a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determining the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
- Various embodiments of the present disclosure provide an apparatus for determining a gaze area, which is suitable for a wearable device including a first display component and a second display component, where the first display component includes a first display screen and a first lens located on the light exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light exit side of the second display screen. The gaze area determining apparatus includes:
- an acquiring module configured to acquire the gaze point of the first target eye on the first display screen;
- a first determining module configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye;
- a second determining module configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the virtual image, among the first virtual image formed by the first lens from the image displayed on the first display screen, that is within the visible range of the first target eye;
- a third determining module configured to determine the target virtual area as the area of the three-dimensional environment presented by the wearable device that is within the visible range of the second target eye, where the second target eye is whichever of the left eye and the right eye is not the first target eye;
- a fourth determining module configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the virtual image, among the second virtual image formed by the second lens from the image displayed on the second display screen, that is within the visible range of the second target eye;
- a fifth determining module configured to determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- The first determining module is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- The fourth determining module is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image within the visible range of the second target eye as the second target virtual image.
- The second determining module is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image within the visible range of the first target eye as the first target virtual image.
- A wearable device is provided, including an image acquisition component, a first display component, and a second display component.
- The first display component includes a first display screen and a first lens located on the light exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light exit side of the second display screen; the wearable device also includes any one of the above-described gaze area determining apparatuses.
- An apparatus for determining a gaze area is also provided, suitable for a wearable device.
- The wearable device includes a first display component and a second display component.
- The first display component includes a first display screen and a first lens located on the light exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light exit side of the second display screen.
- The gaze area determining apparatus includes: a processor; and a memory configured to store instructions executable by the processor. When the instructions are executed by the processor, the processor is configured to perform any one of the above-described methods for determining a gaze area.
- Various embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, any one of the above-mentioned methods for determining a gaze area is implemented.
- FIG. 1 is a schematic diagram of a left-eye high-definition image and a right-eye high-definition image determined by a method for determining a gaze area in the related art;
- FIG. 2 is a schematic structural diagram of a wearable device according to an embodiment of the present disclosure;
- FIG. 3 is a schematic diagram of a human eye viewing an image on a display screen through a lens according to an embodiment of the present disclosure;
- FIG. 4 is a flowchart of a method for determining a gaze area provided by an embodiment of the present disclosure;
- FIG. 5 is a flowchart of another method for determining a gaze area provided by an embodiment of the present disclosure;
- FIG. 6 is a flowchart of a method for determining a target virtual area according to the gaze point and the field angle of the first target eye according to an embodiment of the present disclosure;
- FIG. 7 is a schematic diagram of the visible range of a first target eye according to an embodiment of the present disclosure;
- FIG. 8 is a flowchart of a method for determining a first target virtual image according to the gaze point and the field angle of the first target eye according to an embodiment of the present disclosure;
- FIG. 9 is a flowchart of a method for determining a second target virtual image according to the target virtual area and the position of the second target eye according to an embodiment of the present disclosure;
- FIG. 10 is a schematic diagram of determining a gaze area provided by an embodiment of the present disclosure;
- FIG. 11 is a block diagram of a device for determining a gaze area provided by an embodiment of the present disclosure;
- FIG. 12 is a schematic structural diagram of a wearable device provided by an embodiment of the present disclosure.
- VR technology uses wearable devices to isolate the user's vision, and even hearing, from the outside world, guiding the user to feel present in a virtual three-dimensional environment.
- The display principle is that the display screens corresponding to the left eye and the right eye respectively display the images intended for each eye. Because of the parallax between the two eyes, the brain produces a close-to-real three-dimensional effect after receiving the slightly different images from the two eyes.
- VR technology is usually implemented by a VR system.
- the VR system may include a wearable device and a VR host, where the VR host may be integrated in the wearable device, or an external device that can be wired or wirelessly connected to the wearable device.
- the VR host is used to render the image and send the rendered image to the wearable device, and the wearable device is used to receive and display the rendered image.
- Eye tracking, also known as gaze tracking, is a technology that collects images of the human eye, analyzes the eye movement information in them, and, based on that information, determines the gaze point of the eye on the display screen. Further, from the determined gaze point of the eye on the display screen, the gaze area of the eye on the display screen can be determined.
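As a rough sketch of the principle only (not any real tracker's algorithm; the threshold value and the centroid heuristic are illustrative assumptions), the simplest possible gaze estimate takes the centroid of the darkest pixels in a grayscale eye image, which roughly tracks the pupil:

```python
import numpy as np

def pupil_center(eye_gray: np.ndarray, threshold: int = 40):
    """Centroid of the darkest pixels in a grayscale eye image; a toy
    stand-in for the gaze point estimation a real eye tracker performs."""
    ys, xs = np.nonzero(eye_gray < threshold)
    if xs.size == 0:
        return None  # nothing below the threshold; tracking failed
    return float(xs.mean()), float(ys.mean())
```

A production tracker would map such an eye-image estimate to display coordinates through a per-user calibration, which this sketch omits.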
- SmartView is a technical solution that combines VR technology with eye tracking technology to achieve high-definition VR.
- The solution works as follows: the user's gaze area on the display screen is first tracked accurately with eye tracking technology; only the gaze area is then rendered in high definition, while other areas are rendered in non-high definition; finally, an integrated circuit (IC) processes the rendered non-high-definition image (also called the low-definition image) into a high-resolution image and displays it on the screen.
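A minimal sketch of the final composition step, assuming a grayscale low-definition render, a precomputed high-definition gaze patch, and nearest-neighbour upscaling standing in for the IC's actual processing (the function name, the scale factor, and the clamping are all illustrative assumptions):

```python
import numpy as np

def composite_foveated(low_res: np.ndarray, hi_res_patch: np.ndarray,
                       gaze_xy: tuple[int, int], scale: int = 4) -> np.ndarray:
    """Upscale the low-definition render to display resolution, then paste
    the high-definition patch over it, centred on the gaze point."""
    full = low_res.repeat(scale, axis=0).repeat(scale, axis=1)
    h, w = hi_res_patch.shape[:2]
    x, y = gaze_xy
    # Clamp so the patch stays inside the frame.
    top = min(max(0, y - h // 2), full.shape[0] - h)
    left = min(max(0, x - w // 2), full.shape[1] - w)
    full[top:top + h, left:left + w] = hi_res_patch
    return full
```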
- The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display, etc.
- Unity, also known as the Unity Engine, is a multi-platform, fully integrated professional game engine developed by Unity Technologies; it can be used to develop VR applications.
- In the related art, the eye tracking technology requires two cameras to be installed in the wearable device.
- The two cameras separately collect the left-eye and right-eye images (the eye image is also called the gaze point image), and the gaze point coordinates are calculated by the VR host based on the eye images.
- The two cameras provided in the wearable device of the VR system greatly increase the weight and cost of the wearable device, which is not conducive to the widespread adoption of the VR system.
- Moreover, this solution does not take human visual characteristics into account. Because the left eye and the right eye are located at different positions in space, they view an object from different angles, so the same object occupies different positions in the left-eye and right-eye fields of view, and the images seen by the two eyes do not completely overlap. Consequently, if the left-eye and right-eye fixation point coordinates are calculated from the left-eye and right-eye images respectively, those coordinates do not actually coincide on the display screen; and if the left-eye and right-eye fixation areas are then determined from those coordinates, it is difficult for the two fixation areas to overlap completely.
- FIG. 1 shows a left-eye high-definition image 11 and a right-eye high-definition image 12 obtained after high-definition rendering of the fixation point regions of the left eye and the right eye respectively. As can be seen from FIG. 1, only the middle parts of the left-eye high-definition image 11 and the right-eye high-definition image 12 overlap.
- The visual experience presented to the user is that, within the combined field of view of the left and right eyes, the user sees high-definition image area 13, high-definition image area 14, and high-definition image area 15.
- the high-definition image area 13 is a high-definition image area that can be seen by both left and right eyes
- the high-definition image area 14 is a high-definition image area that can be seen only by the left eye
- the high-definition image area 15 is a high-definition image area that can be seen only by the right eye.
- Because the high-definition image area 14 and the high-definition image area 15 can each be seen by only one eye, the user's viewing experience is degraded when the user views the display screens with both eyes at the same time.
- In addition, relatively obvious boundary lines appear between the non-high-definition image areas and the high-definition image areas 14 and 15, which further degrades the user's visual experience.
- Various embodiments of the present disclosure provide a method for determining a gaze area that ensures the determined gaze areas of the left and right eyes overlap, so that the user's left and right eyes view fully overlapping high-definition images, which effectively improves the user experience.
- the wearable device 20 may include a first display component 21 and a second display component 22.
- The first display component 21 includes a first display screen 211 and a first lens 212 located on the light exit side of the first display screen 211.
- the second display assembly 22 includes a second display screen 221 and a second lens 222 located on the light emitting side of the second display screen 221.
- Through the lenses (i.e., the first lens 212 and the second lens 222), the human eye observes virtual images of the displayed images.
- For example, the human eye observes through the first lens 212 the first virtual image 213 corresponding to the image displayed on the first display screen 211, and the first virtual image 213 is usually an enlarged image of the image displayed on the first display screen 211.
- The wearable device may further include an image acquisition component, which may be an eye tracking camera. The eye tracking camera is integrated around at least one of the first display screen and the second display screen of the wearable device; it collects the eye image corresponding to that display screen in real time and sends it to the VR host, and the VR host processes the eye image to determine the gaze point coordinates of the eye on the display screen.
- The device for determining the gaze area then acquires those gaze point coordinates.
- The wearable device also includes a gaze area determining device, which can be integrated into the wearable device through software or hardware, or integrated into the VR host; the gaze area determining device can be configured to execute the following method for determining a gaze area.
- FIG. 4 shows a flowchart of a method for determining a gaze area provided by an embodiment of the present disclosure.
- the method may include the following steps:
- Step S201 Obtain the gaze point of the first target eye on the first display screen, where the first target eye is the left eye or the right eye.
- Step S202 Determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye.
- Step S203 Determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye.
- Step S204 Determine the target virtual area as the area of the three-dimensional environment presented by the wearable device that is within the visible range of the second target eye, where the second target eye is whichever of the left eye and the right eye is not the first target eye.
- Step S205 Determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed by the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye.
- Step S206 Determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- In this way, the target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is then treated as the area within the visible range of the second target eye, which determines the visible range of the second target eye. From these, the first target virtual image seen by the first target eye and the second target virtual image seen by the second target eye can be determined, and hence the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- Because the first gaze area of the first target eye on the first display screen and the second gaze area of the second target eye on the second display screen are determined from the same target virtual area, the first gaze area and the second gaze area can overlap completely, which effectively improves the display effect of the image in the wearable device and enhances the user's visual experience.
- FIG. 5 shows a flow chart of a method for determining a gaze area according to another embodiment of the present disclosure.
- the method for determining a gaze area can be executed by a device for determining a gaze area and is suitable for wearable devices.
- For the structure of the wearable device, reference may be made to the wearable device shown in FIG. 2 above.
- the method for determining the gaze area may include the following steps:
- Step S301 Obtain the gaze point of the first target eye on the first display screen.
- For example, an eye tracking camera may be arranged around the first display screen of the wearable device; the eye tracking camera collects the eye image of the corresponding first target eye in real time, and the VR host determines the gaze point coordinates of the first target eye on the first display screen according to the eye image.
- The gaze area determining device then obtains the gaze point coordinates.
- Step S302 Determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye.
- Determining the target virtual area according to the gaze point and the field angle of the first target eye may include:
- Step S3021 Determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye.
- The field angle of the first target eye may be composed of a horizontal field angle and a vertical field angle, and the area within the horizontal field angle and the vertical field angle of the first target eye is the visible range of the first target eye.
- The actual field angle of the human eye is limited: generally, the maximum horizontal field angle of the human eye is 188 degrees and the maximum vertical field angle is 150 degrees. Normally, no matter how the eye rotates, its field angle remains unchanged, so the visible range of the first target eye can be determined from the gaze point of the first target eye together with its horizontal and vertical field angles.
- the viewing angles of the left eye and the right eye may be different. Considering individual differences, the viewing angles of different people may also be different, which is not limited in the embodiments of the present disclosure.
- Figure 7 schematically shows the gaze point G of the first target eye O, the horizontal field angle a and the vertical field angle b of the first target eye, and the visible range of the first target eye (i.e., the range enclosed by point O, point P, point Q, point M, and point N).
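To make the geometry concrete, the following Python sketch (all names and the pinhole-style frustum model are illustrative assumptions, not definitions from the patent) derives the four boundary ray directions of an eye's visible range from its position, its gaze point, and its horizontal and vertical field angles:

```python
import numpy as np

def fov_boundary_rays(eye_pos, gaze_point, h_fov_deg, v_fov_deg):
    """Return the four corner directions of the eye's viewing frustum.

    The gaze direction is taken as the frustum axis; the corners lie at
    the four combinations of +/- half the horizontal and vertical field
    angles. Valid for field angles below 180 degrees and a gaze direction
    that is not vertical.
    """
    forward = gaze_point - eye_pos
    forward = forward / np.linalg.norm(forward)
    # Orthonormal basis (right, up) around the gaze direction.
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)
    th = np.tan(np.radians(h_fov_deg) / 2.0)
    tv = np.tan(np.radians(v_fov_deg) / 2.0)
    corners = []
    for sx in (-1.0, 1.0):
        for sy in (-1.0, 1.0):
            d = forward + sx * th * right + sy * tv * up
            corners.append(d / np.linalg.norm(d))
    return corners
```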
- Step S3022 Determine an area within the visible range of the first target eye in the three-dimensional environment as the target virtual area.
- determining the target virtual area may include the following steps:
- Step A1 Emit at least two rays from the position of the first target eye (for ease of description, the position of the first target eye is treated as a point), the at least two rays being emitted along the boundary of the field angle of the first target eye.
- The Unity engine can emit at least two rays from the position of the first target eye (the rays are virtual rays); that is, it draws at least two rays with the position of the first target eye as the starting point, and the at least two rays can be emitted along the boundary of the field angle of the first target eye.
- a first virtual camera and a second virtual camera are respectively provided at the position of the first target eye and the position of the second target eye.
- the images seen by the user's left and right eyes through the first display screen and the second display screen in the wearable device come from the images taken by the first virtual camera and the second virtual camera, respectively.
- Because the position of the first target eye corresponds to the position of the first virtual camera in the wearable device, in the embodiments of the present disclosure the position of a target eye can be characterized by the position of its virtual camera, and the Unity engine can emit the at least two rays from the position of the first virtual camera.
- Step A2 Obtain at least two points where at least two rays come into contact with the virtual area, and use the at least two points as calibration points respectively.
- In the extending direction of the rays, the at least two rays come into contact with the three-dimensional environment presented by the wearable device, that is, the virtual area, producing contact points.
- In the Unity engine, when a ray with physical properties collides with a collider on the surface of a virtual object, the engine can determine the coordinates of the collision point, that is, the coordinates of the contact point on the surface of the virtual object.
- Step A3 Determine the area enclosed by the at least two calibration points in the virtual area as the target virtual area.
- The geometric shape of the target virtual area can be determined in advance, the at least two calibration points are connected according to that shape, and the area enclosed by the connecting lines is determined as the target virtual area.
- If there are only two calibration points and their coordinates on the surface of the virtual object differ substantially, it can be determined that the two calibration points lie near the boundary rays OQ and ON shown in FIG. 7, respectively, and the target virtual area can then be determined from the coordinates of the two calibration points together with the horizontal field angle and the vertical field angle.
- Optionally, object recognition can also be performed on the area enclosed by the connecting lines, so as to extract the valid objects in the enclosed area while ignoring the invalid objects there (for example, the sky and other background), and the area where the valid objects are located is determined as the target virtual area.
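Steps A1 to A3 can be illustrated with a small sketch that models the virtual area as a plane and replaces Unity's collider test with an analytic ray-plane intersection. It reuses fov_boundary_rays from the sketch above; every position, angle, and plane here is an invented illustration, not a value from the patent:

```python
import numpy as np

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Ray/plane intersection; a stand-in for Unity's collision query.
    Returns the contact point, or None if the ray misses the plane."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0 else None

# Illustrative scene: the virtual area is a plane 5 m in front of the eye.
eye = np.array([0.0, 0.0, 0.0])            # first target eye (first virtual camera)
gaze = np.array([0.0, 0.0, -2.0])          # gaze point mapped into scene space
area_point = np.array([0.0, 0.0, -5.0])    # a point on the virtual area's plane
area_normal = np.array([0.0, 0.0, 1.0])    # plane normal facing the eye

# Steps A1-A2: rays along the field angle boundary, contact points collected.
calibration_points = []
for d in fov_boundary_rays(eye, gaze, h_fov_deg=110.0, v_fov_deg=90.0):
    hit = intersect_plane(eye, d, area_point, area_normal)
    if hit is not None:
        calibration_points.append(hit)
# Step A3: the area enclosed by calibration_points is the target virtual area.
```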
- Step S303 Determine a first target virtual image according to the gaze point and the field angle of the first target eye.
- The first target virtual image is the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye.
- The left eye and the right eye see the first virtual image and the second virtual image through the first lens and the second lens, respectively.
- When the two eyes obtain the first virtual image and the second virtual image simultaneously, a three-dimensional image with depth is formed.
- Therefore, the first target virtual image seen by the first target eye and the second target virtual image seen by the second target eye need to be identified separately.
- The separately identified first virtual image and second virtual image may be transparent.
- Determining the first target virtual image according to the gaze point and the field angle of the first target eye may include:
- Step S3031 Determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye.
- For the implementation of step S3031, reference may be made to the related description of step S3021 above; the details are not repeated here.
- Step S3032 Determine the part of the first virtual image located in the visible range of the first target eye as the first target virtual image.
- determining the first target virtual image may include the following steps:
- Step B1 At least two rays are emitted from the position of the first target eye, and the at least two rays are respectively emitted along the boundary of the field of view of the first target eye.
- For step B1, reference may be made to the related description of step A1 above; the details are not repeated here.
- Step B1 characterizes the visible range of the first target eye by means of rays, so as to accurately determine the part of the first virtual image that constitutes the first target virtual image.
- Step B2 Acquire at least two first contact points where the at least two rays contact the first virtual image.
- In the extending direction of the rays, the at least two rays contact the first virtual image, forming at least two first contact points.
- Step B3 Determine the area enclosed by the at least two first contact points as the first target virtual image.
- As in step A3, the first target virtual image can be determined according to a predetermined geometric shape, or object recognition can be performed on the enclosed area and the recognized object determined as the first target virtual image.
- Step S304 Determine the target virtual area as the area within the field angle range of the second target eye in the three-dimensional environment presented by the wearable device.
- Determining the target virtual area, which was obtained from the gaze point and the field angle of the first target eye, as the area within the field angle range of the second target eye ensures that the gaze areas of the two eyes overlap.
- Step S305 Determine a second target virtual image according to the target virtual area and the position of the second target eye.
- The second target virtual image is the part of the second virtual image, formed by the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye.
- Determining the second target virtual image according to the target virtual area and the position of the second target eye may include:
- Step S3051 Determine the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye.
- At least two rays are emitted from the position of the second target eye, the at least two rays respectively connecting to the at least two calibration points that enclose the target virtual area.
- The spatial area enclosed by the position of the second target eye and the at least two calibration points is the visible range of the second target eye in the three-dimensional environment.
- The visible range of the second target eye in the three-dimensional environment is a sub-area of the full visible range of the second target eye.
- The Unity engine can emit the at least two rays from the position of the second target eye, that is, draw at least two line segments with the position of the second target eye as the starting point and the at least two calibration points as the end points.
- the position of the second target eye is the position of the second virtual camera in the wearable device.
- Step S3052 Determine the part of the second virtual image that is located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
- determining the second target virtual image in step S3052 may include the following steps:
- Step C1 Acquire at least two second contact points where the at least two rays contact the second virtual image. In the extending direction of the rays, the at least two rays contact the second virtual image, forming at least two second contact points.
- Step C2 Determine the area enclosed by the at least two second contact points as the second target virtual image.
- As in step B3, the second target virtual image may be determined according to a predetermined geometric shape. Alternatively, if object recognition was performed in step B3 on the area enclosed by the at least two first contact points and the recognized object was determined as the first target virtual image, then in step C2 object recognition is likewise performed on the area enclosed by the at least two second contact points, and the recognized object is determined as the second target virtual image.
- In step B3 and step C2, the same algorithm with the same parameters should be used to recognize the enclosed areas, to ensure that the recognized objects are consistent.
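Continuing the same illustrative scene (reusing intersect_plane and calibration_points from the sketch above), steps S3051 through C1 amount to casting rays from the second eye's position through each calibration point and extending them until they meet the second virtual image, here modelled as a plane. The 64 mm interpupillary offset and the plane's placement are assumptions for the sketch only:

```python
import numpy as np

eye2 = np.array([0.064, 0.0, 0.0])         # second target eye (second virtual camera)
img2_point = np.array([0.0, 0.0, -2.0])    # a point on the second virtual image plane
img2_normal = np.array([0.0, 0.0, 1.0])    # its normal, facing the eyes

second_contact_points = []
for p in calibration_points:
    d = p - eye2                           # ray through a calibration point (S3051)
    d = d / np.linalg.norm(d)
    hit = intersect_plane(eye2, d, img2_point, img2_normal)
    if hit is not None:
        second_contact_points.append(hit)  # second contact points (step C1)
# Step C2: the area enclosed by second_contact_points is the second target virtual image.
```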
- Step S306 Acquire a first corresponding area of the first target virtual image in the image displayed on the first display screen, and a second corresponding area of the second target virtual image in the image displayed on the second display screen.
- To do so, the at least two first contact points and the at least two second contact points are converted into at least two first image points in the image displayed on the first display screen and at least two second image points in the image displayed on the second display screen, respectively.
- Because of the lens, a virtual image is distorted relative to the displayed image.
- The correspondence between virtual image coordinates and image coordinates is recorded in an anti-distortion grid.
- The at least two first contact points and the at least two second contact points are virtual image coordinates located in the virtual images, while the at least two first image points and the at least two second image points are image coordinates in the images displayed on the screens (screen coordinates correspond to the image coordinates displayed on the screen). Therefore, based on the correspondence between virtual image coordinates and image coordinates in the anti-distortion grid, the at least two first contact points can be converted into at least two first image points in the image displayed on the first display screen, and the at least two second contact points can be converted into at least two second image points in the image displayed on the second display screen.
- The first corresponding area is determined according to the at least two first image points. Optionally, the area enclosed by the at least two first image points is determined as the first corresponding area, or object recognition can be performed on the enclosed area and the recognized object determined as the first corresponding area.
- The second corresponding area is determined according to the at least two second image points. Optionally, the area enclosed by the at least two second image points is determined as the second corresponding area, or object recognition can be performed on the enclosed area and the recognized object determined as the second corresponding area.
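One plausible realization of the anti-distortion lookup (representing the grid as two arrays of sampled coordinate pairs is an assumption, not the patent's specification) interpolates between the recorded virtual-image/image coordinate correspondences; scipy's griddata handles the scattered-data interpolation:

```python
import numpy as np
from scipy.interpolate import griddata

def virtual_to_screen(points_xy, grid_virtual_xy, grid_screen_xy):
    """Convert contact points from virtual image coordinates to image
    coordinates on the display by interpolating the anti-distortion grid.

    grid_virtual_xy, grid_screen_xy: (N, 2) arrays of corresponding
    coordinate pairs sampled from the lens's anti-distortion mesh.
    points_xy: (M, 2) contact points in virtual image coordinates.
    """
    sx = griddata(grid_virtual_xy, grid_screen_xy[:, 0], points_xy, method="linear")
    sy = griddata(grid_virtual_xy, grid_screen_xy[:, 1], points_xy, method="linear")
    return np.stack([sx, sy], axis=-1)
```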
- Step S307 Determine the first corresponding area as the first gaze area.
- Step S308 Determine the second corresponding area as the second gaze area.
- In summary, the target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is then treated as the area within the visible range of the second target eye, which determines the visible range of the second target eye. From these, the first target virtual image seen by the first target eye and the second target virtual image seen by the second target eye can be determined, and hence the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- Because the first gaze area of the first target eye on the first display screen and the second gaze area of the second target eye on the second display screen are determined from the same target virtual area, the first gaze area and the second gaze area can overlap completely. This solves the problem in the related art that the gaze areas of the left and right eyes are difficult to overlap completely, which leads to poor image display in wearable devices; it effectively improves the display effect of the image in the wearable device and the user's visual experience.
- Moreover, in the embodiments of the present disclosure, the gaze point of the first target eye on the display screen can be obtained with a single eye tracking camera, so a wearable device applying the provided method needs only one eye tracking camera. In the related art, by contrast, eye tracking cameras must be provided for the left eye and the right eye separately, the eye images of the two eyes are collected separately, and the gaze points of the two eyes are analyzed separately to determine their gaze areas.
- the method for determining the gaze area provided by the embodiments of the present disclosure can effectively reduce the weight and cost of the wearable device, which is beneficial to the popularization of the wearable device.
- It should be noted that step S307 and step S308 can be performed at the same time, or step S308 can be performed first and then step S307. Similarly, step S303 and step S304 can be performed simultaneously, or step S304 can be performed first and then step S303.
- The method for determining the gaze area includes the following steps:
- Step S1 Obtain the gaze point S of the first target eye 213 on the first display screen 211.
- Step S2 Determine the field angle α of the first target eye 213 according to the gaze point S.
- Here, the description takes the field angle to be the horizontal field angle as an example.
- Step S3 Emit two rays from the position of the first target eye 213 along the boundary of the field angle α of the first target eye 213, acquire the two contact points where the two rays come into contact with the virtual area 23, determine the two contact points as a first calibration point S1 and a second calibration point S2, and determine the area enclosed by the first calibration point and the second calibration point in the virtual area 23 as the target virtual area.
- Here, the area between the line connecting calibration point S1 and calibration point S2 is taken as representing the target virtual area for description.
- Step S4 Acquire the first contact point C' and the second contact point A' where the two rays emitted from the position of the first target eye 213 come into contact with the first virtual image 214, and determine the first target virtual image according to the first contact point C' and the second contact point A'.
- Here, the area between the line connecting the first contact point C' and the second contact point A' is taken as representing the first target virtual image for description.
- Step S5 Determine the target virtual area as the area of the three-dimensional environment presented by the wearable device that is located within the range of the field angle β of the second target eye 223.
- Step S6 Emit two rays from the position of the second target eye 223 through the calibration points S1 and S2, and acquire the third contact point D' and the fourth contact point B' where the two rays come into contact with the second virtual image 224; the second target virtual image is determined according to the third contact point D' and the fourth contact point B'. Here, the area between the line connecting the third contact point D' and the fourth contact point B' is taken as representing the second target virtual image for description.
- Step S7 Convert the first contact point C' into the first image point C in the image displayed on the first display screen, convert the second contact point A' into the second image point A in the image displayed on the first display screen, convert the third contact point D' into the third image point D in the image displayed on the second display screen, and convert the fourth contact point B' into the fourth image point B in the image displayed on the second display screen.
- The first gaze area is determined according to the first image point C and the second image point A.
- The second gaze area is determined according to the third image point D and the fourth image point B.
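As one concrete way to turn the mapped image points into a rectangular gaze area (the bounding-box choice is an illustrative assumption; the embodiments only require that the area be determined from the image points), the smallest axis-aligned rectangle containing the points can be used:

```python
import numpy as np

def gaze_area(image_points):
    """Smallest axis-aligned rectangle containing the image points,
    returned as (min_xy, max_xy)."""
    pts = np.asarray(image_points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

# E.g. the first gaze area from image points C and A, the second from
# image points D and B (coordinates are made up for illustration):
first_area = gaze_area([(420.0, 310.0), (600.0, 470.0)])
second_area = gaze_area([(400.0, 310.0), (580.0, 470.0)])
```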
- It should be noted that, in practice, the first virtual image 214 and the second virtual image 224 overlap, but to make the method easier to describe, the first virtual image 214 and the second virtual image 224 are shown as not overlapping.
- The calibration point S1 and the calibration point S2 used to represent the target virtual area, the gaze point S, and so on are all illustrative.
- FIG. 11 shows a gaze area determining device 30 according to an embodiment of the present disclosure.
- the gaze area determining device 30 can be applied to the wearable device shown in FIG. 2, and the gaze area determining device 30 includes:
- the obtaining module 301 is configured to obtain the gaze point of the first target eye on the first display screen, where the first target eye is a left eye or a right eye;
- the first determining module 302 is configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye;
- the second determining module 303 is configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye;
- the third determining module 304 is configured to determine the target virtual area as the area of the three-dimensional environment presented by the wearable device that is located within the visible range of the second target eye;
- the fourth determining module 305 is configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed by the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye;
- the fifth determining module 306 is configured to determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- The target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is then treated as the area within the visible range of the second target eye, which determines the visible range of the second target eye. From these, the first target virtual image seen by the first target eye and the second target virtual image seen by the second target eye can be determined.
- In this way, the first gaze area and the second gaze area can be made to overlap exactly, which solves the problem in the related art that the gaze areas of the left and right eyes are difficult to overlap completely, leading to poor image display in wearable devices; it effectively improves the display effect of the image in the wearable device and the user's visual experience.
- The first determining module 302 is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- The fourth determining module 305 is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image located within the visible range of the second target eye as the second target virtual image.
- The second determining module 303 is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
- FIG. 12 shows a schematic structural diagram of a wearable device 20 according to another embodiment of the present disclosure.
- the wearable device 20 includes a gaze area determining device 24, an image capture component 23, a first display component 21, and a second display component 22 .
- The gaze area determining device 24 may be the gaze area determining device 30 shown in FIG. 11; for the image acquisition component 23, the first display component 21, and the second display component 22, reference may be made to the foregoing introduction, which is not repeated here.
- At least one embodiment of the present disclosure also provides an apparatus for determining a gaze area.
- The apparatus for determining a gaze area is suitable for a wearable device.
- The wearable device includes a first display component and a second display component.
- The first display component includes a first display screen and a first lens located on the light exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light exit side of the second display screen.
- The gaze area determining apparatus includes: a processor; and a memory configured to store instructions executable by the processor. When the instructions are executed by the processor, the processor is configured to: obtain the gaze point of a first target eye on the first display screen; determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye; determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is within the visible range of the first target eye; determine the target virtual area as the area within the visible range of the second target eye in the three-dimensional environment presented by the wearable device; determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed by the second lens from the image displayed on the second display screen, that is within the visible range of the second target eye; and determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen.
- When the processor is configured to determine the target virtual area according to the gaze point and the field angle of the first target eye, the processor is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- When the processor is configured to determine the second target virtual image according to the target virtual area and the position of the second target eye, the processor is configured to: determine the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
- When the processor is configured to determine the first target virtual image according to the gaze point and the field angle of the first target eye, the processor is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
- When the processor is configured to determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen, the processor is configured to: acquire a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determine the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
- At least one embodiment of the present disclosure also provides a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, any one of the above methods is implemented.
Claims (12)
- 1. A method for determining a gaze area, applicable to a wearable device, the wearable device comprising a first display component and a second display component, the first display component comprising a first display screen and a first lens located on the light exit side of the first display screen, and the second display component comprising a second display screen and a second lens located on the light exit side of the second display screen, the method comprising: obtaining a gaze point of a first target eye on the first display screen; determining a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye; determining a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the part of the first virtual image, formed by the first lens from the image displayed on the first display screen, that is within the visible range of the first target eye; determining the target virtual area as the area of the three-dimensional environment presented by the wearable device that is within the visible range of a second target eye; determining a second target virtual image according to the target virtual area and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed by the second lens from the image displayed on the second display screen, that is within the visible range of the second target eye; and determining, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
- 2. The method according to claim 1, wherein determining the target virtual area according to the gaze point and the field angle of the first target eye comprises: determining the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determining the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- 3. The method according to claim 1 or 2, wherein determining the second target virtual image according to the target virtual area and the position of the second target eye comprises: determining the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and determining the part of the second virtual image within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
- 4. The method according to claim 1, wherein determining the first target virtual image according to the gaze point and the field angle of the first target eye comprises: determining the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determining the part of the first virtual image within the visible range of the first target eye as the first target virtual image.
- 5. The method according to claim 4, wherein determining, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen comprises: acquiring a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determining the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
- 6. An apparatus for determining a gaze area, wherein the apparatus is applicable to a wearable device, the wearable device comprising a first display component and a second display component, the first display component comprising a first display screen and a first lens located on the light exit side of the first display screen, and the second display component comprising a second display screen and a second lens located on the light exit side of the second display screen, the apparatus comprising: an acquiring module configured to acquire a gaze point of a first target eye on the first display screen; a first determining module configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being the area of the three-dimensional environment presented by the wearable device that is within the visible range of the first target eye; a second determining module configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the virtual image, among the first virtual image formed by the first lens from the image displayed on the first display screen, that is within the visible range of the first target eye; a third determining module configured to determine the target virtual area as the area of the three-dimensional environment presented by the wearable device that is within the visible range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye; a fourth determining module configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, the second target virtual image being the virtual image, among the second virtual image formed by the second lens from the image displayed on the second display screen, that is within the visible range of the second target eye; and a fifth determining module configured to determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
- 7. The apparatus according to claim 6, wherein the first determining module is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine the area of the three-dimensional environment within the visible range of the first target eye as the target virtual area.
- 8. The apparatus according to claim 6 or 7, wherein the fourth determining module is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image within the visible range of the second target eye as the second target virtual image.
- 9. The apparatus according to any one of claims 6 to 8, wherein the second determining module is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image within the visible range of the first target eye as the first target virtual image.
- 10. A wearable device, comprising an image acquisition component, a first display component, and a second display component, the first display component comprising a first display screen and a first lens located on the light exit side of the first display screen, and the second display component comprising a second display screen and a second lens located on the light exit side of the second display screen; the wearable device further comprises the apparatus for determining a gaze area according to any one of claims 6 to 9.
- 11. An apparatus for determining a gaze area, applicable to a wearable device, the wearable device comprising a first display component and a second display component, the first display component comprising a first display screen and a first lens located on the light exit side of the first display screen, and the second display component comprising a second display screen and a second lens located on the light exit side of the second display screen, the apparatus comprising: a processor; and a memory configured to store instructions executable by the processor, wherein, when the instructions are executed by the processor, the processor is configured to perform the method according to any one of claims 1 to 5.
- 12. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the method according to any one of claims 1 to 5 is implemented.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910333506.7 | 2019-04-24 | |
CN201910333506.7A (CN109901290B) | 2019-04-24 | 2019-04-24 | Method and apparatus for determining a gaze area, and wearable device
Publications (1)

Publication Number | Publication Date
---|---
WO2020215960A1 | 2020-10-29
Family
ID=66956250
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2020/080961 (WO2020215960A1) | Method and apparatus for determining a gaze area, and wearable device | 2019-04-24 | 2020-03-24

Country | Link
---|---
CN (1) | CN109901290B
WO (1) | WO2020215960A1
Families Citing this family (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109901290B * | 2019-04-24 | 2021-05-14 | BOE Technology Group Co., Ltd. | Method and apparatus for determining a gaze area, and wearable device
CN110347265A * | 2019-07-22 | 2019-10-18 | Beijing 7invensun Technology Co., Ltd. | Method and apparatus for rendering images
CN113467619B * | 2021-07-21 | 2023-07-14 | Tencent Technology (Shenzhen) Co., Ltd. | Picture display method and apparatus, storage medium, and electronic device
Citations (7)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN105432078A * | 2013-02-19 | 2016-03-23 | RealD Inc. | Binocular gaze imaging method and apparatus
JP2017107359A * | 2015-12-09 | 2017-06-15 | KDDI Corp. | Image display device, program, and method for displaying an object on a glasses-type optical see-through binocular display
US20170358136A1 * | 2016-06-10 | 2017-12-14 | Oculus Vr, Llc | Focus adjusting virtual reality headset
CN107797280A * | 2016-08-31 | 2018-03-13 | LG Display Co., Ltd. | Personal immersive display device and driving method thereof
CN108369744A * | 2018-02-12 | 2018-08-03 | Hong Kong Applied Science and Technology Research Institute Co., Ltd. | 3D gaze point detection through binocular homography mapping
CN109087260A * | 2018-08-01 | 2018-12-25 | Beijing 7invensun Information Technology Co., Ltd. | Image processing method and apparatus
CN109901290A * | 2019-04-24 | 2019-06-18 | BOE Technology Group Co., Ltd. | Method and apparatus for determining a gaze area, and wearable device
Family Cites Families (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US9398229B2 * | 2012-06-18 | 2016-07-19 | Microsoft Technology Licensing, Llc | Selective illumination of a region within a field of view
US9766459B2 * | 2014-04-25 | 2017-09-19 | Microsoft Technology Licensing, Llc | Display devices with dimming panels
CN105425399B * | 2016-01-15 | 2017-11-28 | Zhongyi Industrial Design (Hunan) Co., Ltd. | Head-mounted device user interface presentation method based on the visual characteristics of the human eye
US20190018485A1 * | 2017-07-17 | 2019-01-17 | Thalmic Labs Inc. | Dynamic calibration systems and methods for wearable heads-up displays
CN109031667B * | 2018-09-01 | 2020-11-03 | Harbin Engineering University | Method for locating the lateral boundary of the image display area of virtual reality glasses
- 2019-04-24 CN application CN201910333506.7A filed; granted as patent CN109901290B (status: Active)
- 2020-03-24 WO application PCT/CN2020/080961 filed as WO2020215960A1 (status: Application Filing)
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2022160933A1 * | 2021-01-26 | 2022-08-04 | Huawei Technologies Co., Ltd. | Systems and methods for gaze prediction on touch-enabled devices using touch interactions
US11474598B2 | 2021-01-26 | 2022-10-18 | Huawei Technologies Co., Ltd. | Systems and methods for gaze prediction on touch-enabled devices using touch interactions
Also Published As

Publication number | Publication date
---|---
CN109901290B | 2021-05-14
CN109901290A | 2019-06-18
Legal Events

Date | Code | Title | Description
---|---|---|---
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20796260; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | EP: PCT application non-entry in European phase | Ref document number: 20796260; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/05/2022)