WO2020215960A1 - Method and apparatus for determining a gaze area, and wearable device - Google Patents

Method and apparatus for determining a gaze area, and wearable device Download PDF

Info

Publication number
WO2020215960A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
area
eye
virtual
display screen
Prior art date
Application number
PCT/CN2020/080961
Other languages
English (en)
French (fr)
Inventor
李文宇
苗京花
孙玉坤
王雪丰
彭金豹
李治富
赵斌
李茜
范清文
索健文
刘亚丽
栗可
陈丽莉
张浩
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司, 北京京东方光电科技有限公司
Publication of WO2020215960A1 publication Critical patent/WO2020215960A1/zh

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the embodiments of the present disclosure relate to a method, device and wearable device for determining a gaze area.
  • Virtual reality (VR) technology constructs a three-dimensional environment (i.e., a virtual scene) and provides users with a sense of immersion through that three-dimensional environment.
  • A wearable device using VR technology can selectively present, as a high-definition image, the part of the image displayed on its display screen that the user is gazing at, while presenting the other image parts as non-high-definition images.
  • The related art provides a method for determining the gaze area, which can be used to determine the image part that the user is gazing at; in this method, the left-eye gaze area and the right-eye gaze area of the user are determined separately according to the gaze point information of the user's left eye and the gaze point information of the right eye.
  • Various embodiments of the present disclosure provide a method for determining a gaze area, which is applicable to a wearable device. The wearable device includes a first display component and a second display component; the first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen.
  • The method includes: obtaining the gaze point of a first target eye on the first display screen; determining a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being an area, in the three-dimensional environment presented by the wearable device, that lies within the visible range of the first target eye; determining a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image displayed on the first display screen, that lies within the visible range of the first target eye; determining the target virtual area as an area, in the three-dimensional environment presented by the wearable device, that lies within the visible range of a second target eye; determining a second target virtual image according to the target virtual area and the position of the second target eye; and determining, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
  • In some embodiments, determining the target virtual area according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determining an area in the three-dimensional environment within the visible range of the first target eye as the target virtual area.
  • In some embodiments, determining the second target virtual image according to the target virtual area and the position of the second target eye includes: determining the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and determining the portion of the second virtual image that is located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
  • In some embodiments, determining the first target virtual image according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determining the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
  • In some embodiments, determining, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen includes: acquiring a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determining the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
  • Various embodiments of the present disclosure provide an apparatus for determining a gaze area, which is suitable for a wearable device including a first display component and a second display component, and the first display component includes a first display screen And a first lens located on the light exit side of the first display screen, the second display assembly includes a second display screen and a second lens located on the light exit side of the second display screen, and the gaze area determining device includes:
  • An acquiring module configured to acquire the gaze point of the first target eye on the first display screen
  • a first determining module, configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye;
  • a second determining module, configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the virtual image, in the first virtual image formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye;
  • a third determining module, configured to determine the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the visible range of a second target eye, where the second target eye is the one of the left eye and the right eye other than the first target eye;
  • a fourth determining module, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the virtual image, in the second virtual image formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye;
  • a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
  • In some embodiments, the first determining module is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine an area in the three-dimensional environment within the visible range of the first target eye as the target virtual area.
  • In some embodiments, the fourth determining module is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image located within the visible range of the second target eye as the second target virtual image.
  • In some embodiments, the second determining module is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
  • a wearable device including: an image acquisition component, a first display component, and a second display component.
  • The first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen; the wearable device also includes any one of the above-mentioned gaze area determining devices.
  • the device for determining a gaze area is suitable for a wearable device.
  • the wearable device includes a first display component and a second display component.
  • the component includes a first display screen and a first lens located on the light-emitting side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-emitting side of the second display screen.
  • The gaze area determining device includes: a processor; and a memory configured to store instructions executable by the processor; when the instructions are executed by the processor, the processor is configured to perform any one of the above methods for determining a gaze area.
  • Various embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, any one of the above-mentioned methods for determining a gaze area is implemented.
  • FIG. 1 is a schematic diagram of a left-eye high-definition image and a right-eye high-definition image determined by a method for determining a gaze area in related technologies;
  • Fig. 2 is a schematic structural diagram of a wearable device according to an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of a human eye viewing an image in a display screen through a lens according to an embodiment of the present disclosure
  • FIG. 4 is a method flowchart of a method for determining a gaze area provided by an embodiment of the present disclosure
  • FIG. 5 is a method flowchart of another method for determining a gaze area provided by an embodiment of the present disclosure
  • Fig. 6 is a flow chart of a method for determining a target virtual area according to the gaze point and the field angle of the first target eye according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a visual range of a first target eye according to an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of a method for determining a virtual image of a first target according to the gaze point and the angle of view of the first target eye according to an embodiment of the present disclosure
  • FIG. 9 is a flowchart of a method for determining a second target virtual image according to the target virtual area and the position of the second target eye according to an embodiment of the present disclosure
  • FIG. 10 is a schematic diagram of determining a gaze area provided by an embodiment of the present disclosure.
  • FIG. 11 is a block diagram of a device for determining a gaze area provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of a wearable device provided by an embodiment of the present disclosure.
  • VR technology is a technology that uses a wearable device to shut off the user's vision, and even hearing, from the outside world, so as to guide the user into the sensation of being in a virtual three-dimensional environment.
  • Its display principle is that the display screens corresponding to the left eye and the right eye respectively display images for the left eye and the right eye to view; because of the parallax between the human eyes, the brain produces a close-to-real three-dimensional effect after obtaining the slightly different images through the two eyes.
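  • The parallax mentioned above can be made concrete with a minimal, illustrative sketch (an addition for this write-up, not part of the original disclosure): the same virtual point projects to different horizontal positions on the image planes seen by the left and right eyes, and that offset is what the brain fuses into depth. All numeric values below (interpupillary distance, plane distance, point position) are assumed example values.

```python
# Minimal illustration of binocular parallax; all numbers are assumed example values.
IPD = 0.064          # interpupillary distance in meters (assumed)
PLANE_DIST = 1.0     # distance from the eyes to the viewed image plane, meters (assumed)
POINT_DEPTH = 3.0    # depth of the observed virtual point, meters (assumed)
POINT_X = 0.2        # horizontal offset of the point from the midpoint between the eyes

def project_x(eye_x: float) -> float:
    """Project the virtual point onto the image plane by similar triangles."""
    return eye_x + (POINT_X - eye_x) * PLANE_DIST / POINT_DEPTH

left_x = project_x(-IPD / 2)
right_x = project_x(+IPD / 2)
print(f"left-eye projection x = {left_x:.4f} m, right-eye projection x = {right_x:.4f} m")
print(f"binocular disparity = {left_x - right_x:.4f} m")  # the difference the brain interprets as depth
```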
  • VR technology is usually implemented by a VR system.
  • the VR system may include a wearable device and a VR host, where the VR host may be integrated in the wearable device, or an external device that can be wired or wirelessly connected to the wearable device.
  • the VR host is used to render the image and send the rendered image to the wearable device, and the wearable device is used to receive and display the rendered image.
  • Eye tracking (English: Eye Tracking), also known as gaze tracking, is a technique that collects images of the human eye, analyzes the eye movement information of the eye from those images, and, based on the eye movement information, determines the gaze point of the eye on the display screen. Further, in eye tracking technology, the gaze area of the human eye on the display screen can be determined according to the determined gaze point of the eye on the display screen.
  • SmartView is a technical solution that combines VR technology with Eye Tracking technology to achieve high-definition VR technology.
  • The technical solution includes: first, accurately tracking the user's gaze area on the display screen through Eye Tracking technology; then performing high-definition rendering only on the gaze area and non-high-definition rendering on the other areas. An integrated circuit (English: Integrated Circuit, abbreviated: IC) can process the rendered non-high-definition images (also called low-definition images) into high-resolution images and display them on the display screen.
  • the display screen may be a liquid crystal display (English: Liquid Crystal Display, abbreviation: LCD) screen or an organic light emitting diode (English: Organic Light-Emitting Diode, abbreviation: OLED) display screen, etc.
  • Unity also known as Unity Engine, is a multi-platform comprehensive game development tool developed by Unity Technologies and a fully integrated professional game engine. Unity can be used to develop VR technology.
  • the Eye Tracking technology requires two cameras to be installed in the wearable device.
  • The two cameras can separately collect the eye images of the left eye and the right eye (these eye images are also called gaze point images, etc.), and the VR host calculates the gaze point coordinates based on the eye images.
  • the two cameras provided in the wearable device of the VR system greatly increase the weight and cost of the wearable device, which is not conducive to the general promotion of the VR system.
  • Moreover, this technical solution does not take into account the visual characteristics of people: because the left eye and the right eye are located at different positions in space, the viewing angles of the left eye and the right eye when viewing an object are different. As a result, the position of the same object in the left-eye field of view differs from its position in the right-eye field of view, and the images seen by the two eyes do not actually overlap completely. Therefore, if the left-eye gaze point coordinates and the right-eye gaze point coordinates are calculated from the left-eye image and the right-eye image respectively, the two sets of coordinates do not actually coincide on the display screen; and if the left-eye gaze area and the right-eye gaze area are then determined according to these gaze point coordinates, it is difficult for the left-eye gaze area and the right-eye gaze area to coincide completely.
  • FIG. 1 shows a left-eye high-definition image 11 and a right-eye high-definition image 12 obtained after high-definition rendering of the fixation point regions of the left and right eyes respectively. It can be seen from FIG. 1 that only the middle part of the left-eye high-definition image 11 and the right-eye high-definition image 12 overlap.
  • the visual experience presented to the user is that the user can see the high-definition image area 13, the high-definition image area 14, and the high-definition image area 15 in the visual field of the left and right eyes.
  • the high-definition image area 13 is a high-definition image area that can be seen by both left and right eyes
  • the high-definition image area 14 is a high-definition image area that can be seen only by the left eye
  • the high-definition image area 15 is a high-definition image area that can be seen only by the right eye.
  • Since the high-definition image area 14 and the high-definition image area 15 are high-definition image areas that can be seen by only one of the two eyes, the user's viewing experience is affected when the user gazes at the display screen with both eyes at the same time.
  • In addition, relatively obvious boundary lines appear between the high-definition image area 13 and the high-definition image area 15, as well as between the high-definition image area 14 and the high-definition image area 15, which further affects the user's visual experience.
  • Various embodiments of the present disclosure provide a method for determining a gaze area, which can ensure that the determined gaze areas of the left and right eyes overlap, so that the user's left and right eyes can view the fully overlapping high-definition images, which effectively improves the user experience.
  • As shown in FIG. 2, the wearable device 20 may include a first display component 21 and a second display component 22.
  • The first display component 21 includes a first display screen 211 and a first lens 212 located on the light-exit side of the first display screen 211, and the second display component 22 includes a second display screen 221 and a second lens 222 located on the light-exit side of the second display screen 221.
  • The lenses (i.e., the first lens 212 and the second lens 222) are used to magnify the images displayed on the corresponding display screens (i.e., the first display screen 211 and the second display screen 221), so as to provide the user with a more realistic sense of immersion.
  • Taking the first display component 21 as an example, as shown in FIG. 3, the human eye observes, through the first lens 212, a first virtual image 213 corresponding to the image displayed on the first display screen 211; the first virtual image 213 is usually an enlarged version of the image displayed on the first display screen 211.
  • The wearable device may further include an image acquisition component. The image acquisition component may be an eye tracking camera integrated around at least one of the first display screen and the second display screen of the wearable device; it is used to collect, in real time, the human eye image corresponding to that at least one display screen and send it to the VR host, and the VR host processes the human eye image to determine the gaze point coordinates of the human eye on the display screen.
  • The gaze area determining device then acquires the gaze point coordinates.
  • The wearable device also includes a gaze area determining device. The gaze area determining device can be integrated into the wearable device, or into the VR host, by means of software or hardware, and can be configured to execute the following method for determining the gaze area.
  • FIG. 4 shows a flowchart of a method for determining a gaze area provided by an embodiment of the present disclosure.
  • the method may include the following steps:
  • Step S201: Obtain the gaze point of a first target eye on the first display screen, where the first target eye is the left eye or the right eye.
  • Step S202: Determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye.
  • Step S203: Determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part of the first virtual image, formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye.
  • Step S204: Determine the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the visible range of a second target eye, where the second target eye is the one of the left eye and the right eye other than the first target eye.
  • Step S205: Determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye.
  • Step S206: Determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
  • In summary, in the method for determining a gaze area provided by the embodiments of the present disclosure, the target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is determined as an area within the visible range of the second target eye, so as to determine the visible range of the second target eye. The first virtual image seen by the first target eye and the second virtual image seen by the second target eye can then be determined, from which the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen can be determined.
  • Because the first gaze area of the first target eye on the first display screen and the second gaze area of the second target eye on the second display screen are determined by the same target virtual area, the first gaze area and the second gaze area can coincide completely, which effectively improves the display effect of images in the wearable device and enhances the user's visual experience.
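  • Purely for illustration, the sketch below pieces the six steps above together in a simplified 2D (horizontal-plane) form. All geometry in it (eye positions, plane distances, field angle, magnification) consists of assumed example values, and the analytic plane intersections merely stand in for the ray casts a rendering engine such as Unity would perform; it is not the implementation of the disclosure.

```python
import math

# --- Assumed example geometry (horizontal plane only, arbitrary units) ---
EYE_1, EYE_2 = (-0.032, 0.0), (0.032, 0.0)   # first / second target eye positions (assumed)
VIRTUAL_IMAGE_Z = 1.0                        # plane of the virtual image formed by a lens (assumed)
SCENE_Z = 4.0                                # plane standing in for the 3D virtual scene (assumed)
FOV = math.radians(30.0)                     # field angle of the first target eye (assumed)
MAGNIFICATION = 4.0                          # stand-in for the anti-distortion mapping (assumed)

def hit_plane(origin, direction, z):
    """Intersect a 2D ray (origin, direction) with the plane at depth z."""
    t = (z - origin[1]) / direction[1]
    return (origin[0] + t * direction[0], z)

def boundary_dirs(eye, gaze_point, fov):
    """Rays along the boundary of the field angle, centered on the gaze direction (steps S202/S203)."""
    center = math.atan2(gaze_point[0] - eye[0], gaze_point[1] - eye[1])
    return [(math.sin(center - fov / 2), math.cos(center - fov / 2)),
            (math.sin(center + fov / 2), math.cos(center + fov / 2))]

# S201: gaze point of the first target eye (placed on the virtual-image plane for this sketch)
gaze = (0.10, VIRTUAL_IMAGE_Z)
dirs = boundary_dirs(EYE_1, gaze, FOV)

# S202: target virtual area = where the boundary rays meet the virtual scene (calibration points)
calibration_points = [hit_plane(EYE_1, d, SCENE_Z) for d in dirs]
# S203: first target virtual image = where the same rays meet the first virtual image (contact points)
first_contacts = [hit_plane(EYE_1, d, VIRTUAL_IMAGE_Z) for d in dirs]

# S204/S205: reuse the same target virtual area for the second target eye; rays from the second
# eye toward the calibration points, cut by the second virtual image plane, give the second contacts
dirs_2 = [(p[0] - EYE_2[0], p[1] - EYE_2[1]) for p in calibration_points]
second_contacts = [hit_plane(EYE_2, d, VIRTUAL_IMAGE_Z) for d in dirs_2]

# S206: map virtual-image coordinates back to on-screen coordinates; a real device would use its
# anti-distortion grid, a plain scale factor is assumed here
first_gaze_area = sorted(x / MAGNIFICATION for x, _ in first_contacts)
second_gaze_area = sorted(x / MAGNIFICATION for x, _ in second_contacts)
print("first gaze area (screen x-range): ", first_gaze_area)
print("second gaze area (screen x-range):", second_gaze_area)
```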
  • FIG. 5 shows a flowchart of a method for determining a gaze area according to another embodiment of the present disclosure.
  • The method for determining a gaze area can be executed by a gaze area determining device and is applicable to a wearable device.
  • For the structure of the wearable device, reference may be made to the wearable device shown in FIG. 2 above.
  • the method for determining the gaze area may include the following steps:
  • Step S301 Obtain the gaze point of the first target eye on the first display screen.
  • In the embodiments of the present disclosure, an eye tracking camera may be arranged around the first display screen of the wearable device; the eye tracking camera may collect the human eye image of the corresponding first target eye in real time, and the VR host determines the gaze point coordinates of the first target eye on the first display screen according to the human eye image.
  • the gaze area determining device obtains the gaze point coordinates.
  • Step S302: Determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye.
  • determining the target virtual area according to the gaze point and the angle of view of the first target eye may include:
  • Step S3021 Determine the visible range of the first target eye according to the gaze point and the angle of view of the first target eye.
  • the viewing angle of the first target eye may be composed of a horizontal viewing angle and a vertical viewing angle, and the area located within the horizontal viewing angle and the vertical viewing angle of the first target eye is the viewing range of the first target eye .
  • The actual viewing angle of the human eye is limited; generally speaking, the maximum horizontal field angle of the human eye is 188 degrees and the maximum vertical field angle is 150 degrees. Normally, no matter how the eye rotates, the field angle of the eye remains unchanged. Accordingly, the visible range of the first target eye can be determined from the gaze point of the first target eye and the horizontal and vertical field angles of the first target eye.
  • the viewing angles of the left eye and the right eye may be different. Considering individual differences, the viewing angles of different people may also be different, which is not limited in the embodiments of the present disclosure.
  • Figure 7 schematically shows the gaze point G of the first target eye O, the horizontal field angle a and the vertical field angle b of the first target eye, and the visible range of the first target eye (i.e., the range bounded by point O, point P, point Q, point M, and point N).
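  • As a hedged illustration of this step (not taken from the disclosure), the sketch below derives the four corner directions of such a visible range, treating it as a rectangular frustum whose axis passes through the gaze point; the axis conventions, the rectangular-frustum assumption, and the numeric values are all assumptions made for the example.

```python
import numpy as np

def visible_range_corner_dirs(eye_pos, gaze_point, h_fov_deg, v_fov_deg):
    """Return four unit vectors along the corners of the eye's visible range.

    The visible range is modeled as a rectangular frustum whose axis passes through
    the gaze point and whose half-angles are half the horizontal and vertical field
    angles (an assumption made for this sketch, matching FIG. 7 qualitatively).
    """
    forward = gaze_point - eye_pos
    forward = forward / np.linalg.norm(forward)
    world_up = np.array([0.0, 1.0, 0.0])                      # assumed up direction
    right = np.cross(world_up, forward); right /= np.linalg.norm(right)
    up = np.cross(forward, right)

    tan_h = np.tan(np.radians(h_fov_deg) / 2.0)
    tan_v = np.tan(np.radians(v_fov_deg) / 2.0)
    corners = []
    for sx in (-1.0, 1.0):
        for sy in (-1.0, 1.0):
            d = forward + sx * tan_h * right + sy * tan_v * up
            corners.append(d / np.linalg.norm(d))
    return corners

# Example with assumed values: eye at the origin, gaze point 1 m ahead and slightly to the right
eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.1, 0.0, 1.0])
for d in visible_range_corner_dirs(eye, gaze, h_fov_deg=100.0, v_fov_deg=90.0):
    print(np.round(d, 3))
```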
  • Step S3022 Determine an area within the visible range of the first target eye in the three-dimensional environment as the target virtual area.
  • determining the target virtual area may include the following steps:
  • Step A1: Emit at least two rays from the position of the first target eye (for ease of description, the position of the first target eye is regarded as a point), the at least two rays being emitted along the boundary of the field angle of the first target eye, respectively.
  • In some embodiments, the Unity engine can emit at least two rays from the position of the first target eye (the rays are virtual rays), that is, draw at least two rays with the position of the first target eye as the starting point, and the at least two rays can be emitted along the boundary of the field angle of the first target eye.
  • a first virtual camera and a second virtual camera are respectively provided at the position of the first target eye and the position of the second target eye.
  • the images seen by the user's left and right eyes through the first display screen and the second display screen in the wearable device come from the images taken by the first virtual camera and the second virtual camera, respectively.
  • Since the position of the first target eye is the position of the first virtual camera in the wearable device, in the embodiments of the present disclosure the position of a target eye can be represented by the position of the corresponding virtual camera, and the Unity engine can emit the at least two rays from the position of the first virtual camera.
  • Step A2 Obtain at least two points where at least two rays come into contact with the virtual area, and use the at least two points as calibration points respectively.
  • the at least two rays will come into contact with the three-dimensional environment presented by the wearable device, that is, the virtual area to generate contact points.
  • In the Unity engine, when a ray with physical properties collides with a collider on the surface of a virtual object, the Unity engine can determine the coordinates of the collision point, that is, the coordinates of the contact point on the surface of the virtual object.
  • Step A3 Determine the area enclosed by the at least two calibration points in the virtual area as the target virtual area.
  • In some embodiments, the geometric shape of the target virtual area can be determined in advance, the at least two calibration points are connected according to that geometric shape, and the area enclosed by the connecting lines is determined as the target virtual area. If there are only two calibration points and their coordinates on the surface of the virtual object differ considerably (for example, the two calibration points lie close to rays OQ and ON, respectively, as shown in FIG. 7), the target virtual area can be determined from the coordinates of the two calibration points together with the horizontal field angle and the vertical field angle.
  • In some embodiments, object recognition can also be performed on the area enclosed by the connecting lines to extract valid objects in the enclosed area while ignoring invalid objects in it (for example, the sky and other background), and the area where the valid objects are located is determined as the target virtual area.
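  • In Unity itself these contact points would come from physics ray casts against the scene's colliders; the hedged Python stand-in below instead intersects the boundary rays with a single plane representing the surface of the virtual scene, takes the hits as calibration points, and reports the rectangle they enclose as the target virtual area. The plane, its distance, and the ray directions are assumptions for the example.

```python
import numpy as np

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Intersection of a ray with a plane; returns None if the ray is parallel or points away."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

def target_virtual_area(eye_pos, corner_dirs, scene_point, scene_normal):
    """Steps A1-A3 sketch: calibration points are where the visible-range boundary rays touch
    the scene surface; the region they enclose (a bounding box here) is the target virtual area."""
    calibration_points = [ray_plane_hit(eye_pos, d, scene_point, scene_normal) for d in corner_dirs]
    calibration_points = [p for p in calibration_points if p is not None]
    pts = np.array(calibration_points)
    return calibration_points, (pts.min(axis=0), pts.max(axis=0))

# Example with assumed values: scene surface 4 m in front of the eye
eye = np.array([0.0, 0.0, 0.0])
dirs = [np.array([ 0.3,  0.2, 1.0]), np.array([-0.3,  0.2, 1.0]),
        np.array([ 0.3, -0.2, 1.0]), np.array([-0.3, -0.2, 1.0])]
calib, bbox = target_virtual_area(eye, dirs, np.array([0.0, 0.0, 4.0]), np.array([0.0, 0.0, -1.0]))
print("calibration points:", [p.round(2) for p in calib])
print("target virtual area (min corner, max corner):", bbox[0].round(2), bbox[1].round(2))
```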
  • Step S303: Determine the first target virtual image according to the gaze point and the field angle of the first target eye.
  • The first target virtual image is the part of the first virtual image, formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye.
  • the left and right eyes see the first virtual image and the second virtual image through the first lens and the second lens.
  • After the two eyes simultaneously obtain the first virtual image and the second virtual image, a three-dimensional image with depth is formed.
  • Therefore, the first virtual image seen by the first target eye and the second virtual image seen by the second target eye need to be identified again.
  • The re-identified first virtual image and second virtual image may be transparent.
  • determining the first virtual image of the target according to the gaze point and the angle of view of the first target eye may include:
  • Step S3031 Determine the visible range of the first target eye according to the position, the gaze point, and the angle of view of the first target eye.
  • For the implementation of step S3031, reference may be made to the related description of step S3021 above, and details are not repeated here in the embodiments of the present disclosure.
  • Step S3032 Determine the part of the first virtual image located in the visible range of the first target eye as the first target virtual image.
  • determining the first target virtual image may include the following steps:
  • Step B1 At least two rays are emitted from the position of the first target eye, and the at least two rays are respectively emitted along the boundary of the field of view of the first target eye.
  • For the implementation of step B1, reference may be made to the related description of step A1 above, and details are not repeated here in the embodiments of the present disclosure.
  • Step B1 characterizes the visible range of the first target eye by means of rays, so that the first target virtual image within the first virtual image can be determined accurately.
  • Step B2 Acquire at least two first contact points where the at least two rays contact the first virtual image.
  • the at least two rays will contact the first virtual image to form at least two first contact points.
  • Step B3 Determine the area enclosed by the at least two first contact points as the first target virtual image.
  • In some embodiments, the first target virtual image can be determined according to a predetermined geometric shape, or object recognition can be performed on the enclosed area and the recognized object determined as the first target virtual image.
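  • The disclosure does not say how the first virtual image is modeled geometrically; as one hedged possibility, it can be approximated as a plane whose distance is estimated with the thin-lens (Gaussian) formula, and the first contact points of steps B1-B2 are then the intersections of the boundary rays with that plane. The focal length, screen-to-lens distance, and ray directions below are assumed example values, and the lens and eye are treated as coincident for simplicity.

```python
import numpy as np

def virtual_image_distance(screen_to_lens: float, focal_length: float) -> float:
    """Gaussian lens formula 1/f = 1/s_o + 1/s_i; with the screen inside the focal length
    (s_o < f), s_i is negative, i.e. a magnified virtual image on the viewer's side."""
    s_i = 1.0 / (1.0 / focal_length - 1.0 / screen_to_lens)
    return abs(s_i)

def first_target_virtual_image(eye_pos, corner_dirs, image_plane_z):
    """Steps B1-B3 sketch: first contact points are where the boundary rays meet the
    virtual image plane; the region they enclose is summarized here as a bounding box."""
    contacts = [eye_pos + ((image_plane_z - eye_pos[2]) / d[2]) * d for d in corner_dirs]
    pts = np.array(contacts)
    return contacts, (pts.min(axis=0), pts.max(axis=0))

# Assumed optics: screen 45 mm from the lens, focal length 50 mm -> virtual image ~0.45 m away
plane_z = virtual_image_distance(screen_to_lens=0.045, focal_length=0.050)
eye = np.array([0.0, 0.0, 0.0])
dirs = [np.array([ 0.3,  0.2, 1.0]), np.array([-0.3,  0.2, 1.0]),
        np.array([ 0.3, -0.2, 1.0]), np.array([-0.3, -0.2, 1.0])]
contacts, bbox = first_target_virtual_image(eye, dirs, image_plane_z=plane_z)
print(f"virtual image plane at ~{plane_z:.3f} m")
print("first target virtual image (min corner, max corner):", bbox[0].round(3), bbox[1].round(3))
```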
  • Step S304 Determine the target virtual area as an area within the range of the angle of view of the second target eye in the three-dimensional environment presented by the wearable device.
  • Determining the target virtual area, which was determined according to the gaze point of the first target eye and the field angle of the first target eye, as an area within the range of the field angle of the second target eye ensures that the gaze areas finally determined for the two eyes coincide.
  • Step S305 Determine a second target virtual image according to the target virtual area and the position of the second target eye.
  • The second target virtual image is the part of the second virtual image, formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye.
  • In some embodiments, determining the second target virtual image according to the target virtual area and the position of the second target eye may include:
  • Step S3051 Determine the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye.
  • At least two rays are emitted from the position of the second target eye, and the at least two rays are respectively connected to at least two calibration points surrounding the target virtual area.
  • the position of the second target eye and the space area enclosed by the at least two calibration points are the visible range of the second target eye in the three-dimensional environment.
  • the visible range of the second target eye in the three-dimensional environment is a partial spatial area within the visible range of the second target eye.
  • the Unity engine can control the position of the second target eye to emit at least two rays, that is, draw at least two rays with the position of the second target eye as the starting point and at least two calibration points as the ending point.
  • the position of the second target eye is the position of the second virtual camera in the wearable device.
  • Step S3052 Determine the part of the second virtual image that is located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
  • determining the second target virtual image in step S3052 may include the following steps:
  • Step C1: Acquire at least two second contact points where the at least two rays contact the second virtual image. In the extending direction of the at least two rays, the at least two rays will contact the second virtual image to form at least two second contact points.
  • Step C2: Determine the area enclosed by the at least two second contact points as the second target virtual image.
  • In some embodiments, the second target virtual image may be determined according to a predetermined geometric shape. Alternatively, if object recognition was performed in step B3 on the area enclosed by the at least two first contact points and the recognized object was determined as the first target virtual image, then in step C2 object recognition is likewise performed on the area enclosed by the at least two second contact points, and the recognized object is determined as the second target virtual image.
  • It should be noted that in step B3 and step C2 the same recognition algorithm with the same algorithm parameters should be used on the enclosed areas, to ensure that the recognized objects are consistent.
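  • Again purely as a hedged sketch (an actual system would issue these rays inside the engine), the second contact points can be obtained by drawing rays from an assumed second-eye position toward each calibration point and intersecting them with the second virtual image plane; all positions and distances below are example values.

```python
import numpy as np

def second_target_virtual_image(second_eye, calibration_points, image_plane_z):
    """Steps S3051 and C1-C2 sketch: rays from the second target eye toward the calibration
    points that enclose the target virtual area, cut by the second virtual image plane."""
    second_contacts = []
    for p in calibration_points:
        d = p - second_eye                                  # ray direction: eye -> calibration point
        t = (image_plane_z - second_eye[2]) / d[2]
        second_contacts.append(second_eye + t * d)
    pts = np.array(second_contacts)
    return second_contacts, (pts.min(axis=0), pts.max(axis=0))  # enclosed region as a bounding box

# Assumed example: calibration points on a scene plane 4 m away, second eye 64 mm to the right
second_eye = np.array([0.064, 0.0, 0.0])
calibration_points = [np.array([ 1.2,  0.8, 4.0]), np.array([-1.2,  0.8, 4.0]),
                      np.array([ 1.2, -0.8, 4.0]), np.array([-1.2, -0.8, 4.0])]
contacts, bbox = second_target_virtual_image(second_eye, calibration_points, image_plane_z=0.45)
print("second target virtual image (min corner, max corner):", bbox[0].round(3), bbox[1].round(3))
```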
  • Step S306: Obtain a first corresponding area of the first target virtual image in the image displayed on the first display screen, and a second corresponding area of the second target virtual image in the image displayed on the second display screen.
  • In some embodiments, the at least two first contact points and the at least two second contact points are converted, respectively, into at least two first image points in the image displayed on the first display screen and at least two second image points in the image displayed on the second display screen.
  • The target virtual image is distorted compared to the corresponding displayed image, and the correspondence between virtual image coordinates and image coordinates is recorded in the anti-distortion grid.
  • The at least two first contact points and the at least two second contact points are all virtual image coordinates located in a virtual image, while the at least two first image points and the at least two second image points are all image coordinates in the images displayed on the screens (the screen coordinates correspond to the image coordinates of the displayed images). Therefore, based on the correspondence between virtual image coordinates and image coordinates in the anti-distortion grid, the at least two first contact points can be converted into at least two first image points in the image displayed on the first display screen, and the at least two second contact points can be converted into at least two second image points in the image displayed on the second display screen.
  • In some embodiments, the first corresponding area is determined according to the at least two first image points.
  • Optionally, the area enclosed by the at least two first image points is determined as the first corresponding area; or object recognition may be performed on the enclosed area, and the recognized object is determined as the first corresponding area.
  • The second corresponding area is determined according to the at least two second image points; optionally, the area enclosed by the at least two second image points is determined as the second corresponding area, or object recognition may be performed on the enclosed area and the recognized object determined as the second corresponding area.
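  • The disclosure only states that the correspondence between virtual image coordinates and image coordinates is recorded in the anti-distortion grid; one hedged way to realize such a lookup is to interpolate over sampled correspondence pairs, as sketched below with made-up sample data (the grid values, point coordinates, and the use of linear interpolation are all assumptions).

```python
import numpy as np
from scipy.interpolate import griddata

# Assumed sample pairs of an anti-distortion grid: each virtual-image (x, y) coordinate is paired
# with the on-screen image (u, v) coordinate that produced it through the lens (made-up values).
virtual_xy = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [0.0, 0.0],
                       [0.0, -1.0], [0.0, 1.0], [-1.0, 0.0], [1.0, 0.0]])
screen_uv = np.array([[0.05, 0.05], [0.95, 0.05], [0.05, 0.95], [0.95, 0.95], [0.50, 0.50],
                      [0.50, 0.08], [0.50, 0.92], [0.08, 0.50], [0.92, 0.50]])

def contacts_to_image_points(contact_points_xy):
    """Convert virtual-image contact-point coordinates into image points on the display by
    interpolating the recorded correspondence (a stand-in for the device's own grid lookup)."""
    u = griddata(virtual_xy, screen_uv[:, 0], contact_points_xy, method="linear")
    v = griddata(virtual_xy, screen_uv[:, 1], contact_points_xy, method="linear")
    return np.stack([u, v], axis=-1)

# E.g. two first contact points (assumed coordinates) -> two first image points on the first screen;
# the first corresponding area / first gaze area is then the region these image points enclose.
first_contacts = np.array([[-0.4, -0.3], [0.4, 0.3]])
print(contacts_to_image_points(first_contacts).round(3))
```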
  • Step S307 Determine the first corresponding area as the first gaze area.
  • Step S308 Determine the second corresponding area as the second gaze area.
  • In summary, in the method for determining a gaze area provided by the embodiments of the present disclosure, the target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is determined as an area within the visible range of the second target eye, so as to determine the visible range of the second target eye. The first virtual image seen by the first target eye and the second virtual image seen by the second target eye can then be determined, and thereby the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen can be determined.
  • Because the first gaze area of the first target eye on the first display screen and the second gaze area of the second target eye on the second display screen are determined by the same target virtual area, the first gaze area and the second gaze area can coincide completely. This solves the problem in the related art that the gaze areas of the left and right eyes are difficult to make completely coincide, which results in a poor image display effect in wearable devices, and it effectively improves the display effect of images in the wearable device and enhances the user's visual experience.
  • In addition, in the embodiments of the present disclosure, the gaze point of the first target eye on the display screen can be obtained through an eye tracking camera; therefore, a wearable device to which the method for determining a gaze area provided by the embodiments of the present disclosure is applied can be provided with only one eye tracking camera. In the related art, by contrast, eye tracking cameras need to be provided for the left eye and the right eye separately, the human eye images of the left and right eyes are collected separately, and the gaze points of the left and right eyes are analyzed to determine the gaze areas of the left and right eyes.
  • Therefore, the method for determining a gaze area provided by the embodiments of the present disclosure can effectively reduce the weight and cost of the wearable device, which is beneficial to the popularization of the wearable device.
  • It should be noted that step S307 and step S308 can be performed at the same time, or step S308 can be performed first and then step S307.
  • Similarly, step S303 and step S304 can be performed simultaneously, or step S304 can be performed first and then step S303.
  • For example, with reference to FIG. 10, the method for determining the gaze area includes the following steps:
  • Step S1 Obtain the gaze point S of the first target eye 213 on the first display screen 211.
  • Step S2: Determine the field angle α of the first target eye 213 according to the gaze point S.
  • the description is made by taking the viewing angle as the horizontal viewing angle as an example.
  • Step S3: Emit two rays from the position of the first target eye 213 along the boundary of the field angle α of the first target eye 213, obtain the two contact points where the two rays come into contact with the virtual area 23, determine the two contact points as a first calibration point S1 and a second calibration point S2, and determine the area enclosed by the first calibration point and the second calibration point in the virtual area 23 as the target virtual area.
  • In the embodiments of the present disclosure, the description is given by taking, as an example, the area between the calibration point S1 and the calibration point S2 (the region enclosed by the line connecting them) to represent the target virtual area.
  • Step S4: Acquire the first contact point C' and the second contact point A' where the two rays emitted from the position of the first target eye 213 come into contact with the first virtual image 214, and determine the first target virtual image according to the first contact point C' and the second contact point A'.
  • In the embodiments of the present disclosure, the description is given by taking, as an example, the area between the first contact point C' and the second contact point A' to represent the first target virtual image.
  • Step S5: Determine the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the range of the field angle β of the second target eye 223.
  • Step S6: Determine the second target virtual image according to the target virtual area and the position of the second target eye 223; that is, emit two rays from the position of the second target eye 223 toward the calibration points S1 and S2, and acquire the third contact point D' and the fourth contact point B' where the two rays come into contact with the second virtual image 224.
  • In the embodiments of the present disclosure, the description is given by taking, as an example, the area between the third contact point D' and the fourth contact point B' to represent the second target virtual image.
  • Step S7: Convert the first contact point C' into the first image point C in the image displayed on the first display screen, convert the second contact point A' into the second image point A in the image displayed on the first display screen, convert the third contact point D' into the third image point D in the image displayed on the second display screen, and convert the fourth contact point B' into the fourth image point B in the image displayed on the second display screen.
  • The first gaze area is determined according to the first image point C and the second image point A, and the second gaze area is determined according to the third image point D and the fourth image point B.
  • It should be noted that, in practice, the first virtual image 214 and the second virtual image 224 overlap; however, in order to facilitate the description of the method for determining the gaze area, the first virtual image 214 and the second virtual image 224 are shown in FIG. 10 as not overlapping.
  • The calibration point S1 and the calibration point S2 used to represent the target virtual area, the gaze point S, and the like are all illustrative.
  • FIG. 11 shows a gaze area determining device 30 according to an embodiment of the present disclosure.
  • the gaze area determining device 30 can be applied to the wearable device shown in FIG. 2, and the gaze area determining device 30 includes:
  • the obtaining module 301 is configured to obtain the gaze point of the first target eye on the first display screen, where the first target eye is a left eye or a right eye;
  • the first determining module 302, configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, where the target virtual area is an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye;
  • the second determining module 303, configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, where the first target virtual image is the part of the first virtual image, formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye;
  • the third determining module 304, configured to determine the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the second target eye;
  • the fourth determining module 305, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye;
  • the fifth determining module 306, configured to determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
  • In the gaze area determining device, the target virtual area is determined from the gaze point of the first target eye on the first display screen and the field angle of the first target eye, and the target virtual area is determined as an area within the visible range of the second target eye, so as to determine the visible range of the second target eye; the first virtual image seen by the first target eye and the second virtual image seen by the second target eye can then be determined, and thereby the first gaze area and the second gaze area.
  • Because the first gaze area and the second gaze area are determined by the same target virtual area, they can coincide exactly, which solves the problem in the related art that the gaze areas of the left and right eyes are difficult to make completely coincide, resulting in a poor image display effect in wearable devices, and effectively improves the display effect of images in the wearable device and the user's visual experience.
  • In some embodiments, the first determining module 302 is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine an area in the three-dimensional environment within the visible range of the first target eye as the target virtual area.
  • In some embodiments, the fourth determining module 305 is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image located within the visible range of the second target eye as the second target virtual image.
  • In some embodiments, the second determining module 303 is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
  • FIG. 12 shows a schematic structural diagram of a wearable device 20 according to another embodiment of the present disclosure.
  • the wearable device 20 includes a gaze area determining device 24, an image capture component 23, a first display component 21, and a second display component 22 .
  • The gaze area determining device 24 may be the gaze area determining device 30 shown in FIG. 11; for the image acquisition component 23, the first display component 21, and the second display component 22, reference may be made to the foregoing introduction, and details are not repeated here in the embodiments of the present disclosure.
  • At least one embodiment of the present disclosure also provides an apparatus for determining a gaze area.
  • the apparatus for determining a gaze area is suitable for a wearable device.
  • the wearable device includes a first display component and a second display component.
  • the component includes a first display screen and a first lens located on the light-emitting side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-emitting side of the second display screen.
  • the region determining device includes: a processor; a memory, the memory is configured to store instructions executable by the processor, and when the instructions are executed by the processor, the processor is configured to:
  • obtain the gaze point of a first target eye on the first display screen; determine a target virtual area according to the gaze point and the field angle of the first target eye; determine a first target virtual image according to the gaze point and the field angle of the first target eye; determine the target virtual area as an area within the visible range of a second target eye in the three-dimensional environment presented by the wearable device;
  • determine a second target virtual image according to the target virtual area and the position of the second target eye, where the second target virtual image is the part of the second virtual image, formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye; and
  • determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
  • the processor when the processor is configured to determine the target virtual area according to the gaze point and the angle of view of the first target eye, the processor is configured to:
  • the processor when the processor is configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, the processor is configured to: virtualize according to the target The area and the position of the second target eye determine the visible range of the second target eye in the three-dimensional environment; and the second virtual image is located in the three-dimensional environment of the second target eye. The part within the viewing range is determined as the second target virtual image.
  • In some embodiments, when the processor is configured to determine the first target virtual image according to the gaze point and the field angle of the first target eye, the processor is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
  • In some embodiments, when the processor is configured to determine, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen, the processor is configured to: acquire a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determine the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
  • At least one embodiment of the present disclosure also provides a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, any one of the above methods is implemented.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure discloses a method and apparatus for determining a gaze area, and a wearable device, belonging to the field of electronic technology applications. In the method, a target virtual area is determined from the gaze point of a first target eye on a first display screen and the field angle of the first target eye, and the target virtual area is determined as an area within the range of the field angle of a second target eye, so as to determine the field angle of the second target eye; the first virtual image seen by the first target eye and the second virtual image seen by the second target eye can then be determined, and thereby the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen can be determined.

Description

Method and apparatus for determining a gaze area, and wearable device
This application claims priority to Chinese Patent Application No. 201910333506.7, filed on April 24, 2019 and entitled "Method and apparatus for determining a gaze area, and wearable device", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to a method and apparatus for determining a gaze area, and a wearable device.
Background
Virtual reality (English: Virtual Reality; abbreviated: VR) technology has been highly favored by the market in recent years. VR technology can construct a three-dimensional environment (i.e., a virtual scene) and provide users with a sense of immersion through this three-dimensional environment.
At present, users have increasingly high requirements for the definition of the images presented in this three-dimensional environment. In order to avoid the transmission pressure of high-definition images, a wearable device using VR technology can selectively present, as a high-definition image, the image part that the user is gazing at in the image displayed on its display screen, while presenting the other image parts as non-high-definition images. The related art provides a method for determining a gaze area, which can be used to determine the image part that the user is gazing at; in this method, the left-eye gaze area and the right-eye gaze area of the user are determined respectively according to the gaze point information of the user's left eye and the gaze point information of the right eye.
However, because the position of the same object differs between the visual fields of the left and right eyes, the left-eye gaze area and the right-eye gaze area determined separately in the related art are difficult to make completely coincide; as a result, the left-eye high-definition image determined based on the left-eye gaze area and the right-eye high-definition image determined based on the right-eye gaze area are also difficult to make completely coincide, which affects the display effect of images in the wearable device.
Summary
Various embodiments of the present disclosure provide a method for determining a gaze area, applicable to a wearable device. The wearable device includes a first display component and a second display component; the first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen. The method includes: obtaining the gaze point of a first target eye on the first display screen; determining a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye; determining a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye; determining the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the visible range of a second target eye; determining a second target virtual image according to the target virtual area and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye; and determining, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
In some embodiments of the present disclosure, determining the target virtual area according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determining an area in the three-dimensional environment within the visible range of the first target eye as the target virtual area.
In some embodiments of the present disclosure, determining the second target virtual image according to the target virtual area and the position of the second target eye includes: determining the visible range of the second target eye in the three-dimensional environment according to the target virtual area and the position of the second target eye; and determining the part of the second virtual image located within the visible range of the second target eye in the three-dimensional environment as the second target virtual image.
In some embodiments of the present disclosure, determining the first target virtual image according to the gaze point and the field angle of the first target eye includes: determining the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and determining the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
In some embodiments of the present disclosure, determining, according to the first target virtual image and the second target virtual image, the first gaze area of the first target eye in the image displayed on the first display screen and the second gaze area of the second target eye in the image displayed on the second display screen includes: acquiring a first corresponding area of the first target virtual image in the image displayed on the first display screen and a second corresponding area of the second target virtual image in the image displayed on the second display screen; and determining the first corresponding area as the first gaze area and the second corresponding area as the second gaze area.
Various embodiments of the present disclosure provide an apparatus for determining a gaze area, applicable to a wearable device. The wearable device includes a first display component and a second display component; the first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen. The gaze area determining apparatus includes:
an acquiring module, configured to obtain the gaze point of a first target eye on the first display screen;
a first determining module, configured to determine a target virtual area according to the gaze point and the field angle of the first target eye, the target virtual area being an area, in the three-dimensional environment presented by the wearable device, located within the visible range of the first target eye;
a second determining module, configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the virtual image, in the first virtual image formed through the first lens from the image displayed on the first display screen, that is located within the visible range of the first target eye;
a third determining module, configured to determine the target virtual area as an area, in the three-dimensional environment presented by the wearable device, located within the visible range of a second target eye, the second target eye being the one of the left eye and the right eye other than the first target eye;
a fourth determining module, configured to determine a second target virtual image according to the target virtual area and the position of the second target eye, the second target virtual image being the virtual image, in the second virtual image formed through the second lens from the image displayed on the second display screen, that is located within the visible range of the second target eye;
a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first gaze area of the first target eye in the image displayed on the first display screen and a second gaze area of the second target eye in the image displayed on the second display screen.
In some embodiments of the present disclosure, the first determining module is configured to: determine the visible range of the first target eye according to the gaze point and the field angle of the first target eye; and determine an area in the three-dimensional environment within the visible range of the first target eye as the target virtual area.
In some embodiments of the present disclosure, the fourth determining module is configured to: determine the visible range of the second target eye according to the target virtual area and the position of the second target eye; and determine the part of the second virtual image located within the visible range of the second target eye as the second target virtual image.
In some embodiments of the present disclosure, the second determining module is configured to: determine the visible range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determine the part of the first virtual image located within the visible range of the first target eye as the first target virtual image.
Various embodiments of the present disclosure provide a wearable device, including an image acquisition component, a first display component, and a second display component, where the first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen; the wearable device further includes any one of the above gaze area determining apparatuses.
Various embodiments of the present disclosure also provide an apparatus for determining a gaze area, applicable to a wearable device. The wearable device includes a first display component and a second display component; the first display component includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display component includes a second display screen and a second lens located on the light-exit side of the second display screen. The gaze area determining apparatus includes: a processor; and a memory configured to store instructions executable by the processor, where, when the instructions are executed by the processor, the processor is configured to perform any one of the above methods for determining a gaze area.
Various embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored thereon, where, when the computer program is executed by a processor, any one of the above methods for determining a gaze area is implemented.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present disclosure more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a left-eye high-definition image and a right-eye high-definition image determined by a method for determining a gaze area in the related art;
FIG. 2 is a schematic structural diagram of a wearable device according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a human eye viewing an image on a display screen through a lens according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for determining a gaze area provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of another method for determining a gaze area provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart of a method for determining a target virtual area according to the gaze point and the field angle of the first target eye provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the visible range of a first target eye according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of a method for determining a first target virtual image according to the gaze point and the field angle of the first target eye provided by an embodiment of the present disclosure;
FIG. 9 is a flowchart of a method for determining a second target virtual image according to the target virtual area and the position of the second target eye provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of determining a gaze area provided by an embodiment of the present disclosure;
FIG. 11 is a block diagram of an apparatus for determining a gaze area provided by an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of a wearable device provided by an embodiment of the present disclosure.
具体实施方式
为使本申请的技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
为了有助于理解本公开的内容,在对本公开实施例进行详细介绍之前,在此先对本公开实施例所涉及的名词进行解释。
VR技术,是一种利用可穿戴设备将人对外界的视觉甚至听觉封闭,以引导用户产生一种身在虚拟的三维环境中的感觉的技术。其显示原理是左右眼对应的显示屏分别显示供左右眼观看的图像,由于人眼存在视差,使得大脑在通过人眼获取到带有差异的图像后产生了接近真实的立体感。VR技术通常由VR系统实现,该VR系统可以包括可穿戴设备以及VR主机,其中,VR主机可以集成于可穿戴设备中,或者是能够与可穿戴设备有线或无线连接的外连设备。该VR主机用于对图像进行渲染并将渲染后的图像发送至可穿戴设备,可穿戴设备用于接收并显示该渲染后的图像。
眼动追踪(英文:Eye Tracking),也称眼球追踪,是一种通过采集人眼的人眼图像,来分析出人眼的眼球运动信息,并基于该眼球运动信息确定出人眼在显示屏上的注视点的技术。进一步的,在眼动追踪技术中,根据确定出的人眼在显示屏上的注视点,可以确定出人眼在显示屏上的注视区域。
SmartView是一个通过将VR技术与Eye Tracking技术相结合，以实现高清VR技术的技术方案。该技术方案包括：首先通过Eye Tracking技术精确追踪用户在显示屏上的注视区域，然后只对该注视区域进行高清渲染，而对其他区域进行非高清渲染，同时集成电路（英文：Integrated Circuit，简称：IC）能够将渲染的非高清图像（也称低清图像或者低清晰度图像）处理成高分辨率图像，显示在显示屏上。其中，该显示屏可以为液晶显示（英文：Liquid Crystal Display，简称：LCD）屏或者有机发光二极管（英文：Organic Light-Emitting Diode，简称：OLED）显示屏等。
Unity,也称Unity引擎,是由Unity Technologies开发的一个多平台的综合型游戏开发工具,是一个全面整合的专业游戏引擎。Unity可以用于开发VR技术。
需要说明的是，用户是否能够通过屏幕观看到高清图像主要由两方面的因素决定，一方面是屏幕本身的物理分辨率，即屏幕上像素点的个数，目前，市场上出现的主流可穿戴设备屏幕的单眼分辨率为1080*1200；另一方面是待显示图像的清晰度。只有当屏幕的分辨率和待显示图像的清晰度都足够高时，用户才能够通过屏幕观看到高清图像。其中，清晰度越高意味着VR主机需要对可穿戴设备中用于呈现三维环境的图像进行更精细化的渲染处理。
显然，若想使用户观测到更高清的图像，既需要提高屏幕的分辨率，也需要同时提高图像的清晰度，而提高图像的清晰度明显会增加VR主机的渲染压力以及该VR主机与可穿戴设备之间的图像传输所需的带宽。因此，在解决如何使单眼分辨率为4320*4320甚至更高分辨率的屏幕呈现出更高清的图像这一问题上一直存在瓶颈。而上述SmartView技术的引入则一定程度上解决了单眼高清图像在硬件传输和软件渲染方面的瓶颈。该SmartView技术结合Eye Tracking技术，既能够保证注视区域的高清需求，又降低了渲染压力和图像传输带宽。
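为直观说明上述渲染压力与传输带宽上的差异，下面给出一段简单的像素量估算草图（Python语言，仅为示意性计算；其中注视区域占画面面积的比例为示例假设，并非本公开给出的参数）：

```python
# 单眼每帧需要按高清方式渲染的像素量的粗略估算（示意性计算）
def pixels_per_eye(width, height):
    return width * height

low_res = pixels_per_eye(1080, 1200)    # 文中提到的主流可穿戴设备单眼分辨率
high_res = pixels_per_eye(4320, 4320)   # 文中提到的更高单眼分辨率

gaze_fraction = 0.1  # 示例假设：注视区域约占画面面积的10%
# SmartView思路：仅注视区域按高清渲染，其余区域按低清渲染后由IC处理成高分辨率显示
smartview_load = high_res * gaze_fraction + low_res * (1 - gaze_fraction)

print(f"全屏高清渲染约需 {high_res:,} 像素/眼/帧")
print(f"SmartView 方式约需 {smartview_load:,.0f} 像素/眼/帧")
```

由该估算可以看出，仅对注视区域进行高清渲染时，需要高清渲染和传输的像素量远小于全屏高清渲染，这也是SmartView技术能够降低渲染压力和传输带宽的原因。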
相关技术中,为了确保能够准确地确定出双眼的注视点坐标,Eye Tracking技术需要在可穿戴设备中设置两个相机,该两个相机能够分别采集左眼和右眼的人眼图像(该人眼图像也称为注视点图像等),由VR主机基于该人眼图像进行注视点坐标的计算。
但是,设置在VR系统的可穿戴设备中的两个相机大大增加了可穿戴设备的重量以及成本,不利于该VR系统的普遍推广。
并且，该技术方案并未考虑到人的视觉特点：由于左眼和右眼分别位于空间中的不同位置，使得左眼和右眼观看物体时的视角不同，如此导致同一物体在左眼视野和右眼视野中的位置有所差异，进而导致两眼所看到的图像实际上并不是完全重合的。因此，若根据左眼图像和右眼图像分别计算出左眼注视点坐标和右眼注视点坐标，该左眼注视点坐标和右眼注视点坐标实际上在显示屏中的位置并不重合，若进一步根据该左右眼的注视点坐标确定左眼注视点区域和右眼注视点区域，则左眼注视点区域和右眼注视点区域也难以完全重合。
若采用SmartView技术对不重合的左右眼的注视点区域分别进行高清渲染，则生成的左右眼的高清图像也难以完全重合。如图1所示，该图示出了对左右眼的注视点区域分别进行高清渲染后得到的左眼高清图像11和右眼高清图像12。从图1中可以看出，左眼高清图像11和右眼高清图像12仅有中间的部分区域重叠。呈现给用户的视觉感受则是用户在其左右眼的视野范围内均能看到高清图像区域13、高清图像区域14以及高清图像区域15。其中，高清图像区域13为左右眼均能看到的高清图像区域，而高清图像区域14为仅左眼能看到的高清图像区域，高清图像区域15为仅右眼能看到的高清图像区域。由于高清图像区域14和高清图像区域15仅为双眼中的某一只眼睛能够看到的高清图像区域，当用户双眼同时注视显示屏时，会影响用户的观感体验，且高清图像区域13和高清图像区域14之间、以及高清图像区域13和高清图像区域15之间会呈现出较为明显的交界线，进一步影响了用户的观感体验。
本公开的多个实施例提供了一种注视区域的确定方法，能够保证确定出的左右眼的注视区域重叠，如此使得用户的左右眼可以观看到完全重叠的高清图像，有效提高了用户体验。在对该方法进行说明之前，首先对该方法所适用的可穿戴设备进行介绍。
本公开的多个实施例提供了一种可穿戴设备。如图2所示,该可穿戴设备20可以包括第一显示组件21以及第二显示组件22,该第一显示组件21包括第一显示屏211以及位于该第一显示屏211出光侧的第一透镜212,该第二显示组件22包括第二显示屏221以及位于该第二显示屏221出光侧的第二透镜222。其中,透镜(即第一透镜212和第二透镜222)用于放大在对应的显示屏(即第一显示屏211和第二显示屏221)上显示的图像,以为用户提供更真实的沉浸感。
以第一显示组件21为例,如图3所示,人眼透过第一透镜212观测到第一显示屏211显示的图像对应的第一虚像213,该第一虚像213通常为第一显示屏211显示的图像放大后的图像。
该可穿戴设备还可以包括图像采集组件，该图像采集组件可以是眼动跟踪摄像头，该眼动跟踪摄像头集成于可穿戴设备的第一显示屏和第二显示屏中的至少一个显示屏周围，用于实时采集与所述至少一个显示屏对应的人眼图像，并将其发送至VR主机，VR主机对该人眼图像进行处理，以确定该人眼在该显示屏上的注视点坐标。注视区域的确定装置获取该注视点坐标。
该可穿戴设备还包括注视区域确定装置,该注视区域确定装置可以通过软件或硬件的方式结合于可穿戴设备中,或者结合于VR主机中,该注视区域确定装置可以配置为执行下述注视区域的确定方法。
图4示出了本公开实施例提供的一种注视区域确定方法的流程图,该方法可以包括如下步骤:
步骤S201、获取第一目标眼在第一显示屏上的注视点,第一目标眼为左眼或者右眼。
步骤S202、根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,所述目标虚拟区域为可穿戴设备呈现出的三维环境的位于第一目标眼的可视范围内的区域。
步骤S203、根据所述注视点以及所述第一目标眼的所述视场角确定第一目标虚像,所述第一目标虚像为所述第一显示屏显示的图像通过第一透镜所成的第一虚像的位于第一目标眼的可视范围内的部分。
步骤S204、将所述目标虚拟区域确定为可穿戴设备呈现出的三维环境的位于第二目标眼的可视范围内的区域,所述第二目标眼为所述左眼和所述右眼中除所述第一目标眼之外的眼睛。
步骤S205、根据所述目标虚拟区域以及所述第二目标眼的位置,确定第二目标虚像,第二目标虚像为第二显示屏显示的图像通过第二透镜所成的第二虚像的位于所述第二目标眼的可视范围内的部分。
步骤S206、根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域。
综上所述，在本公开实施例提供的注视区域确定方法中，通过所述第一目标眼在所述第一显示屏上的注视点以及所述第一目标眼的所述视场角确定所述目标虚拟区域，并将所述目标虚拟区域确定为所述第二目标眼的可视范围内的区域，以此来确定所述第二目标眼的可视范围，进而可以确定出所述第一目标眼看到的所述第一虚像以及所述第二目标眼看到的所述第二虚像，由此能够确定出所述第一目标眼在所述第一显示屏显示的所述图像中的所述第一注视区域以及所述第二目标眼在所述第二显示屏显示的所述图像中的所述第二注视区域。由于所述第一目标眼在所述第一显示屏上的所述第一注视区域和所述第二目标眼在所述第二显示屏上的所述第二注视区域由同一目标虚拟区域确定，因而第一注视区域和第二注视区域可以完全重合，有效提高了可穿戴设备中图像的显示效果，提升了用户的观感体验。
图5示出了根据本公开的另一个实施例的注视区域确定方法的流程图，所述注视区域确定方法可由注视区域确定装置执行，适用于可穿戴设备，该可穿戴设备的结构可以参考上述图2所示的可穿戴设备。所述注视区域确定方法可以包括如下步骤：
步骤S301、获取第一目标眼在第一显示屏上的注视点。
在本公开实施例中,可穿戴设备的所述第一显示屏的周围可以设置有眼动跟踪摄像头,所述眼动跟踪摄像头可以实时采集其对应的第一目标眼的人眼图像,VR主机根据所述人眼图像确定所述第一目标眼在所述第一显示屏上的注视点坐标。所述注视区域确定装置获取该注视点坐标。
步骤S302、根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,所述目标虚拟区域为可穿戴设备呈现出的三维环境的位于所述第一目标眼的可视范围内的区域。
在本公开的一些实施例中,如图6所示,根据所述注视点以及所述第一目标眼的视场角确定所述目标虚拟区域可以包括:
步骤S3021、根据所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围。
所述第一目标眼的视场角可以由水平视场角和垂直视场角组成,位于第一目标眼的水平视场角和垂直视场角内的区域为第一目标眼的可视范围。人眼实际能够达到的视场角是有限的,一般而言,人眼的水平视场角最大为188度,垂直视场角最大为150度。通常情况下,不管人眼如何转动,人眼的视场角是保持不变的,根据所述第一目标眼的注视点以及第一目标眼的水平视场角和垂直视场角,则可以确定所述第一目标眼的可视范围。当然,左眼和右眼的视场角可能存在差异,考虑到个体差异,不同人的视场角也可能不同,本公开实施例对此不做限制。
图7示意性地示出了第一目标眼O的注视点G、第一目标眼的水平视场角a、垂直视场角b以及第一目标眼的可视范围(即点O、点P、点Q、点M和点N所围成的空间区域)的示意图。
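作为示意，下面用一段Python草图说明如何根据注视方向与水平、垂直视场角计算图7中可视范围的四条边界射线方向（其中世界上方向的选取、向量的构造方式以及示例数值均为假设，并非本公开实施例的正式实现）：

```python
import numpy as np

def boundary_ray_directions(gaze_dir, fov_h_deg, fov_v_deg, world_up=(0.0, 1.0, 0.0)):
    """根据注视方向与水平/垂直视场角，计算可视范围四条边界射线的单位方向，
    大致对应图7中由点O指向点P、Q、M、N的方向（假设注视方向不与world_up平行）。"""
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    right = np.cross(np.asarray(world_up, dtype=float), gaze)
    right = right / np.linalg.norm(right)
    up = np.cross(gaze, right)
    half_h = np.tan(np.radians(fov_h_deg) / 2.0)  # 水平视场角一半对应的横向偏移
    half_v = np.tan(np.radians(fov_v_deg) / 2.0)  # 垂直视场角一半对应的纵向偏移
    dirs = []
    for sh in (-1.0, 1.0):
        for sv in (-1.0, 1.0):
            d = gaze + sh * half_h * right + sv * half_v * up
            dirs.append(d / np.linalg.norm(d))
    return dirs

# 示例：注视方向沿+z，水平视场角与垂直视场角均取90度（数值仅为演示）
rays = boundary_ray_directions((0.0, 0.0, 1.0), 90.0, 90.0)
```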
步骤S3022、将三维环境中位于所述第一目标眼的可视范围内的区域确定为目标虚拟区域。
为了给用户提供良好的沉浸感,在本公开的一些实施例中,可穿戴设备中呈现出的三维环境的场景范围通常大于人眼的可视范围。因此,本公开实施例在实际应用中,将三维环境中位于第一目标眼的可视范围内的区域确定为目标虚拟区域。当然,若可穿戴设备中呈现出的三维环境的场景范围小于人眼的可视范围,则将三维环境的场景范围的位于第一目标眼的可视范围内的区域确定为目标虚拟区域。
在本公开的一些实施例中,步骤S3022中,确定目标虚拟区域可以包括如下步骤:
步骤A1、从第一目标眼的位置(为便于说明,将第一目标眼的位置视为一个点)处发出至少两条射线,所述至少两条射线分别沿第一目标眼的视场角的边界射出。
Unity引擎可以从第一目标眼的位置处发出至少两条射线(该射线为虚拟射线),也即是,以第一目标眼的位置处为起点绘制至少两条射线,该至少两条射线可以分别沿第一目标眼的视场角的边界射出。
在根据本公开实施例的可穿戴设备中,在第一目标眼的位置处和第二目标眼的位置处分别设置有第一虚拟摄像机以及第二虚拟摄像机。用户左右眼通过可穿戴设备中的第一显示屏和第二显示屏看到的画面分别来自该第一虚拟摄像机和第二虚拟摄像机拍摄的画面。
由于第一目标眼的位置即为可穿戴设备中第一虚拟摄像机的位置,因此,在本公开实施例中,可以通过虚拟摄像机的位置来表征目标眼的位置,则Unity引擎可以从第一虚拟摄像机的位置处发出至少两条射线。
步骤A2、获取至少两条射线与虚拟区域发生接触的至少两个点,将所述至少两个点分别作为标定点。
在该至少两条射线的延伸方向上,该至少两条射线会与可穿戴设备呈现出的三维环境即虚拟区域接触产生接触点。在Unity引擎中,具有物理属性的射线与虚拟物体表面的碰撞器发生碰撞时,Unity引擎可确定出碰撞点的坐标,即虚拟物体表面的坐标。
步骤A3、将至少两个标定点在虚拟区域中所围成的区域确定为目标虚拟区域。
在本公开实施例中，可以预先确定目标虚拟区域的几何图形，按照该几何图形将该至少两个标定点进行连线，将该连线所围成的区域确定为所述目标虚拟区域。如果只有两个标定点，且这两个标定点在虚拟物体表面上的两个坐标差异均比较大，则可以确定这两个标定点分别是类似于图7中所示的OQ和ON、或者OP和OM的射线与虚拟区域发生碰撞时所产生的接触点，此时以两个标定点的连线所在的非旋转矩形所围的区域作为所述目标虚拟区域；如果这两个标定点在虚拟物体表面上的两个坐标仅有一个坐标的差异比较大，另一个坐标很接近，则可以确定产生这两个标定点的两条射线基本上处于同一个界面上，例如图7所示的OPQ、OQM、OMN或者OPN，此时可以根据所述两个标定点的坐标以及水平视场角和垂直视场角，确定所述目标虚拟区域。
当然,在本公开实施例中,也可以对该连线所围成的区域进一步进行物体识别,以提取出该所围成的区域中的有效物体,而忽略该所围成区域中的无效物体(例如天空等背景),将该有效物体所在的区域确定为所述目标虚拟区域。
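下面给出步骤A1至步骤A3的一个简化Python草图。在Unity引擎中，射线与虚拟物体表面碰撞器的碰撞通常由物理射线检测完成；此处仅以“射线与一张平面求交”示意该过程，函数名、数据组织方式以及用包围盒近似标定点所围区域的做法均为示例假设：

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """求射线与平面的交点；对应Unity中射线与虚拟物体碰撞器的碰撞点，此处以平面代替。"""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = float(np.dot(direction, n))
    if abs(denom) < 1e-8:
        return None  # 射线与平面平行，无接触点
    t = float(np.dot(np.asarray(plane_point, float) - origin, n)) / denom
    return origin + t * direction if t > 0 else None

def target_virtual_region(eye_pos, boundary_dirs, plane_point, plane_normal):
    """步骤A1~A3的示意：沿视场角边界发出射线，取与虚拟区域的接触点作为标定点，
    并用标定点的轴对齐包围盒近似标定点所围成的目标虚拟区域。"""
    calib_points = []
    for d in boundary_dirs:
        p = ray_plane_intersection(eye_pos, d, plane_point, plane_normal)
        if p is not None:
            calib_points.append(p)  # 步骤A2：记录射线与虚拟区域的接触点（标定点）
    pts = np.array(calib_points)
    return pts.min(axis=0), pts.max(axis=0)  # 步骤A3：以包围盒的两个对角点近似所围区域
```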
步骤S303、根据注视点以及第一目标眼的视场角确定第一目标虚像,第一目标虚像为第一显示屏显示的图像通过第一透镜所成的第一虚像的位于第一目标眼的可视范围内的部分。
左右眼分别通过第一透镜和第二透镜看到了第一虚像和第二虚像，当该第一虚像和第二虚像同时呈现于左右眼面前时，双眼同时分别获取该第一虚像和第二虚像，大脑中形成了具有深度的三维图像。在本公开实施例中，为了确定该第一目标虚像，需要将第一目标眼看到的第一虚像和第二目标眼看到的第二虚像重新标识。
当然,为了不影响可穿戴设备中图像的显示效果,该重新标识出的第一虚像和第二虚像可以是透明的。
在本公开的一些实施例中,如图8所示,根据注视点以及第一目标眼的视场角确定第一目标虚像可以包括:
步骤S3031、根据第一目标眼的位置、注视点以及第一目标眼的视场角确定第一目标眼的可视范围。
步骤S3031的实施可以参考上述步骤S3021的相关描述,本公开实施例在此不再赘述。
步骤S3032、将第一虚像位于第一目标眼的可视范围内的部分确定为第一目标虚像。
在本公开的一些实施例中，步骤S3032中，确定第一目标虚像可以包括如下步骤：
步骤B1、从第一目标眼的位置发出至少两条射线,至少两条射线分别沿第一目标眼的视场角的边界射出。
步骤B1可以参考上述步骤A1的相关描述,本公开实施例在此不再赘述。
步骤B1是为了将第一目标眼的可视范围通过射线的方式表征出来,以便准确确定出位于第一虚像中的第一目标虚像。
步骤B2、分别获取至少两条射线与第一虚像接触的至少两个第一接触点。
在该至少两条射线的延伸方向上,该至少两条射线会与第一虚像接触而形成至少两个第一接触点。
步骤B3、将该至少两个第一接触点所围成的区域确定为第一目标虚像。
在本公开的一些实施例中,与上述步骤A3类似,可以根据预先确定的几何图形确定第一目标虚像,也可以对该围成的区域进行物体识别,将识别出的物体确定为第一目标虚像。
步骤S304、将目标虚拟区域确定为可穿戴设备呈现出的三维环境中位于第二目标眼的视场角的范围内的区域。
通过将根据所述第一目标眼的所述注视点以及所述第一目标眼的所述视场角确定的所述目标虚拟区域确定为所述第二目标眼的视场角的范围内的区域,可以保证确定出的双眼的注视区域重合。
步骤S305、根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像,第二目标虚像为第二显示屏显示的图像通过第二透镜所成的第二虚像位于第二目标眼的视场角的范围内的部分。
在本公开的一些实施例中,如图9所示,根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像可以包括:
步骤S3051、根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标眼在该三维环境中的可视范围。
从所述第二目标眼的位置发出至少两条射线,至少两条射线分别与围成目标虚拟区域的至少两个标定点连接。则所述第二目标眼的位置以及该至少两个标定点所围成的空间区域为第二目标眼在该三维环境中的可视范围。该第二目标眼在该三维环境中的可视范围为第二目标眼的可视范围内的部分空间区域。
与上述步骤S3022类似，Unity引擎可以从第二目标眼的位置发出至少两条射线，也即是，以第二目标眼的位置为起点，以至少两个标定点为终点，绘制至少两条射线。第二目标眼的位置为可穿戴设备中第二虚拟摄像机的位置。
步骤S3052、将所述第二虚像位于第二目标眼在该三维环境中的可视范围内的部分确定为第二目标虚像。
在本公开的一些实施例中,步骤S3052中确定所述第二目标虚像可以包括如下步骤:
步骤C1、分别获取至少两条射线与第二虚像接触的至少两个第二接触点。在该至少两条射线的延伸方向上，该至少两条射线会与第二虚像接触而形成至少两个第二接触点。
步骤C2、将该至少两个第二接触点所围成的区域确定为第二目标虚像。
在本公开的一些实施例中，与上述步骤A3类似，可以根据预先确定的几何图形确定第二目标虚像。如果在步骤B3中对所述至少两个第一接触点所围成的区域进行物体识别，将识别出的物体确定为第一目标虚像，那么，在步骤C2中，同样对该至少两个第二接触点所围成的区域进行物体识别，将识别出的物体确定为第二目标虚像。
需要说明的是,为了保证左右眼所观测到的物体的一致性,在步骤B3以及步骤C2中,对围成的区域进行物体识别时应该采取相同的算法以及相同的算法参数,以确保识别出的物体一致。
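与之对应，步骤S3051与步骤C1可以概括为：从第二目标眼位置向各标定点发出射线，并取射线与第二虚像的接触点。下面是一个自包含的Python草图（示意性地假设第二虚像近似位于一张平面上，平面参数与函数名均为示例假设，并非本公开实施例的正式实现）：

```python
import numpy as np

def second_contact_points(second_eye_pos, calib_points, img_plane_point, img_plane_normal):
    """从第二目标眼位置向每个标定点发出射线，取射线与第二虚像所在平面的交点作为第二接触点。"""
    second_eye_pos = np.asarray(second_eye_pos, float)
    n = np.asarray(img_plane_normal, float)
    n = n / np.linalg.norm(n)
    contacts = []
    for s in calib_points:
        d = np.asarray(s, float) - second_eye_pos   # 射线方向：由第二目标眼指向标定点
        d = d / np.linalg.norm(d)
        denom = float(np.dot(d, n))
        if abs(denom) < 1e-8:
            continue  # 射线与虚像平面平行，无接触点
        t = float(np.dot(np.asarray(img_plane_point, float) - second_eye_pos, n)) / denom
        if t > 0:
            contacts.append(second_eye_pos + t * d)  # 步骤C1：第二接触点
    return contacts
```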
步骤S306、获取所述第一目标虚像在所述第一显示屏显示的图像中的第一对应区域,以及所述第二目标虚像在所述第二显示屏显示的图像中的第二对应区域。
分别将至少两个第一接触点以及至少两个第二接触点转化为第一显示屏所显示的图像中的至少两个第一图像点,以及第二显示屏所显示的图像中的至少两个第二图像点。
由于透镜的物理特性,用户透过透镜观看目标图像时,也即是用户在观看目标图像所呈的目标虚像时,该目标虚像相较于目标图像产生了畸变,为了避免用户看到畸变的目标图像,需要预先对该目标图像采用反畸变网格的方式进行反畸变处理。该反畸变网格中记录有虚像坐标与图像坐标的对应关系。
在本公开实施例中，该至少两个第一接触点和该至少两个第二接触点均为位于虚像中的虚像坐标，该至少两个第一图像点和至少两个第二图像点均为屏幕中显示的图像中的图像坐标（屏幕的坐标与屏幕中显示的图像坐标对应），因此，基于反畸变网格中虚像坐标与图像坐标的对应关系，可以将该至少两个第一接触点转化为第一显示屏所显示的图像中的至少两个第一图像点以及将至少两个第二接触点转化为第二显示屏所显示的图像中的至少两个第二图像点。
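基于反畸变网格把虚像坐标换算为图像坐标的过程，可以用下面的Python草图示意。其中把反畸变网格组织为“虚像坐标-图像坐标”点对数组、并用最近网格点作近似的做法均为示例假设，实际实现中通常会在网格点之间做插值：

```python
import numpy as np

def virtual_to_image_coord(virtual_xy, mesh_virtual_xy, mesh_image_xy):
    """利用反畸变网格中记录的虚像坐标与图像坐标的对应关系，
    将虚像中的接触点坐标换算为显示屏所显示图像中的图像点坐标（最近邻近似）。"""
    mesh_virtual_xy = np.asarray(mesh_virtual_xy, float)  # (N, 2)：各网格点的虚像坐标
    mesh_image_xy = np.asarray(mesh_image_xy, float)      # (N, 2)：对应的图像坐标
    dists = np.linalg.norm(mesh_virtual_xy - np.asarray(virtual_xy, float), axis=1)
    return mesh_image_xy[np.argmin(dists)]                # 取最近网格点对应的图像坐标
```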
根据至少两个第一图像点确定第一对应区域,可选的,将该至少两个第一图像点所围成的区域确定为第一对应区域,或者,可以对该围成的区域进行物体识别,将识别出的物体确定为第一对应区域;根据至少两个第二图像点确定第二对应区域,可选的,将该至少两个第二图像点所围成的区域确定为第二对应区域,或者,可以对该围成的区域进行物体识别,将识别出的物体确定为第二对应区域。
步骤S307、将第一对应区域确定为第一注视区域。
步骤S308、将第二对应区域确定为第二注视区域。
综上所述，在根据本公开实施例的注视区域确定方法中，通过所述第一目标眼在所述第一显示屏上的所述注视点以及所述第一目标眼的视场角确定所述目标虚拟区域，并将所述目标虚拟区域确定为第二目标眼的可视范围内的区域，以此来确定第二目标眼的可视范围，进而可以确定出所述第一目标眼看到的所述第一虚像以及所述第二目标眼看到的所述第二虚像，由此能够确定出所述第一目标眼在所述第一显示屏显示的图像中的所述第一注视区域以及所述第二目标眼在所述第二显示屏显示的图像中的所述第二注视区域。由于所述第一目标眼在所述第一显示屏上的所述第一注视区域和所述第二目标眼在所述第二显示屏上的所述第二注视区域由同一个目标虚拟区域确定，因而所述第一注视区域和所述第二注视区域可以完全重合，解决了相关技术中左右眼注视区域难以完全重合而导致可穿戴设备中图像的显示效果较差的问题，有效提高了可穿戴设备中图像的显示效果，提升了用户的观感体验。
进一步的，在根据本公开实施例的注视区域的确定方法中，在步骤S301中，可以通过一个眼动跟踪摄像头获取第一目标眼在显示屏上的注视点，因此，应用本公开实施例所提供的注视区域的确定方法的可穿戴设备中，可以仅设置一个眼动跟踪摄像头。相较于相关技术中需要给左右眼分别设置眼动跟踪摄像头、通过分别采集左右眼的人眼图像并分析左右眼的注视点以确定出左右眼的注视区域的方案，本公开实施例所提供的注视区域的确定方法可以有效降低可穿戴设备的重量以及成本，有利于可穿戴设备的普及推广。
需要说明的是，上述步骤的顺序可以根据实际需要进行调整，例如，步骤S307和步骤S308可以同时执行或者先执行步骤S308再执行步骤S307；再例如，步骤S303和步骤S304可以同时执行或者先执行步骤S304再执行步骤S303。
以下结合图10对上述实施例进行进一步说明。以第一目标眼为左眼为例,注视区域的确定方法包括以下步骤:
步骤S1、获取第一目标眼213在第一显示屏211上的注视点S。
步骤S2、根据注视点S确定第一目标眼213的视场角α。
在本实施例中,以视场角为水平视场角为例进行说明。
步骤S3、从第一目标眼213的位置所在点沿第一目标眼213的视场角α的边界发出两条射线,获取该两条射线与虚拟区域23发生接触的两个接触点,将所述两个接触点确定为第一标定点S1和第二标定点S2,将所述第一标定点和所述第二标定点在虚拟区域23中所围成的区域确定为目标虚拟区域。
在本实施例中,以标定点S1和标定点S2的连线之间的区域表示目标虚拟区域为例进行说明。
步骤S4、分别获取从第一目标眼213的位置所在点发出的两条射线与第一虚像214接触的第一接触点C’和第二接触点A’,根据第一接触点C’和第二接触点A’确定第一目标虚像。
在本实施例中,以第一接触点C’和第二接触点A’的连线之间的区域表示第一目标虚像为例进行说明。
步骤S5、将所述目标虚拟区域确定为可穿戴设备呈现出的三维环境中位于第二目标眼223的视场角β的范围内的区域。
步骤S6、从第二目标眼223的位置所在点发出两条射线，该两条射线分别与围成目标虚拟区域的标定点S1和标定点S2连接，分别获取该两条射线与第二虚像224接触的第三接触点D'和第四接触点B'，根据第三接触点D'和第四接触点B'确定第二目标虚像。
在本实施例中,以第三接触点D’和第四接触点B’的连线之间的区域表示第二目标虚像为例进行说明。
步骤S7、将第一接触点C’转换为第一显示屏所显示的图像中的第一图像点C,将第二接触点A’转换为第一显示屏所显示的图像中的第二图像点A,将第三接触点D’转换为第二显示屏所显示的图像中的第三图像点D,将第四接触点B’转换为第二显示屏所显示的图像中的第四图像点B,根据第一图像点C和第二图像点A确定第一注视区域,根据第三图像点D和第四图像点B,确定第二注视区域。
需要说明的是，本公开实施例在实际实现时，第一虚像214和第二虚像224是重叠的，但是为了方便对注视区域的确定方法进行说明，在图10中将第一虚像214和第二虚像224示出为不重叠。另外，对于用于表示目标虚拟区域的标定点S1和标定点S2，以及注视点S等均为示意性说明。
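为便于理解图10所示的流程，下面给出一个仅考虑水平视场角的二维Python数值示例，按照上述步骤S3、S4、S6依次求出标定点S1、S2，接触点C'、A'以及接触点D'、B'（其中双眼位置、虚像平面与虚拟区域的深度、视场角等数值均为示例假设，并非本公开给出的参数）：

```python
import numpy as np

def ray_hit_line_z(origin, direction, z_plane):
    """二维示意：求从origin沿direction发出的射线与深度为z_plane的水平线的交点的x坐标。"""
    t = (z_plane - origin[1]) / direction[1]
    return origin[0] + t * direction[0]

# 示例几何（均为假设数值）：第一个分量为水平方向x，第二个分量为深度方向
left_eye, right_eye = np.array([-0.032, 0.0]), np.array([0.032, 0.0])
z_virtual_image, z_scene = 1.0, 4.0        # 虚像平面与虚拟区域（场景）所在深度
gaze_x, fov_h = 0.3, np.radians(60.0)      # 注视点在虚像平面上的水平位置与水平视场角

# 步骤S3/S4：沿左眼（第一目标眼）视场角边界发出两条射线，
# 与虚拟区域相交得到标定点S1、S2，与第一虚像平面相交得到接触点C'、A'
center = np.arctan2(gaze_x - left_eye[0], z_virtual_image)
dirs_left = [np.array([np.sin(center + s * fov_h / 2), np.cos(center + s * fov_h / 2)])
             for s in (-1.0, 1.0)]
S1, S2 = (ray_hit_line_z(left_eye, d, z_scene) for d in dirs_left)
xC, xA = (ray_hit_line_z(left_eye, d, z_virtual_image) for d in dirs_left)

# 步骤S6：从右眼（第二目标眼）分别向S1、S2发出射线，与第二虚像平面相交得到D'、B'
dirs_right = [np.array([sx - right_eye[0], z_scene - right_eye[1]]) for sx in (S1, S2)]
xD, xB = (ray_hit_line_z(right_eye, d, z_virtual_image) for d in dirs_right)

print(f"目标虚拟区域（水平范围）: [{min(S1, S2):.2f}, {max(S1, S2):.2f}]")
print(f"第一目标虚像: [{min(xC, xA):.2f}, {max(xC, xA):.2f}]，第二目标虚像: [{min(xD, xB):.2f}, {max(xD, xB):.2f}]")
```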
图11示出了根据本公开实施例的一种注视区域确定装置30，该注视区域确定装置30可以应用于如图2所示的可穿戴设备，该注视区域确定装置30包括：
获取模块301,配置为获取第一目标眼在所述第一显示屏上的注视点,所述第一目标眼为左眼或者右眼;
第一确定模块302,配置为根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,所述目标虚拟区域为所述可穿戴设备呈现出的三维环境位于所述第一目标眼的可视范围内的区域;
第二确定模块303,配置为根据所述注视点以及所述第一目标眼的视场角确定所述第一目标虚像,所述第一目标虚像为所述第一显示屏显示的图像通过所述第一透镜所成的第一虚像位于所述第一目标眼的可视范围内的部分;
第三确定模块304,配置为将所述目标虚拟区域确定为所述可穿戴设备呈现出的三维环境位于所述第二目标眼的可视范围内的区域;
第四确定模块305,配置为根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像,所述第二目标虚像为所述第二显示屏显示的图像通过所述第二透镜所成的第二虚像位于所述第二目标眼的可视范围内的部分;
第五确定模块306,配置为根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域。
综上所述，通过第一目标眼在第一显示屏上的注视点以及第一目标眼的视场角确定目标虚拟区域，并将该目标虚拟区域确定为第二目标眼的可视范围内的区域，以此来确定第二目标眼的可视范围，进而可以确定出第一目标眼看到的第一虚像以及第二目标眼看到的第二虚像，由此能够确定出第一目标眼在第一显示屏显示的图像中的第一注视区域以及第二目标眼在第二显示屏显示的图像中的第二注视区域。由于两个目标眼在显示屏上的注视区域由同一个目标虚拟区域确定，因而第一注视区域和第二注视区域可以准确重合，解决了相关技术中左右眼注视区域难以完全重合而导致可穿戴设备中图像的显示效果较差的问题，有效提高了可穿戴设备中图像的显示效果，提升了用户的观感体验。
在本公开的一些实施例中,第一确定模块302,配置为:
根据所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;
将所述三维环境中位于所述第一目标眼的可视范围内的区域确定为所述目标虚拟区域。
在本公开的一些实施例中,第四确定模块305,配置为:
根据所述目标虚拟区域以及所述第二目标眼的位置确定所述第二目标眼的可视范围;
将所述第二虚像位于所述第二目标眼的可视范围内的部分确定为所述第二目标虚像。
在本公开的一些实施例中,第二确定模块303,配置为:
根据所述第一目标眼的位置、所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;
将所述第一虚像位于所述第一目标眼的可视范围内的部分确定为所述第一目标虚像。
图12示出了根据本公开另一个实施例的可穿戴设备20的结构示意图,该可穿戴设备20包括注视区域的确定装置24、图像采集组件23、第一显示组件21以及第二显示组件22。
注视区域的确定装置24可以为图11所示的注视区域确定装置30，图像采集组件23、第一显示组件21以及第二显示组件22可以参考前述介绍，本公开实施例在此不再赘述。
本公开的至少一个实施例还提供了一种注视区域确定装置,所述注视区域确定装置适用于可穿戴设备,所述可穿戴设备包括第一显示组件以及第二显示组件,所述第一显示组件包括第一显示屏以及位于所述第一显示屏出光侧的第一透镜,所述第二显示组件包括第二显示屏以及位于所述第二显示屏出光侧的第二透镜,所述注视区域确定装置包括:处理器;存储器,所述存储器配置为存储所述处理器可执行的指令,当所述指令被所述处理器执行时,所述处理器被配置为:
获取第一目标眼在所述第一显示屏上的注视点;
根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,所述目标虚拟区域为所述可穿戴设备呈现出的三维环境中位于所述第一目标眼的可视范围内的区域;
根据所述注视点以及所述第一目标眼的视场角确定第一目标虚像,所述第一目标虚像为所述第一显示屏显示的图像通过所述第一透镜所成的第一虚像位于所述第一目标眼的可视范围内的部分;
将所述目标虚拟区域确定为所述可穿戴设备呈现出的三维环境中位于所述第二目标眼的可视范围内的区域;
根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像,所述第二目标虚像为所述第二显示屏显示的图像通过所述第二透镜所成的第二虚像位于所述第二目标眼的可视范围内的部分;
根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域。
在本公开的一些实施例中,当所述处理器被配置为根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域时,所述处理器配置为:
根据所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;以及将所述三维环境中位于所述第一目标眼的可视范围内的区域确定为所述目标虚拟区域。
在本公开的一些实施例中，当所述处理器配置为根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像时，所述处理器配置为：根据所述目标虚拟区域以及所述第二目标眼的位置确定所述第二目标眼在所述三维环境中的可视范围；以及将所述第二虚像位于所述第二目标眼在所述三维环境中的可视范围内的部分确定为所述第二目标虚像。
在本公开的一些实施例中，当所述处理器配置为根据所述注视点以及所述第一目标眼的视场角确定第一目标虚像时，所述处理器配置为：
根据所述第一目标眼的位置、所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;以及将所述第一虚像位于所述第一目标眼的可视范围内的部分确定为所述第一目标虚像。
在本公开的一些实施例中,当所述处理器配置为根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域时,所述处理器配置为:获取所述第一目标虚像在所述第一显示屏显示的图像中的第一对应区域,以及所述第二目标虚像在所述第二显示屏显示的图像中的第二对应区域;将所述第一对应区域确定为所述第一注视区域;以及将所述第二对应区域确定为所述第二注视区域。
本公开的至少一个实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序被处理器执行时,实施上述任何一种方法。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本公开中,术语“第一”、“第二”、“第三”和“第四”仅用于描述目的,而不能理解为指示或暗示相对重要性。术语“多个”指两个或两个以上,除非另有明确的限定。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本公开的较佳实施例,并不用以限制本公开,凡在本公开的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。

Claims (12)

  1. 一种注视区域确定方法,适用于可穿戴设备,所述可穿戴设备包括第一显示组件以及第二显示组件,所述第一显示组件包括第一显示屏以及位于所述第一显示屏出光侧的第一透镜,所述第二显示组件包括第二显示屏以及位于所述第二显示屏出光侧的第二透镜,所述方法包括:
    获取第一目标眼在所述第一显示屏上的注视点;
    根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,所述目标虚拟区域为所述可穿戴设备呈现出的三维环境中位于所述第一目标眼的可视范围内的区域;
    根据所述注视点以及所述第一目标眼的视场角确定第一目标虚像,所述第一目标虚像为所述第一显示屏显示的图像通过所述第一透镜所成的第一虚像位于所述第一目标眼的可视范围内的部分;
    将所述目标虚拟区域确定为所述可穿戴设备呈现出的三维环境中位于所述第二目标眼的可视范围内的区域;
    根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像,所述第二目标虚像为所述第二显示屏显示的图像通过所述第二透镜所成的第二虚像位于所述第二目标眼的可视范围内的部分;
    根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域。
  2. 根据权利要求1所述的方法,其中,所述根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域,包括:
    根据所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;
    将所述三维环境中位于所述第一目标眼的可视范围内的区域确定为所述目标虚拟区域。
  3. 根据权利要求1或2所述的方法，其中，所述根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像，包括：
    根据所述目标虚拟区域以及所述第二目标眼的位置确定所述第二目标眼在所述三维环境中的可视范围;
    将所述第二虚像位于所述第二目标眼在所述三维环境中的可视范围内的部分确定为所述第二目标虚像。
  4. 根据权利要求1所述的方法,其中,所述根据所述注视点以及所述第一目标眼的视场角确定第一目标虚像,包括:
    根据所述第一目标眼的位置、所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;
    将所述第一虚像位于所述第一目标眼的可视范围内的部分确定为所述第一目标虚像。
  5. 根据权利要求4所述的方法,其中,所述根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域,包括:
    获取所述第一目标虚像在所述第一显示屏显示的图像中的第一对应区域,以及所述第二目标虚像在所述第二显示屏显示的图像中的第二对应区域;以及
    将所述第一对应区域确定为所述第一注视区域,并将所述第二对应区域确定为所述第二注视区域。
  6. 一种注视区域确定装置,其中,所述注视区域确定装置适用于可穿戴设备,所述可穿戴设备包括第一显示组件以及第二显示组件,所述第一显示组件包括第一显示屏以及位于所述第一显示屏出光侧的第一透镜,所述第二显示组件包括第二显示屏以及位于所述第二显示屏出光侧的第二透镜,所述注视区域确定装置包括:
    获取模块,配置为获取第一目标眼在所述第一显示屏上的注视点;
    第一确定模块，配置为根据所述注视点以及所述第一目标眼的视场角确定目标虚拟区域，所述目标虚拟区域为所述可穿戴设备呈现出的三维环境位于所述第一目标眼的可视范围内的区域；
    第二确定模块,配置为根据所述注视点以及所述第一目标眼的视场角确定第一目标虚像,所述第一目标虚像为所述第一显示屏显示的图像通过所述第一透镜所成的第一虚像中,位于所述第一目标眼的可视范围内的虚像;
    第三确定模块,配置为将所述目标虚拟区域确定为所述可穿戴设备呈现出的三维环境中,位于所述第二目标眼的可视范围内的区域,所述第二目标眼为左眼和右眼中除所述第一目标眼之外的眼睛;
    第四确定模块,配置为根据所述目标虚拟区域以及所述第二目标眼的位置确定第二目标虚像,所述第二目标虚像为所述第二显示屏显示的图像通过所述第二透镜所成的第二虚像中,位于所述第二目标眼的可视范围内的虚像;
    第五确定模块,配置为根据所述第一目标虚像和所述第二目标虚像,确定所述第一目标眼在所述第一显示屏显示的图像中的第一注视区域,以及所述第二目标眼在所述第二显示屏显示的图像中的第二注视区域。
  7. 根据权利要求6所述的注视区域确定装置,其中,所述第一确定模块配置为:
    根据所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;
    将所述三维环境中位于所述第一目标眼的可视范围内的区域确定为所述目标虚拟区域。
  8. 根据权利要求6或7所述的注视区域确定装置,其中,所述第四确定模块配置为:
    根据所述目标虚拟区域以及所述第二目标眼的位置确定所述第二目标眼的可视范围;以及
    将所述第二虚像位于所述第二目标眼的可视范围内的部分确定为所述第二目标虚像。
  9. 根据权利要求6至8中任何一项所述的注视区域确定装置,其中,所述第二确定模块配置为:
    根据所述第一目标眼的位置、所述注视点以及所述第一目标眼的视场角确定所述第一目标眼的可视范围;以及
    将所述第一虚像位于所述第一目标眼的可视范围内的部分确定为所述第一目标虚像。
  10. 一种可穿戴设备,包括:图像采集组件、第一显示组件以及第二显示组件,所述第一显示组件包括第一显示屏以及位于所述第一显示屏出光侧的第一透镜,所述第二显示组件包括第二显示屏以及位于所述第二显示屏出光侧的第二透镜;
    所述可穿戴设备还包括权利要求6至9中任何一项所述的注视区域确定装置。
  11. 一种注视区域确定装置,所述注视区域确定装置适用于可穿戴设备,所述可穿戴设备包括第一显示组件以及第二显示组件,所述第一显示组件包括第一显示屏以及位于所述第一显示屏出光侧的第一透镜,所述第二显示组件包括第二显示屏以及位于所述第二显示屏出光侧的第二透镜,所述注视区域确定装置包括:处理器;存储器,所述存储器配置为存储所述处理器可执行的指令,当所述指令被所述处理器执行时,所述处理器被配置为执行权利要求1至5中任何一项所述的方法。
  12. 一种计算机可读存储介质,其上存储有计算机程序,当所述计算机程序被处理器执行时,实施权利要求1至5中任何一项所述的方法。
PCT/CN2020/080961 2019-04-24 2020-03-24 注视区域的确定方法、装置及可穿戴设备 WO2020215960A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910333506.7 2019-04-24
CN201910333506.7A CN109901290B (zh) 2019-04-24 2019-04-24 注视区域的确定方法、装置及可穿戴设备

Publications (1)

Publication Number Publication Date
WO2020215960A1 true WO2020215960A1 (zh) 2020-10-29

Family

ID=66956250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080961 WO2020215960A1 (zh) 2019-04-24 2020-03-24 注视区域的确定方法、装置及可穿戴设备

Country Status (2)

Country Link
CN (1) CN109901290B (zh)
WO (1) WO2020215960A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160933A1 (en) * 2021-01-26 2022-08-04 Huawei Technologies Co.,Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109901290B (zh) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 注视区域的确定方法、装置及可穿戴设备
CN110347265A (zh) * 2019-07-22 2019-10-18 北京七鑫易维科技有限公司 渲染图像的方法及装置
CN113467619B (zh) * 2021-07-21 2023-07-14 腾讯科技(深圳)有限公司 画面显示方法、装置和存储介质及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105432078A (zh) * 2013-02-19 2016-03-23 瑞尔D股份有限公司 双目注视成像方法和设备
JP2017107359A (ja) * 2015-12-09 2017-06-15 Kddi株式会社 眼鏡状の光学シースルー型の両眼のディスプレイにオブジェクトを表示する画像表示装置、プログラム及び方法
US20170358136A1 (en) * 2016-06-10 2017-12-14 Oculus Vr, Llc Focus adjusting virtual reality headset
CN107797280A (zh) * 2016-08-31 2018-03-13 乐金显示有限公司 个人沉浸式显示装置及其驱动方法
CN108369744A (zh) * 2018-02-12 2018-08-03 香港应用科技研究院有限公司 通过双目单应性映射的3d注视点检测
CN109087260A (zh) * 2018-08-01 2018-12-25 北京七鑫易维信息技术有限公司 一种图像处理方法及装置
CN109901290A (zh) * 2019-04-24 2019-06-18 京东方科技集团股份有限公司 注视区域的确定方法、装置及可穿戴设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9398229B2 (en) * 2012-06-18 2016-07-19 Microsoft Technology Licensing, Llc Selective illumination of a region within a field of view
US9766459B2 (en) * 2014-04-25 2017-09-19 Microsoft Technology Licensing, Llc Display devices with dimming panels
CN105425399B (zh) * 2016-01-15 2017-11-28 中意工业设计(湖南)有限责任公司 一种根据人眼视觉特点的头戴设备用户界面呈现方法
US20190018485A1 (en) * 2017-07-17 2019-01-17 Thalmic Labs Inc. Dynamic calibration systems and methods for wearable heads-up displays
CN109031667B (zh) * 2018-09-01 2020-11-03 哈尔滨工程大学 一种虚拟现实眼镜图像显示区域横向边界定位方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160933A1 (en) * 2021-01-26 2022-08-04 Huawei Technologies Co.,Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions
US11474598B2 (en) 2021-01-26 2022-10-18 Huawei Technologies Co., Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions

Also Published As

Publication number Publication date
CN109901290B (zh) 2021-05-14
CN109901290A (zh) 2019-06-18

Similar Documents

Publication Publication Date Title
JP6759371B2 (ja) 3dプレノプティックビデオ画像を作成するためのシステムおよび方法
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
WO2020215960A1 (zh) 注视区域的确定方法、装置及可穿戴设备
CN109074681B (zh) 信息处理装置、信息处理方法和程序
CN107376349B (zh) 被遮挡的虚拟图像显示
JP6860488B2 (ja) 複合現実システム
CN106327584B (zh) 一种用于虚拟现实设备的图像处理方法及装置
US10715791B2 (en) Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes
US9076033B1 (en) Hand-triggered head-mounted photography
WO2016091030A1 (zh) 透过式增强现实近眼显示器
WO2018076202A1 (zh) 能够进行人眼追踪的头戴式可视设备及人眼追踪方法
US20170076475A1 (en) Display Control Method and Display Control Apparatus
US9123171B1 (en) Enhancing the coupled zone of a stereoscopic display
KR101788452B1 (ko) 시선 인식을 이용하는 콘텐츠 재생 장치 및 방법
WO2019041614A1 (zh) 沉浸式虚拟现实头戴显示装置和沉浸式虚拟现实显示方法
US11749141B2 (en) Information processing apparatus, information processing method, and recording medium
CN103517060A (zh) 一种终端设备的显示控制方法及装置
JP2023515205A (ja) 表示方法、装置、端末機器及びコンピュータプログラム
CN107065164B (zh) 图像展示方法及装置
CN114371779B (zh) 一种视线深度引导的视觉增强方法
JP2018088604A (ja) 画像表示装置、画像表示方法、システム
US10083675B2 (en) Display control method and display control apparatus
CN114581514A (zh) 一种双眼注视点的确定方法和电子设备
US20230214011A1 (en) Method and system for determining a current gaze direction
US20230239456A1 (en) Display system with machine learning (ml) based stereoscopic view synthesis over a wide field of view

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20796260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20796260

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20796260

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/05/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20796260

Country of ref document: EP

Kind code of ref document: A1