CN109901290A - Method and apparatus for determining a gaze region, and wearable device - Google Patents

Method and apparatus for determining a gaze region, and wearable device

Info

Publication number
CN109901290A
Authority
CN
China
Prior art keywords
eye
image
target
display screen
virtual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910333506.7A
Other languages
Chinese (zh)
Other versions
CN109901290B (en)
Inventor
李文宇
苗京花
孙玉坤
王雪丰
彭金豹
李治富
赵斌
李茜
范清文
索健文
刘亚丽
栗可
陈丽莉
张浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd and Beijing BOE Optoelectronics Technology Co Ltd
Priority to CN201910333506.7A
Publication of CN109901290A
Priority to PCT/CN2020/080961 (WO2020215960A1)
Application granted
Publication of CN109901290B
Legal status: Active

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and apparatus for determining a gaze region, and a wearable device, belonging to the field of electronic technology applications. The method determines a target virtual region according to the gaze point of a first target eye currently on a first display screen and the field-of-view angle of the first target eye, then takes that target virtual region as the region lying within the visible range of a second target eye, thereby fixing the visible range of the second target eye. From this, the first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can be determined, and hence a first gaze region of the first target eye in the image shown on the first display screen and a second gaze region of the second target eye in the image shown on the second display screen. In this way the first gaze region and the second gaze region can coincide accurately, which solves the problem in the related art that the display effect of images in a wearable device is poor and effectively improves the display effect of images in the wearable device.

Description

Method and apparatus for determining a gaze region, and wearable device
Technical field
The present invention relates to the field of electronic technology applications, and in particular to a method and apparatus for determining a gaze region, and to a wearable device.
Background art
Virtual reality (VR) technology has been much favored by the market in recent years. VR technology can construct a three-dimensional environment (i.e., a virtual scene) and use it to provide a sense of immersion for the user.
Currently, users demand ever higher clarity from the images used to render this three-dimensional environment. To avoid the transmission pressure of high-definition images, a wearable device using VR technology can render, in a targeted way, only the part of the image shown on its display screen that the user is gazing at as a high-definition image, and render the other parts as non-high-definition images. The related art provides a method for determining a gaze region, used to determine the part of the image the user is gazing at; in this method, the left-eye gaze region and the right-eye gaze region of the user are determined separately according to the gaze-point information of the user's left eye and the gaze-point information of the right eye.
However, because the same object occupies different positions in the fields of view of the left and right eyes, the left-eye gaze region and the right-eye gaze region determined separately in the related art can hardly coincide exactly, so the high-definition images determined separately for the left and right eyes based on those gaze regions can hardly coincide exactly either, which affects the display effect of images in the wearable device.
Summary of the invention
The embodiments of the present invention provide a method and apparatus for determining a gaze region, and a wearable device, which can solve the technical problem in the related art that the display effect of images in a wearable device is poor. The technical solution is as follows:
In a first aspect, a method for determining a gaze region is provided. The method is applied to a wearable device, the wearable device includes a first display assembly and a second display assembly, the first display assembly includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display assembly includes a second display screen and a second lens located on the light-exit side of the second display screen. The method includes:
obtaining the gaze point of a first target eye currently on the first display screen, the first target eye being the left eye or the right eye;
determining a target virtual region according to the gaze point and the field-of-view angle of the first target eye, the target virtual region being the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of the first target eye;
determining a first target virtual image according to the gaze point and the field-of-view angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image currently shown on the first display screen, that lies within the visible range of the first target eye;
taking the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye;
determining a second target virtual image according to the target virtual region and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image currently shown on the second display screen, that lies within the visible range of the second target eye; and
determining, according to the first target virtual image and the second target virtual image, a first gaze region of the first target eye in the image shown on the first display screen and a second gaze region of the second target eye in the image shown on the second display screen.
Optionally, determining the target virtual region according to the gaze point and the field-of-view angle of the first target eye includes:
determining the current visible range of the first target eye according to the gaze point and the field-of-view angle of the first target eye; and
determining the region of the three-dimensional environment that lies within the current visible range of the first target eye as the target virtual region.
Optionally, determining the second target virtual image according to the target virtual region and the position of the second target eye includes:
determining the current visible range of the second target eye according to the target virtual region and the position of the second target eye; and
determining the part of the second virtual image that lies within the current visible range of the second target eye as the second target virtual image.
Optionally, determining the first target virtual image according to the gaze point and the field-of-view angle of the first target eye includes:
determining the current visible range of the first target eye according to the position of the first target eye, the gaze point and the field-of-view angle of the first target eye; and
determining the part of the first virtual image that lies within the current visible range of the first target eye as the first target virtual image.
Optionally, determining, according to the first target virtual image and the second target virtual image, the first gaze region of the first target eye in the image shown on the first display screen and the second gaze region of the second target eye in the image shown on the second display screen includes:
obtaining a first corresponding region of the first target virtual image in the image shown on the first display screen, and a second corresponding region of the second target virtual image in the image shown on the second display screen;
determining the first corresponding region as the first gaze region; and
determining the second corresponding region as the second gaze region.
In a second aspect, an apparatus for determining a gaze region is provided. The apparatus is applied to a wearable device, the wearable device includes a first display assembly and a second display assembly, the first display assembly includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display assembly includes a second display screen and a second lens located on the light-exit side of the second display screen. The apparatus includes:
an obtaining module, configured to obtain the gaze point of a first target eye currently on the first display screen, the first target eye being the left eye or the right eye;
a first determining module, configured to determine a target virtual region according to the gaze point and the field-of-view angle of the first target eye, the target virtual region being the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of the first target eye;
a second determining module, configured to determine a first target virtual image according to the gaze point and the field-of-view angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image currently shown on the first display screen, that lies within the visible range of the first target eye;
a third determining module, configured to take the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye;
a fourth determining module, configured to determine a second target virtual image according to the target virtual region and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image currently shown on the second display screen, that lies within the visible range of the second target eye; and
a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first gaze region of the first target eye in the image shown on the first display screen and a second gaze region of the second target eye in the image shown on the second display screen.
Optionally, the first determining module is configured to:
determine the current visible range of the first target eye according to the gaze point and the field-of-view angle of the first target eye; and
determine the region of the three-dimensional environment that lies within the current visible range of the first target eye as the target virtual region.
Optionally, the fourth determining module is configured to:
determine the current visible range of the second target eye according to the target virtual region and the position of the second target eye; and
determine the part of the second virtual image that lies within the current visible range of the second target eye as the second target virtual image.
Optionally, the second determining module is configured to:
determine the current visible range of the first target eye according to the position of the first target eye, the gaze point and the field-of-view angle of the first target eye; and
determine the part of the first virtual image that lies within the current visible range of the first target eye as the first target virtual image.
In a third aspect, a wearable device is provided. The wearable device includes: an apparatus for determining a gaze region, an image acquisition assembly, a first display assembly and a second display assembly;
wherein the first display assembly includes a first display screen and a first lens located on the light-exit side of the first display screen, and the second display assembly includes a second display screen and a second lens located on the light-exit side of the second display screen; and
the apparatus for determining a gaze region is the apparatus described in the second aspect.
The technical solutions provided by the embodiments of the present invention bring the following benefits:
A target virtual region is determined from the gaze point of the first target eye currently on the first display screen and the field-of-view angle of the first target eye, and that same target virtual region is taken as the region lying within the visible range of the second target eye, thereby fixing the second target eye's visible range. From this, the first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can be determined, and hence the first gaze region of the first target eye in the image shown on the first display screen and the second gaze region of the second target eye in the image shown on the second display screen. Because the gaze regions of the two target eyes on their display screens are determined by the same target virtual region, the first gaze region and the second gaze region can coincide accurately. This solves the problem in the related art that the left-eye and right-eye gaze regions can hardly coincide exactly, which makes the display effect of images in a wearable device poor; it effectively improves the display effect of images in the wearable device and improves the user's viewing experience.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the left-eye and right-eye high-definition images determined with the gaze-region determination method of the related art;
Fig. 2 is a schematic structural diagram of a wearable device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a human eye viewing the image on a display screen through a lens according to an embodiment of the present invention;
Fig. 4 is a flowchart of a method for determining a gaze region according to an embodiment of the present invention;
Fig. 5 is a flowchart of another method for determining a gaze region according to an embodiment of the present invention;
Fig. 6 is a flowchart of a method for determining a target virtual region according to the gaze point and the field-of-view angle of the first target eye, according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the current visible range of the first target eye according to an embodiment of the present invention;
Fig. 8 is a flowchart of a method for determining the first target virtual image according to the gaze point and the field-of-view angle of the first target eye, according to an embodiment of the present invention;
Fig. 9 is a flowchart of a method for determining the second target virtual image according to the target virtual region and the position of the second target eye, according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of determining gaze regions according to an embodiment of the present invention;
Fig. 11 is a block diagram of an apparatus for determining a gaze region according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a wearable device according to an embodiment of the present invention.
Detailed description of embodiments
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
To help the reader, before the embodiments of the present invention are described in detail, the terms involved in the embodiments are explained first:
VR technology is a technology that uses a wearable device to close off a person's vision, and even hearing, from the outside world, guiding the user to feel present in a virtual three-dimensional environment. Its display principle is that the display screens corresponding to the left and right eyes show the images for left-eye and right-eye viewing respectively; because of the parallax between the human eyes, the brain, after acquiring these slightly different images, produces a near-real stereoscopic sense. VR technology is usually realized by a VR system, which may include the wearable device and a VR host. The VR host may be integrated into the wearable device, or may be an external device connected to the wearable device by wire or wirelessly. The VR host renders images and sends the rendered images to the wearable device, and the wearable device receives and displays them.
Eye tracking (also called eyeball tracking) is a technique that acquires eye images of the human eye, analyzes the eyeball movement information from them, and determines, based on that information, the point on the display screen the eye is currently gazing at. Further, in eye tracking, the gaze region of the eye on the display screen can be determined from the eye's determined gaze point on the display screen.
SmartView is a technical solution that realizes high-definition VR by combining VR technology with eye tracking. The solution first tracks the user's gaze region on the display screen accurately through eye tracking, then performs high-definition rendering only on that gaze region and non-high-definition rendering on the other regions, while an integrated circuit (IC) scales the rendered non-high-definition image (also called the low-definition image) up to a high-resolution image for display on the display screen. The display screen may be a liquid crystal display (LCD) screen, an organic light-emitting diode (OLED) display screen, or the like.
Unity, also called the Unity engine, is a multi-platform, comprehensive game development tool developed by Unity Technologies; it is a fully integrated professional game engine. Unity can be used to develop VR applications.
It should be noted that whether a user can see a high-definition image through a screen is determined mainly by two factors. One is the physical resolution of the screen itself, i.e., the number of pixels on the screen; at present, the single-eye resolution of mainstream wearable-device screens on the market is 1080 x 1200. The other is the clarity of the image to be displayed. Only when the screen resolution is high and the clarity of the image to be displayed is also high can the user see a high-definition image through the screen. Higher clarity means that the VR host must perform more refined rendering on the images used to render the three-dimensional environment in the wearable device.
That is, to let the user observe a higher-definition image, both the screen resolution and the image clarity must be raised, and raising the image clarity obviously increases the rendering pressure of the VR host and the bandwidth needed for image transmission between the VR host and the wearable device. Hence there has always been a bottleneck in making screens with single-eye resolutions of 4320 x 4320 or even higher display higher-definition images. The introduction of the SmartView technique described above resolves, to a certain extent, the bottleneck that single-eye high-definition images face in hardware transmission and software rendering: combined with eye tracking, it can guarantee the high-definition demand of the gaze region while reducing rendering pressure and image-transmission bandwidth, as illustrated by the sketch below.
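The rendering-and-composition split that SmartView describes can be made concrete with a small sketch: the gaze region is rendered at full definition, the rest of the frame at low definition, and the displayed frame pastes the high-definition inset over the upscaled background. The following Python snippet is a minimal illustration under those assumptions; the resolutions, scale factor and gaze coordinates are invented for the example and are not taken from the patent.

```python
import numpy as np

def compose_foveated_frame(hi_res_inset, lo_res_frame, gaze_xy, scale=4):
    """Upscale the low-definition frame, then paste the high-definition
    inset over the gaze region (nearest-neighbour upscaling for brevity)."""
    up = lo_res_frame.repeat(scale, axis=0).repeat(scale, axis=1)
    h, w = hi_res_inset.shape[:2]
    x, y = gaze_xy  # top-left corner of the gaze region, in full-res pixels
    up[y:y + h, x:x + w] = hi_res_inset
    return up

# Example: a 1080x1200 eye buffer rendered at quarter resolution, with a
# 300x300 high-definition inset around the gaze point.
lo = np.zeros((300, 270, 3), dtype=np.uint8)      # 1200x1080 divided by 4
hi = np.full((300, 300, 3), 255, dtype=np.uint8)
frame = compose_foveated_frame(hi, lo, gaze_xy=(390, 450))
```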
In the related art, to ensure that the gaze-point coordinates of both eyes are determined accurately, eye tracking requires two cameras in the wearable device, which acquire the eye images of the left eye and the right eye respectively (eye images are also called gaze-point images, etc.); the VR host then computes the gaze-point coordinates from these eye images.
However, the two cameras in that solution considerably increase the weight and cost of the wearable device in the VR system, which is unfavorable to the popularization of the VR system.
Moreover, that solution ignores a visual characteristic of people: because the left and right eyes occupy different positions in space, they see an object from different visual angles, so the same object sits at different positions in the two eyes' fields of view, and the two images actually seen do not coincide exactly. Therefore, if the gaze-point coordinates of the left and right eyes are computed separately from their respective eye images, the positions of those gaze points on the display screens do not coincide, and gaze regions of the left and right eyes further determined from those gaze-point coordinates can hardly coincide either.
When the SmartView technique performs high-definition rendering separately on these non-coincident gaze regions of the left and right eyes, the generated left-eye and right-eye high-definition images also fail to coincide. Fig. 1 shows the left-eye high-definition image 11 and the right-eye high-definition image 12 obtained after high-definition rendering is performed separately on the gaze regions of the left and right eyes. It can be seen from Fig. 1 that the left-eye high-definition image 11 and the right-eye high-definition image 12 overlap only in a middle portion. The visual experience presented to the user is that, within the field of view of the two eyes, there are a high-definition image region 13, a high-definition image region 14 and a high-definition image region 15, where region 13 can be seen by both eyes, region 14 only by the left eye, and region 15 only by the right eye. Because regions 14 and 15 can each be seen by only one of the two eyes, they impair the user's viewing experience when both eyes gaze at the display screens, and a fairly obvious boundary line appears between the two image regions, impairing the viewing experience further.
An embodiment of the present invention provides a method for determining a gaze region that guarantees that the determined gaze regions of the left and right eyes coincide, so that the user can watch completely overlapping high-definition images, effectively improving the user experience. Before the method is introduced, the wearable device to which it is applied is introduced first.
An embodiment of the present invention provides a wearable device. As shown in Fig. 2, the wearable device 20 may include a first display assembly 21 and a second display assembly 22. The first display assembly 21 includes a first display screen 211 and a first lens 212 located on the light-exit side of the first display screen 211, and the second display assembly 22 includes a second display screen 221 and a second lens 222 located on the light-exit side of the second display screen 221. The lenses (i.e., the first lens 212 and the second lens 222) magnify the images shown on the corresponding display screens (i.e., the first display screen 211 and the second display screen 221) to provide a more realistic sense of immersion for the user.
Taking the first display assembly 21 as an example, as shown in Fig. 3, through the first lens 212 the human eye observes the first virtual image 213 corresponding to the image currently shown on the first display screen 211; the first virtual image 213 is usually a magnified version of that image.
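The patent does not state a lens model, but the magnified virtual image of Fig. 3 is what an ideal thin lens produces when the display sits inside the focal length. As a sketch under that assumption (real-is-positive sign convention, object distance d_o, image distance d_i, focal length f):

```latex
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad m = -\frac{d_i}{d_o}
```

With d_o < f the image distance d_i comes out negative, i.e., the image is virtual, upright and on the same side of the lens as the screen, with magnification m > 1, matching the magnified first virtual image 213 described above.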
In addition, the wearable device may further include an image acquisition assembly, which may be an eye-tracking camera. The eye-tracking camera is integrated around at least one of the first display screen and the second display screen of the wearable device, acquires in real time the eye image corresponding to that display screen, and sends it to the VR host; the VR host processes the eye image to determine the gaze-point coordinates of the eye on that display screen. The apparatus for determining a gaze region obtains these gaze-point coordinates.
The wearable device further includes the apparatus for determining a gaze region. The apparatus may be incorporated into the wearable device in the form of software or hardware, or incorporated into the VR host, and may be configured to perform the following methods for determining a gaze region.
Fig. 4 shows a flowchart of a method for determining a gaze region according to an embodiment of the present invention. The method may include the following steps:
Step 201: obtain the gaze point of a first target eye currently on the first display screen, the first target eye being the left eye or the right eye.
Step 202: determine a target virtual region according to the gaze point and the field-of-view angle of the first target eye, the target virtual region being the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of the first target eye.
Step 203: determine a first target virtual image according to the gaze point and the field-of-view angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image currently shown on the first display screen, that lies within the visible range of the first target eye.
Step 204: take the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye.
Step 205: determine a second target virtual image according to the target virtual region and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image currently shown on the second display screen, that lies within the visible range of the second target eye.
Step 206: determine, according to the first target virtual image and the second target virtual image, a first gaze region of the first target eye in the image shown on the first display screen and a second gaze region of the second target eye in the image shown on the second display screen.
In summary, in the method for determining a gaze region provided by this embodiment of the present invention, a target virtual region is determined from the gaze point of the first target eye currently on the first display screen and the field-of-view angle of the first target eye, and that same region is taken as the region lying within the visible range of the second target eye. From this, the first virtual image currently seen by the first target eye and the second virtual image currently seen by the second target eye can be determined, and hence the first gaze region of the first target eye in the image shown on the first display screen and the second gaze region of the second target eye in the image shown on the second display screen. Because the gaze regions of the two target eyes on their display screens are determined by the same target virtual region, the first gaze region and the second gaze region can coincide accurately, which solves the problem in the related art that the left-eye and right-eye gaze regions can hardly coincide exactly and the display effect of images in a wearable device is therefore poor, effectively improves the display effect of images in the wearable device, and improves the user's viewing experience. The steps are outlined schematically below.
Fig. 5 shows a flowchart of another method for determining a gaze region according to an embodiment of the present invention. This method can likewise be performed by the apparatus for determining a gaze region and is applied to a wearable device; the structure of the wearable device can be as shown in Fig. 2 above. The method may include the following steps:
Step 301: obtain the gaze point of a first target eye currently on the first display screen, the first target eye being the left eye or the right eye.
In this embodiment of the present invention, an eye-tracking camera may be arranged around the first display screen of the wearable device. The eye-tracking camera acquires the eye image of its corresponding eye in real time, and the VR host determines the gaze-point coordinates of that eye on the first display screen from the eye image. The apparatus for determining a gaze region obtains the gaze-point coordinates.
Step 302: determine a target virtual region according to the gaze point and the field-of-view angle of the first target eye, the target virtual region being the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of the first target eye.
Optionally, as shown in Fig. 6, the process of determining the target virtual region according to the gaze point and the field-of-view angle of the first target eye may include:
Step 3021: determine the current visible range of the first target eye according to the gaze point and the field-of-view angle of the first target eye.
The field-of-view angle of the first target eye can consist of a horizontal field-of-view angle and a vertical field-of-view angle, and the region lying within the eye's horizontal and vertical field-of-view angles is the visible range of the first target eye. The field-of-view angle the human eye can actually reach is limited: in general, the horizontal field-of-view angle of the human eye is at most about 188 degrees and the vertical field-of-view angle at most about 150 degrees. Under normal circumstances the field-of-view angle of an eye remains constant no matter how the eye rotates, so the eye's current visible range can be determined from its current gaze point together with its horizontal and vertical field-of-view angles. Of course, the field-of-view angles of the left and right eyes may differ, and they may also differ between individuals; the embodiments of the present invention do not limit this.
Fig. 7 schematically shows the current gaze point G of the first target eye O, the horizontal field-of-view angle a and the vertical field-of-view angle b of the first target eye, and the current visible range of the first target eye (i.e., the spatial region enclosed by point O, point P, point Q, point M and point N).
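As one concrete reading of Fig. 7, the current visible range can be modeled as a frustum whose axis is the ray from the eye position O through the gaze point G, bounded by the half-angles a/2 and b/2. The Python sketch below tests whether a scene point falls inside such a frustum; the world-up vector and the use of exact half-angle bounds are assumptions made for the illustration.

```python
import numpy as np

def in_visible_range(eye_pos, gaze_point, point, h_fov_deg, v_fov_deg):
    """Return True if `point` lies inside the frustum centred on the
    gaze direction (all positions are 3-vectors in world coordinates)."""
    fwd = gaze_point - eye_pos
    fwd = fwd / np.linalg.norm(fwd)
    up = np.array([0.0, 1.0, 0.0])            # assumed world-up direction
    side = np.cross(fwd, up)
    side = side / np.linalg.norm(side)
    up = np.cross(side, fwd)                  # re-orthogonalised up
    d = point - eye_pos
    x, y, z = d @ side, d @ up, d @ fwd       # eye-space coordinates
    if z <= 0:
        return False                          # behind the eye
    return (abs(np.degrees(np.arctan2(x, z))) <= h_fov_deg / 2 and
            abs(np.degrees(np.arctan2(y, z))) <= v_fov_deg / 2)
```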
Step 3022: determine the region of the three-dimensional environment that lies within the current visible range of the first target eye as the target virtual region.
To provide the user with a good sense of immersion, in practice the scene extent of the three-dimensional environment presented in the wearable device should be larger than the visible range of the human eye. Therefore, in practical applications of this embodiment of the present invention, the region of the three-dimensional environment that lies within the current visible range of the first target eye is determined as the target virtual region.
Of course, if the scene extent of the three-dimensional environment presented in the wearable device is smaller than the visible range of the human eye, the region of the scene extent of the three-dimensional environment that lies within the current visible range of the first target eye is determined as the target virtual region.
Optionally, in step 3022, the process of determining the target virtual region may include the following steps:
Step A1: emit at least two rays from the position of the first target eye, the at least two rays being cast along the boundary of the field-of-view angle of the first target eye.
The Unity engine can emit at least two rays (virtual rays) from the position of the first target eye; that is to say, at least two rays are drawn with the position of the first target eye as their starting point, and these rays can be cast along the boundary of the field-of-view angle of the first target eye.
In the wearable device provided by this embodiment of the present invention, a first virtual camera and a second virtual camera are arranged at the position of the first target eye and the position of the second target eye respectively. The pictures the user's left and right eyes see through the first display screen and the second display screen of the wearable device come from the pictures captured by the first virtual camera and the second virtual camera respectively.
Since the position of the first target eye is the position of the first virtual camera in the wearable device, in practical applications of this embodiment of the present invention the position of a target eye can be represented by the position of the corresponding virtual camera, and the Unity engine can emit the at least two rays from the position of the first virtual camera.
Step A2: obtain at least two calibration points where the at least two rays come into contact with the virtual region.
Along their extension directions, the at least two rays contact the three-dimensional environment presented by the wearable device, i.e., the virtual region, and form at least two calibration points. In the Unity engine, when a ray with physical attributes collides with the collider on the surface of a virtual object, the engine can recognize the coordinates of the collision point, i.e., coordinates on the surface of the virtual object.
Step A3: determine the region enclosed by the at least two calibration points in the virtual region as the target virtual region.
In this embodiment of the present invention, the geometry of the target region may be predetermined; the at least two calibration points are connected according to that geometry, and the enclosed region is determined as the target virtual region.
Of course, in this embodiment of the present invention, object recognition may further be performed on the enclosed region to extract the valid objects in it while ignoring the invalid objects within it (backgrounds such as the sky), and the region where the valid objects are located is determined as the target virtual region.
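Steps A1 to A3 amount to casting rays along the field-of-view boundary and reading off where they first hit scene geometry, which is what a physics-engine raycast does against a collider. The sketch below reproduces the idea in plain vector math, with a single plane standing in for the virtual region; the plane placement and the field-of-view value are placeholders, not the patent's scene.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where the ray first meets the plane, or None if the
    ray is parallel to it or points away (an analogue of a raycast hit)."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None

# Steps A1/A2: two rays along the horizontal field-of-view boundary of the
# first target eye, intersected with a virtual region standing 3 m in front
# of the eye (the plane and the 100-degree angle are placeholders).
eye = np.array([0.0, 0.0, 0.0])
half = np.radians(100.0) / 2
boundary_dirs = [np.array([s * np.sin(half), 0.0, np.cos(half)]) for s in (-1, 1)]
calibration_pts = [ray_plane_intersection(eye, d,
                                          np.array([0.0, 0.0, 3.0]),
                                          np.array([0.0, 0.0, -1.0]))
                   for d in boundary_dirs]
# Step A3: the part of the virtual region enclosed by these calibration
# points is taken as the target virtual region.
```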
Step 303: determine a first target virtual image according to the gaze point and the field-of-view angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image currently shown on the first display screen, that lies within the visible range of the first target eye.
The left and right eyes see the first virtual image and the second virtual image through the first lens and the second lens respectively; when the first virtual image and the second virtual image are presented to the left and right eyes at the same time, the two eyes acquire them simultaneously and a three-dimensional image with depth forms in the brain. In this embodiment of the present invention, in order to determine the first target virtual image, the first virtual image seen by the first target eye and the second virtual image seen by the second target eye need to be identified again.
Of course, in order not to affect the display effect of images in the wearable device, the re-identified first virtual image and second virtual image can be transparent.
Optionally, as shown in Fig. 8, the process of determining the first target virtual image according to the gaze point and the field-of-view angle of the first target eye may include:
Step 3031: determine the current visible range of the first target eye according to the position of the first target eye, the gaze point and the field-of-view angle of the first target eye.
The process of step 3031 can refer to the related description of step 3021 above and is not repeated here.
Step 3032: determine the part of the first virtual image that lies within the current visible range of the first target eye as the first target virtual image.
Optionally, in step 3032, the process of determining the first target virtual image may include the following steps:
Step B1: emit at least two rays from the position of the first target eye, the at least two rays being cast along the boundary of the field-of-view angle of the first target eye.
The process of step B1 can refer to the related description of step A1 above and is not repeated here.
The purpose of step B1 is to characterize the current visible range of the first target eye by means of rays, so that the first target virtual image within the first virtual image can be determined accurately.
Step B2: obtain at least two first contact points where the at least two rays contact the first virtual image.
Along their extension directions, the at least two rays can contact the first virtual image and form the at least two first contact points.
Step B3: determine the region enclosed by the at least two first contact points as the first target virtual image.
Optionally, similarly to step A3 above, in this embodiment of the present invention the first target virtual image can be determined according to a predetermined geometry, or object recognition can be performed on the enclosed region and the recognized objects determined as the first target virtual image.
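Steps B1 to B3 reuse the boundary rays but intersect them with the first virtual image rather than with the scene, and then enclose the resulting contact points. A minimal sketch of the enclosing step is below; the axis-aligned rectangle is our simplification, since the patent allows any predetermined geometry or object recognition instead.

```python
import numpy as np

def enclosing_rect(contact_points):
    """Axis-aligned rectangle enclosing the contact points on the
    virtual-image plane (x/y only, since the plane fixes z)."""
    pts = np.asarray(contact_points)[:, :2]
    return pts.min(axis=0), pts.max(axis=0)   # (x_min, y_min), (x_max, y_max)
```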
Step 304: take the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the range of the field-of-view angle of the second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye.
By taking the target virtual region, determined from the gaze point and the field-of-view angle of the first target eye, as the region within the range of the field-of-view angle of the second target eye, it can be ensured that the determined gaze regions of the two eyes coincide.
Step 305: determine a second target virtual image according to the target virtual region and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image currently shown on the second display screen, that lies within the range of the field-of-view angle of the second target eye.
Optionally, as shown in Fig. 9, the process of determining the second target virtual image according to the target virtual region and the position of the second target eye may include:
Step 3051: determine the current visible range of the second target eye within the three-dimensional environment according to the target virtual region and the position of the second target eye.
At least two rays are emitted from the position of the second target eye and connect respectively with the at least two calibration points enclosing the target virtual region. The spatial region enclosed by the position of the second target eye and the at least two calibration points is then the current visible range of the second target eye within the three-dimensional environment; it is a partial spatial region of the second target eye's overall current visible range.
Similarly to step 3022 above, the Unity engine can make the position of the second target eye emit the at least two rays; that is to say, the at least two rays are drawn with the position of the second target eye as starting point and the at least two calibration points as end points. The position of the second target eye is the position of the second virtual camera in the wearable device.
Step 3052: determine the part of the second virtual image that lies within the second target eye's current visible range within the three-dimensional environment as the second target virtual image.
Optionally, in step 3052, the process of determining the second target virtual image may include the following steps:
Step C1: obtain at least two second contact points where the at least two rays contact the second virtual image. Along their extension directions, the at least two rays can contact the second virtual image and form the at least two second contact points.
Step C2: determine the region enclosed by the at least two second contact points as the second target virtual image.
Optionally, similarly to step A3 above, in this embodiment of the present invention the second target virtual image can be determined according to a predetermined geometry, or object recognition can be performed on the enclosed region and the recognized objects determined as the second target virtual image.
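The geometric difference from steps B1 to B3 is that here the ray directions of the second target eye are fixed by the calibration points of the target virtual region rather than by the eye's own field-of-view boundary. A sketch continuing the step A1-A3 snippet above (reusing ray_plane_intersection and calibration_pts); the eye offset and plane distance are invented values:

```python
import numpy as np

eye2 = np.array([0.065, 0.0, 0.0])   # illustrative interpupillary offset (m)

# Ray directions of the second target eye are fixed by the calibration points.
directions = [(p - eye2) / np.linalg.norm(p - eye2) for p in calibration_pts]

# Steps C1/C2: intersect these rays with the plane of the second virtual image
# (placed 2.5 m away as a placeholder) and enclose the resulting contact points.
second_contacts = [ray_plane_intersection(eye2, d,
                                          np.array([0.0, 0.0, 2.5]),
                                          np.array([0.0, 0.0, -1.0]))
                   for d in directions]
```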
It should be noted that, to guarantee the consistency of the objects observed by the left and right eyes, the same algorithm with the same algorithm parameters should be used when performing object recognition on the enclosed regions in step B3 and step C2, to ensure that the recognized objects are consistent.
Step 306: obtain a first corresponding region of the first target virtual image in the image shown on the first display screen, and a second corresponding region of the second target virtual image in the image shown on the second display screen.
The at least two first contact points and the at least two second contact points are converted into at least two first picture points in the image shown on the first display screen and at least two second picture points in the image shown on the second display screen, respectively.
Because of the physical characteristics of lenses, when the user watches a target image through a lens, that is, watches the target virtual image the target image forms, the target virtual image is distorted compared with the target image. To prevent the user from seeing a distorted target image, anti-distortion processing needs to be applied to the target image in advance by means of an anti-distortion mesh, which records the correspondence between virtual-image coordinates and image coordinates.
In this embodiment of the present invention, the at least two first contact points and the at least two second contact points are virtual-image coordinates located in the virtual images, while the at least two first picture points and the at least two second picture points are image coordinates in the images shown on the screens (the coordinates on a screen correspond to the image coordinates of the image it shows). Therefore, based on the correspondence between virtual-image coordinates and image coordinates in the anti-distortion mesh, the at least two first contact points and the at least two second contact points can be converted into the at least two first picture points in the image shown on the first display screen and the at least two second picture points in the image shown on the second display screen.
The first corresponding region is determined from the at least two first picture points: optionally, the region enclosed by the at least two first picture points is determined as the first corresponding region, or object recognition can be performed on the enclosed region and the recognized objects determined as the first corresponding region. The second corresponding region is determined from the at least two second picture points in the same way: optionally, the region enclosed by the at least two second picture points is determined as the second corresponding region, or object recognition can be performed on the enclosed region and the recognized objects determined as the second corresponding region.
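The conversion in step 306 relies on the correspondence stored in the anti-distortion mesh: from a virtual-image coordinate back to an image (screen) coordinate. The sketch below models the mesh as paired coordinate arrays and inverts it by nearest-neighbour lookup; a real mesh would interpolate between vertices, and all names here are illustrative.

```python
import numpy as np

def virtual_to_screen(mesh_virtual_xy, mesh_screen_uv, contact_xy):
    """mesh_virtual_xy: (N, 2) virtual-image coordinates of mesh vertices.
    mesh_screen_uv:  (N, 2) matching image (screen) coordinates.
    Returns the image coordinate whose mesh vertex lies closest to the
    contact point (a nearest-neighbour stand-in for interpolation)."""
    d = np.linalg.norm(mesh_virtual_xy - np.asarray(contact_xy), axis=1)
    return mesh_screen_uv[np.argmin(d)]
```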
Step 307: determine the first corresponding region as the first gaze region.
Step 308: determine the second corresponding region as the second gaze region.
In conclusion the determination method of watching area provided by the embodiments of the present application, by first object eye currently The field angle of blinkpunkt and first object eye on one display screen determines destination virtual region, and the destination virtual region is true It is set to the region in the visual range of the second target eye, the visual range of the second target eye is determined with this, and then can determines Second virtual image that first virtual image and the second target eye that first object eye is currently seen out are currently seen, so can determine that out First watching area and second target eye of the first object eye in the image that the first display screen is shown are aobvious in second display screen The second watching area in the image shown.Since the watching area of two target eyes on a display screen is by the same destination virtual area Domain determines, thus the first watching area and the second watching area can be accurately overlapped, and solve left and right eye fixation in the related technology Region is difficult to be completely coincident and the problem that causes the display effect of image in wearable device poor, effectively increases wearable set The display effect of standby middle image improves the perception experience of user.
Further, it in the determination method of the watching area described in the embodiment of the present invention, in step 301, obtains The blinkpunkt of the first object eye arrived currently on a display screen can be completed by an eye-tracking camera, therefore, using this In the wearable device of the determination method of watching area provided by inventive embodiments, an eye-tracking camera shooting can be only set Head, compared to, needing that eye-tracking camera is respectively set to right and left eyes in the related technology, by the people for acquiring right and left eyes respectively Eye image, analyzes the blinkpunkt of right and left eyes, to determine the watching area of right and left eyes, field of regard provided by the embodiment of the present invention The weight and cost of wearable device can be effectively reduced in the determination method in domain, is conducive to the popularity of wearable device.
It should be noted that the sequence of above-mentioned steps can be adjusted according to actual needs, for example, step 307 and step Rapid 308 may be performed simultaneously or first carry out step 308 executes step 307 again, for another example step 303 and step 304 can be same Shi Zhihang first carries out step 304 and executes step 303 again.
The above embodiment is further described below with reference to Figure 10. Taking the case where the first target eye is the left eye as an example, the method for determining the gaze regions includes the following steps:
Step S1: obtain the gaze point S of the first target eye 213 currently on the first display screen 211.
Step S2: determine the field-of-view angle α of the first target eye 213 according to the gaze point S.
In this example, the field-of-view angle is taken to be the horizontal field-of-view angle for illustration.
Step S3: emit two rays from the position of the first target eye 213, the two rays being cast along the boundary of the field-of-view angle α of the first target eye 213; obtain the calibration point S1 and the calibration point S2 where the two rays come into contact with the virtual region 23; and determine the region enclosed by the two calibration points in the virtual region 23 as the target virtual region.
In this example, the target virtual region is represented by the region between calibration point S1 and calibration point S2 along their connecting line.
Step S4: obtain the first contact point C' and the first contact point A' where the two rays emitted from the position of the first target eye 213 contact the first virtual image 214, and determine the first target virtual image according to the first contact point C' and the first contact point A'.
In this example, the first target virtual image is represented by the region between the first contact point C' and the first contact point A' along their connecting line.
Step S5: take the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the range of the field-of-view angle β of the second target eye 223.
Step S6: emit two rays from the position of the second target eye 223, the two rays connecting respectively with the calibration point S1 and the calibration point S2 that enclose the target virtual region; obtain the second contact point D' and the second contact point B' where the two rays contact the second virtual image 224; and determine the second target virtual image according to the second contact point D' and the second contact point B'.
In this example, the second target virtual image is represented by the region between the second contact point D' and the second contact point B' along their connecting line.
Step S7: convert the first contact points C' and A' and the second contact points D' and B' into the first picture point C and the first picture point A in the image shown on the first display screen and the second picture point D and the second picture point B in the image shown on the second display screen, respectively; determine the first gaze region according to the first picture point C and the first picture point A, and determine the second gaze region according to the second picture point D and the second picture point B.
It should be noted that in an actual implementation of this embodiment of the present invention, the first virtual image 214 and the second virtual image 224 overlap; however, for ease of explaining the method for determining the gaze regions, they are not drawn in an overlapping state in Figure 10. In addition, the calibration points S1 and S2 indicating the target virtual region, the gaze point S and the like are shown schematically.
Figure 11 shows an apparatus 30 for determining a gaze region according to an embodiment of the present invention. The apparatus can be applied to a wearable device, whose structure can be as shown in Fig. 2; the apparatus 30 for determining a gaze region includes:
an obtaining module 301, configured to obtain the gaze point of a first target eye currently on the first display screen, the first target eye being the left eye or the right eye;
a first determining module 302, configured to determine a target virtual region according to the gaze point and the field-of-view angle of the first target eye, the target virtual region being the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of the first target eye;
a second determining module 303, configured to determine a first target virtual image according to the gaze point and the field-of-view angle of the first target eye, the first target virtual image being the part of the first virtual image, formed through the first lens from the image currently shown on the first display screen, that lies within the visible range of the first target eye;
a third determining module 304, configured to take the target virtual region as the region of the three-dimensional environment presented by the wearable device that currently lies within the visible range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye;
a fourth determining module 305, configured to determine a second target virtual image according to the target virtual region and the position of the second target eye, the second target virtual image being the part of the second virtual image, formed through the second lens from the image currently shown on the second display screen, that lies within the visible range of the second target eye; and
a fifth determining module 306, configured to determine, according to the first target virtual image and the second target virtual image, a first gaze region of the first target eye in the image shown on the first display screen and a second gaze region of the second target eye in the image shown on the second display screen.
In conclusion passing through the visual field of first object the eye currently blinkpunkt on the first display screen and first object eye Angle determines destination virtual region, and the destination virtual region is determined as to the region in the visual range of the second target eye, with this It determines the visual range of the second target eye, and then can determine first virtual image and second that first object eye is currently seen Second virtual image that target eye is currently seen so can determine that out of first object eye in the image that the first display screen is shown The second watching area of one watching area and the second target eye in the image that second display screen is shown.Due to two target eyes Watching area on a display screen is determined by the same destination virtual region, thus the first watching area and the second watching area can To be accurately overlapped, solves right and left eyes watching area in the related technology and be difficult to be completely coincident and lead to image in wearable device The poor problem of display effect, effectively increases the display effect of image in wearable device, improves the perception experience of user.
Optionally, the first determining module 302 is configured to:
determine the current visual range of the first target eye according to the gaze point and the field angle of the first target eye; and
determine the region of the three-dimensional environment lying within the current visual range of the first target eye as the target virtual region.
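One natural reading of this pair of operations models the current visual range as a cone whose axis runs from the eye through the gaze point and whose apex angle equals the field angle; a point of the three-dimensional environment then belongs to the target virtual region exactly when it falls inside the cone. A minimal sketch under that cone assumption (the function and all values below are illustrative, not from the patent):

```python
import numpy as np

def in_visual_range(point, eye_pos, gaze_point, fov_deg):
    """True if `point` lies inside the cone whose apex is the eye, whose axis
    points from the eye to the gaze point, and whose apex angle is fov_deg."""
    axis = gaze_point - eye_pos
    to_point = point - eye_pos
    cos_a = np.dot(axis, to_point) / (np.linalg.norm(axis) * np.linalg.norm(to_point))
    return np.arccos(np.clip(cos_a, -1.0, 1.0)) <= np.radians(fov_deg) / 2.0

# Filter a sampled 3-D environment down to the target virtual region.
eye = np.array([-0.032, 0.0, 0.0])
gaze = np.array([0.30, 0.0, 2.0])
env = np.random.default_rng(0).uniform(-1.0, 1.0, (1000, 3)) + [0.0, 0.0, 2.0]
target_region = env[[in_visual_range(p, eye, gaze, 20.0) for p in env]]
```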
Optionally, the fourth determining module 305 is configured to:
determine the current visual range of the second target eye according to the target virtual region and the position of the second target eye; and
determine the part of the second virtual image lying within the current visual range of the second target eye as the second target virtual image.
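On the virtual-image plane this last operation is simply a region intersection: the extent of the second virtual image is clipped to the extent of the second target eye's current visual range. A small sketch under that planar assumption (the rectangle representation and the numbers are illustrative only):

```python
def clip_rect(vi_rect, range_rect):
    """Intersect two axis-aligned rectangles (x0, y0, x1, y1) on the
    virtual-image plane; returns None when they are disjoint."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = vi_rect, range_rect
    x0, y0 = max(ax0, bx0), max(ay0, by0)
    x1, y1 = min(ax1, bx1), min(ay1, by1)
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

# Second virtual image extent vs. the second eye's visual range, in metres.
second_target_vi = clip_rect((-1.5, -1.2, 1.5, 1.2), (-0.2, -0.1, 0.5, 0.3))
```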
Optionally, the second determining module 303 is configured to:
determine the current visual range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determine the part of the first virtual image lying within the current visual range of the first target eye as the first target virtual image.
In conclusion passing through the visual field of first object the eye currently blinkpunkt on the first display screen and first object eye Angle determines destination virtual region, and the destination virtual region is determined as to the region in the visual range of the second target eye, with this It determines the visual range of the second target eye, and then can determine first virtual image and second that first object eye is currently seen Second virtual image that target eye is currently seen so can determine that out of first object eye in the image that the first display screen is shown The second watching area of one watching area and the second target eye in the image that second display screen is shown.Due to two target eyes Watching area on a display screen is determined by the same destination virtual region, thus the first watching area and the second watching area can To be accurately overlapped, solves right and left eyes watching area in the related technology and be difficult to be completely coincident and lead to image in wearable device The poor problem of display effect, effectively increases the display effect of image in wearable device, improves the perception experience of user.
Figure 12 shows a schematic structural diagram of another wearable device 20 provided in an embodiment of the present invention. The wearable device 20 includes a watching-area determining device 24, an image acquisition assembly 23, a first display component 21, and a second display component 22.
The watching-area determining device 24 may be the watching-area determining device 30 shown in Fig. 11, and the image acquisition assembly 23, the first display component 21, and the second display component 22 may be as described above; the embodiment of the present invention does not repeat the description here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system and device described above may refer to the corresponding processes in the foregoing method embodiment, and are not described again here.
In the present invention, the terms "first", "second", "third", and "fourth" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance. The term "multiple" means two or more, unless expressly limited otherwise.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for determining a watching area, characterized in that the method is applied to a wearable device, the wearable device comprises a first display component and a second display component, the first display component comprises a first display screen and a first lens located on the light emission side of the first display screen, the second display component comprises a second display screen and a second lens located on the light emission side of the second display screen, and the method comprises:
obtaining a gaze point of a first target eye currently on the first display screen, the first target eye being a left eye or a right eye;
determining a target virtual region according to the gaze point and a field angle of the first target eye, the target virtual region being a region, in a three-dimensional environment presented by the wearable device, that currently lies within a visual range of the first target eye;
determining a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the virtual image, within a first virtual image formed by the first lens from an image currently shown on the first display screen, that lies within the visual range of the first target eye;
determining the target virtual region as a region, in the three-dimensional environment presented by the wearable device, that currently lies within a visual range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye;
determining a second target virtual image according to the target virtual region and a position of the second target eye, the second target virtual image being the virtual image, within a second virtual image formed by the second lens from an image currently shown on the second display screen, that lies within the visual range of the second target eye; and
determining, according to the first target virtual image and the second target virtual image, a first watching area of the first target eye in the image shown on the first display screen and a second watching area of the second target eye in the image shown on the second display screen.
2. The method according to claim 1, characterized in that the determining a target virtual region according to the gaze point and the field angle of the first target eye comprises:
determining a current visual range of the first target eye according to the gaze point and the field angle of the first target eye; and
determining the region of the three-dimensional environment lying within the current visual range of the first target eye as the target virtual region.
3. The method according to claim 1 or 2, characterized in that the determining a second target virtual image according to the target virtual region and the position of the second target eye comprises:
determining a current visual range of the second target eye in the three-dimensional environment according to the target virtual region and the position of the second target eye; and
determining the part of the second virtual image lying within the current visual range of the second target eye in the three-dimensional environment as the second target virtual image.
4. The method according to claim 1, characterized in that the determining a first target virtual image according to the gaze point and the field angle of the first target eye comprises:
determining a current visual range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determining the part of the first virtual image lying within the current visual range of the first target eye as the first target virtual image.
5. The method according to claim 4, characterized in that the determining, according to the first target virtual image and the second target virtual image, the first watching area of the first target eye in the image shown on the first display screen and the second watching area of the second target eye in the image shown on the second display screen comprises:
obtaining a first corresponding region of the first target virtual image in the image shown on the first display screen and a second corresponding region of the second target virtual image in the image shown on the second display screen;
determining the first corresponding region as the first watching area; and
determining the second corresponding region as the second watching area.
6. A device for determining a watching area, characterized in that the device is applied to a wearable device, the wearable device comprises a first display component and a second display component, the first display component comprises a first display screen and a first lens located on the light emission side of the first display screen, the second display component comprises a second display screen and a second lens located on the light emission side of the second display screen, and the device comprises:
an obtaining module, configured to obtain a gaze point of a first target eye currently on the first display screen, the first target eye being a left eye or a right eye;
a first determining module, configured to determine a target virtual region according to the gaze point and a field angle of the first target eye, the target virtual region being a region, in a three-dimensional environment presented by the wearable device, that currently lies within a visual range of the first target eye;
a second determining module, configured to determine a first target virtual image according to the gaze point and the field angle of the first target eye, the first target virtual image being the virtual image, within a first virtual image formed by the first lens from an image currently shown on the first display screen, that lies within the visual range of the first target eye;
a third determining module, configured to determine the target virtual region as a region, in the three-dimensional environment presented by the wearable device, that currently lies within a visual range of a second target eye, the second target eye being whichever of the left eye and the right eye is not the first target eye;
a fourth determining module, configured to determine a second target virtual image according to the target virtual region and a position of the second target eye, the second target virtual image being the virtual image, within a second virtual image formed by the second lens from an image currently shown on the second display screen, that lies within the visual range of the second target eye; and
a fifth determining module, configured to determine, according to the first target virtual image and the second target virtual image, a first watching area of the first target eye in the image shown on the first display screen and a second watching area of the second target eye in the image shown on the second display screen.
7. The device according to claim 6, characterized in that the first determining module is configured to:
determine a current visual range of the first target eye according to the gaze point and the field angle of the first target eye; and
determine the region of the three-dimensional environment lying within the current visual range of the first target eye as the target virtual region.
8. The device according to claim 6 or 7, characterized in that the fourth determining module is configured to:
determine a current visual range of the second target eye according to the target virtual region and the position of the second target eye; and
determine the part of the second virtual image lying within the current visual range of the second target eye as the second target virtual image.
9. The device according to claim 8, characterized in that the second determining module is configured to:
determine a current visual range of the first target eye according to the position of the first target eye, the gaze point, and the field angle of the first target eye; and
determine the part of the first virtual image lying within the current visual range of the first target eye as the first target virtual image.
10. A wearable device, characterized in that the wearable device comprises: a device for determining a watching area, an image acquisition assembly, a first display component, and a second display component;
wherein the first display component comprises a first display screen and a first lens located on the light emission side of the first display screen, and the second display component comprises a second display screen and a second lens located on the light emission side of the second display screen; and
the device for determining a watching area is the device according to any one of claims 6 to 9.
CN201910333506.7A 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device Active CN109901290B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910333506.7A CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device
PCT/CN2020/080961 WO2020215960A1 (en) 2019-04-24 2020-03-24 Method and device for determining area of gaze, and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910333506.7A CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device

Publications (2)

Publication Number Publication Date
CN109901290A (en) 2019-06-18
CN109901290B CN109901290B (en) 2021-05-14

Family

ID=66956250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910333506.7A Active CN109901290B (en) 2019-04-24 2019-04-24 Method and device for determining gazing area and wearable device

Country Status (2)

Country Link
CN (1) CN109901290B (en)
WO (1) WO2020215960A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11474598B2 (en) * 2021-01-26 2022-10-18 Huawei Technologies Co., Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130335302A1 (en) * 2012-06-18 2013-12-19 Randall T. Crane Selective illumination
CN105425399A (en) * 2016-01-15 2016-03-23 中意工业设计(湖南)有限责任公司 Method for rendering user interface of head-mounted equipment according to human eye vision feature
CN106233187A (en) * 2014-04-25 2016-12-14 微软技术许可有限责任公司 There is the display device of light modulation panel
CN107797280A (en) * 2016-08-31 2018-03-13 乐金显示有限公司 Personal immersion display device and its driving method
CN109031667A (en) * 2018-09-01 2018-12-18 哈尔滨工程大学 A kind of virtual reality glasses image display area horizontal boundary localization method
US20190018483A1 (en) * 2017-07-17 2019-01-17 Thalmic Labs Inc. Dynamic calibration systems and methods for wearable heads-up displays

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10129538B2 (en) * 2013-02-19 2018-11-13 Reald Inc. Method and apparatus for displaying and varying binocular image content
JP6509101B2 (en) * 2015-12-09 2019-05-08 Kddi株式会社 Image display apparatus, program and method for displaying an object on a spectacle-like optical see-through type binocular display
US10429647B2 (en) * 2016-06-10 2019-10-01 Facebook Technologies, Llc Focus adjusting virtual reality headset
CN108369744B (en) * 2018-02-12 2021-08-24 香港应用科技研究院有限公司 3D gaze point detection through binocular homography mapping
CN109087260A (en) * 2018-08-01 2018-12-25 北京七鑫易维信息技术有限公司 A kind of image processing method and device
CN109901290B (en) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 Method and device for determining gazing area and wearable device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020215960A1 (en) * 2019-04-24 2020-10-29 京东方科技集团股份有限公司 Method and device for determining area of gaze, and wearable device
CN110347265A (en) * 2019-07-22 2019-10-18 北京七鑫易维科技有限公司 Render the method and device of image
CN113467619A (en) * 2021-07-21 2021-10-01 腾讯科技(深圳)有限公司 Picture display method, picture display device, storage medium and electronic equipment
CN113467619B (en) * 2021-07-21 2023-07-14 腾讯科技(深圳)有限公司 Picture display method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2020215960A1 (en) 2020-10-29
CN109901290B (en) 2021-05-14

Similar Documents

Publication Publication Date Title
US20210075963A1 Method and apparatus for obtaining binocular panoramic image, and storage medium
CN109901290A Method, apparatus and wearable device for determining a watching area
CN106796344B System, arrangement and method for a magnified image locked onto an object of interest
US20160267720A1 Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
CN106791784B Augmented reality display method and device with virtual-real coincidence
US11577159B2 Realistic virtual/augmented/mixed reality viewing and interactions
CN108292489A Information processing device and image generating method
CN105303557B See-through smart glasses and see-through method thereof
JP2017199379A Tracking display system, tracking display program, tracking display method, wearable device using the same, tracking display program for wearable device, and manipulation method for wearable device
CN106327584B Image processing method and device for virtual reality equipment
TWI669635B Method and device for displaying barrage and non-volatile computer readable storage medium
CN110708533B Visual assistance method based on augmented reality and intelligent wearable device
CN106959759A Data processing method and device
CN108124150B Virtual reality head-mounted display device and method of observing a real scene through it
CN111556305B Image processing method, VR device, terminal, display system and computer-readable storage medium
CN108398787B Augmented reality display device, method and augmented reality glasses
CN105404395B Stage performance supplemental training method and system based on augmented reality
CN105959665A Panoramic 3D video generation method for virtual reality equipment
WO2014128751A1 Head mount display apparatus, head mount display program, and head mount display method
CN109766007A Gaze point compensation method and device for a display device, and display device
CN107065164B Image presentation method and device
CN105138130B Method and system for indicating information exchange in a shared scene across remote locations
JP2017191546A Medical head-mounted display, program for medical head-mounted display, and control method of medical head-mounted display
US10255676B2 Methods and systems for simulating the effects of vision defects
CN207216145U Wearable device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant