CN106249413B - Virtual dynamic depth-of-field processing method simulating human-eye focusing - Google Patents

Virtual dynamic depth-of-field processing method simulating human-eye focusing

Info

Publication number
CN106249413B
CN106249413B CN201610671842.9A CN201610671842A CN106249413B CN 106249413 B CN106249413 B CN 106249413B CN 201610671842 A CN201610671842 A CN 201610671842A CN 106249413 B CN106249413 B CN 106249413B
Authority
CN
China
Prior art keywords
depth
virtual
time
virtual reality
focusing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610671842.9A
Other languages
Chinese (zh)
Other versions
CN106249413A (en
Inventor
Wang Xinjie
Cai Zhigang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ying Mo Science And Technology Ltd
Original Assignee
Hangzhou Ying Mo Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ying Mo Science And Technology Ltd filed Critical Hangzhou Ying Mo Science And Technology Ltd
Priority to CN201610671842.9A priority Critical patent/CN106249413B/en
Publication of CN106249413A publication Critical patent/CN106249413A/en
Application granted granted Critical
Publication of CN106249413B publication Critical patent/CN106249413B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0127 - Head-up displays characterised by optical features comprising devices increasing the depth of field
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual dynamic depth-of-field processing method that simulates human-eye focusing. The gaze position of the user's eyes in a virtual reality scene is acquired; based on this gaze position, each frame of the virtual reality scene is processed in real time to obtain the desired focal distance between the virtually gazed-at object and the virtual lens for the current frame; the circle-of-confusion radius is then calculated from the desired focal distance and the intrinsic parameters of the virtual lens; a blur effect is constructed from this radius and superimposed on the current frame of the original virtual reality scene for display, completing the depth-of-field change. The invention can dynamically monitor the focal distance of the eyes of a user wearing a virtual reality head-mounted display and compute the circle-of-confusion radius in real time; by adding a blur effect to the original image, it effectively simulates the depth-of-field effect with which the human eye observes things in real life, greatly improving the user's sense of immersion.

Description

Virtual dynamic depth-of-field processing method simulating human-eye focusing
Technical field
The present invention relates to graphics and image processing, and more particularly to a virtual dynamic depth-of-field processing method simulating human-eye focusing, suitable for display screens or virtual reality devices.
Background art
With the development of hardware and software technology, virtual reality head-mounted displays (HMDs) have begun to reach consumers and are becoming the preferred device for simulated immersive experiences. In such experiences, reproducing as faithfully as possible the way the human eye observes things in real life is the key to improving the HMD user experience.
In real life, the eye observes things with a depth-of-field effect: only objects within a certain range in front of and behind the focus point form a sharp image in the eye, while objects too far from or too close to the eye's focus point appear blurred. Depth of field is an important means of simulating human vision and can effectively improve the immersive experience of an HMD. Existing HMDs, however, provide no dynamic depth-of-field function: the image presented to the eye is sharp at every distance, which greatly degrades the user experience.
Summary of the invention
To solve the problem described in the background art, the present invention provides a virtual dynamic depth-of-field processing method simulating human-eye focusing, which reproduces, on a display screen or virtual reality device, the human eye's automatic focusing and the resulting change of the picture's depth of field.
The technical solution adopted by the present invention, as shown in Fig. 1, comprises the following steps:
1) While the user watches the picture of a virtual reality scene, acquire the gaze position of the user's eyes in the virtual reality scene;
2) Based on the gaze position of the eyes, process the image of each frame of the virtual reality scene in real time to obtain the desired focal distance Depth_desired between the virtually gazed-at object and the virtual lens for the current frame;
3) Calculate the circle-of-confusion (CoC) radius from the desired focal distance Depth_desired and the intrinsic parameters of the virtual lens, then construct a blur effect from the CoC radius, superimpose the blur effect on the current frame of the original virtual reality scene for display, and complete the depth-of-field change.
Step 1) is specifically: while the user uses a virtual reality device, the display screen of the device shows the virtual scene picture; the intrinsic parameters of the virtual lens, such as the aperture size and the lens focal length, are preset; and an eye-tracking device is arranged in front of the user's eyes (it may be located near the top of the display screen), through which the gaze position of the eyes is acquired in real time.
Alternatively, step 1) is specifically: the gaze position of the eyes is set to the center point of the display screen.
For each frame of the virtual reality scene image, step 2) is specifically, as shown in Fig. 2:
2.1) Transform the gaze position of the eyes into two-dimensional coordinates in the picture displayed for the current frame, then convert the two-dimensional coordinates into the three-dimensional landing-point coordinates Pos in the current frame's virtual reality scene;
2.2) Find the virtually gazed-at object corresponding to the three-dimensional landing-point coordinates Pos in the current frame's virtual reality scene, and obtain the depth distance Depth_current between the virtually gazed-at object and the virtual lens for the current frame;
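As an illustration of steps 2.1) and 2.2), the following is a minimal sketch of the gaze-to-depth lookup, assuming a Unity-like ray-casting interface; camera, scene, screen_point_to_ray and raycast are illustrative placeholders, not an actual engine API.

```python
import math

def gaze_to_depth(gaze_uv, camera, scene):
    """Steps 2.1)-2.2): gaze position -> gazed-at object and Depth_current.

    gaze_uv : gaze position in normalized [0, 1]^2 screen coordinates.
    camera  : the virtual lens; assumed to offer screen_point_to_ray() and .position.
    scene   : assumed to offer raycast(ray), returning a hit with .point and .obj, or None.
    """
    ray = camera.screen_point_to_ray(gaze_uv)   # 2.1) 2D screen coordinates -> ray into the scene
    hit = scene.raycast(ray)                    # 2.1) ray -> 3D landing point Pos
    if hit is None:
        return None, None                       # gaze falls on empty space
    # 2.2) Depth_current: distance between the gazed-at object and the virtual lens
    depth_current = math.dist(hit.point, camera.position)
    return hit.obj, depth_current
```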
2.3) From the depth distance Depth_current and the previous frame's desired focal distance Depth'_desired between the virtually gazed-at object and the virtual lens, calculate the first refocusing time Time1_refocus (in milliseconds);
2.4) If Time1_refocus is less than 1000/HZ milliseconds, HZ being the display refresh rate (so that the threshold is one frame period), set the current frame's desired focal distance Depth_desired and the previous frame's desired focal distance Depth'_desired both equal to the depth distance Depth_current, set the value of the current frame's flag bit flag to false, apply no depth-of-field change to the current frame, and end the processing of the current frame;
If Time1_refocus is greater than or equal to 1000/HZ milliseconds, continue to the next step;
2.5) Judge whether the previous frame's virtual reality scene image was displayed with the blur effect superimposed:
A) If the previous frame's scene image was displayed without the blur effect superimposed, i.e. the value of the current frame's flag bit flag is false, set the value of flag to true, calculate the parameters of the virtual reality scene from the first refocusing time Time1_refocus, apply no depth-of-field change to the current frame, and end the processing of the current frame;
B) If the previous frame's scene image was displayed with the blur effect superimposed, i.e. the value of flag is true, calculate the second refocusing time Time2_refocus, then judge Time2_refocus:
If Time2_refocus is less than 1000/HZ milliseconds, set the value of flag to true, calculate the parameters of the virtual reality scene from Time2_refocus, apply no depth-of-field change to the current frame, and end the processing of the current frame;
If Time2_refocus is greater than or equal to 1000/HZ milliseconds, set the end depth distance Depth_end of the virtually gazed-at object equal to the depth distance Depth_current, and calculate Depth_desired with the following formula:
Depth_desired = (Time_now - Time_start) × Δd + Depth_start
where Time_now denotes the time of the current frame, Time_start denotes the focusing start time, and Depth_start denotes the beginning depth distance of the virtually gazed-at object.
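As a worked example with the numbers used later in the embodiment (a refocus from Depth_start = 10 m to Depth_end = 1 m over a refocusing time of 180 ms, with Δd = (Depth_end - Depth_start)/Time_refocus as given below):

$$\Delta d = \frac{1 - 10}{180} = -0.05\ \text{m/ms}, \qquad \mathrm{Depth}_{desired} = 90 \times (-0.05) + 10 = 5.5\ \text{m at } \mathrm{Time}_{now} - \mathrm{Time}_{start} = 90\ \text{ms},$$

i.e. halfway through the refocus the desired focal distance has moved halfway from 10 m to 1 m.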
Step 2.3) calculates the first refocusing time Time1_refocus from the depth distance Depth_current, the previous frame's desired focal distance Depth'_desired and the constant Δt, where Δt denotes the minimum focusing time attainable by the physiological structure of the human eye; in a concrete implementation Δt may be 40 milliseconds.
Step B) calculates the second refocusing time Time2_refocus with a formula using the same constant Δt, the minimum focusing time attainable by the physiological structure of the human eye; in a concrete implementation Δt may be 40 milliseconds.
The parameters of the virtual reality scene comprise the focusing start time Time_start, the beginning depth distance Depth_start of the virtually gazed-at object, the end depth distance Depth_end of the virtually gazed-at object, and the unit depth distance Δd.
The focusing start time Time_start, the beginning depth distance Depth_start of the virtually gazed-at object, the end depth distance Depth_end of the virtually gazed-at object and the unit depth distance Δd are specifically calculated with the following equations:
Time_start = Time_now
Depth_start = Depth'_desired
Depth_end = Depth_current
Δd = (Depth_end - Depth_start) / Time_refocus
where Time_now denotes the time of the current frame and Time_refocus is the refocusing time, namely the first refocusing time Time1_refocus or the second refocusing time Time2_refocus.
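Putting steps 2.3) to 2.5) together, below is a minimal per-frame sketch of the refocusing state machine. It is a reading of the text, not the patent's reference implementation: refocus_time uses the hedged reconstruction given above, and the operands of Time2_refocus (here Depth_end versus Depth_current) are an assumption, since that formula is likewise not reproduced.

```python
FRAME_MS = 1000.0 / 60.0   # one frame period at the embodiment's 60 Hz refresh rate
DT_MS = 40.0               # Δt, the minimum human-eye focusing time (a constant)

class DepthOfFieldState:
    """Per-session state: the flag bit and the refocusing parameters."""
    def __init__(self, initial_depth):
        self.flag = False                   # True while a blur effect is superimposed
        self.depth_desired = initial_depth  # Depth'_desired of the previous frame
        self.time_start = 0.0               # Time_start (ms)
        self.depth_start = initial_depth    # Depth_start (m)
        self.depth_end = initial_depth      # Depth_end (m)
        self.delta_d = 0.0                  # Δd, unit depth distance (m per ms)

def refocus_time(depth_a, depth_b, dt_ms=DT_MS):
    # Reconstructed formula (see above); returns 0 when both depths coincide.
    return dt_ms * abs(depth_a - depth_b) / max(depth_a, depth_b)

def set_parameters(state, time_now, t_refocus, depth_current):
    # Parameter update of steps 2.5 A)/B), following the equations above;
    # the small floor guards against a zero refocusing time.
    state.time_start = time_now
    state.depth_start = state.depth_desired
    state.depth_end = depth_current
    state.delta_d = (state.depth_end - state.depth_start) / max(t_refocus, 1e-6)

def process_frame(state, time_now, depth_current):
    """Returns Depth_desired for this frame, or None when no blur is applied."""
    t1 = refocus_time(state.depth_desired, depth_current)      # step 2.3)
    if t1 < FRAME_MS:                                          # step 2.4)
        state.depth_desired = depth_current
        state.flag = False
        return None
    if not state.flag:                                         # step 2.5 A)
        state.flag = True
        set_parameters(state, time_now, t1, depth_current)
        return None
    t2 = refocus_time(state.depth_end, depth_current)          # step 2.5 B)
    if t2 < FRAME_MS:
        set_parameters(state, time_now, t2, depth_current)
        return None
    state.depth_end = depth_current                            # retarget the refocus
    state.depth_desired = (time_now - state.time_start) * state.delta_d + state.depth_start
    return state.depth_desired
```

Note that the loop self-terminates: once the interpolated Depth_desired approaches Depth_current, Time1_refocus drops below one frame period and step 2.4) resets the flag, ending the refocus.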
The virtually gazed-at object is a three-dimensional virtual model of the virtual reality scene.
The beneficial effects of the present invention are:
The proposed method can dynamically monitor the focal distance of the eyes of a user wearing a virtual reality head-mounted display and compute the circle-of-confusion radius in real time; by adding a blur effect to the original image, it effectively simulates the depth-of-field effect with which the human eye observes things in real life, greatly improving the user's sense of immersion.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is the flow chart of step 2) of the method, which obtains the desired focal distance.
Fig. 3 is the depth-of-field diagram before refocusing in one focusing process.
Fig. 4 is the depth-of-field diagram after refocusing in one focusing process.
Fig. 5 is the first state diagram of the dynamic depth-of-field change in one focusing process.
Fig. 6 is the second state diagram of the dynamic depth-of-field change in one focusing process.
Fig. 7 is the third state diagram of the dynamic depth-of-field change in one focusing process.
Fig. 8 is the fourth state diagram of the dynamic depth-of-field change in one focusing process.
Specific embodiments
The technical solution of the present invention is described in further detail below with reference to the drawings and an implementation; the following embodiment does not constitute a limitation of the invention.
The embodiment of the present invention and its implementation process are as follows:
1) While the user uses a virtual reality device, with the Unity3D engine as the display software for the three-dimensional scene, the display screen of the virtual reality device shows the virtual scene picture; the intrinsic parameters of the virtual lens are preset (aperture size 0.3, lens focal length 35 mm) and the display refresh rate is 60 Hz. An eye-tracking device is arranged in front of the user's eyes (it may be located near the top of the display screen), through which the gaze position of the eyes is acquired in real time.
2) Based on the gaze position of the eyes, each frame of the virtual reality scene is processed in real time:
2.1) The gaze position of the eyes is transformed into two-dimensional coordinates in the picture displayed for the current frame, and the two-dimensional coordinates are converted into the three-dimensional landing-point coordinates Pos in the current frame's virtual reality scene.
2.2) The three-dimensional virtual model corresponding to Pos in the current frame's virtual reality scene is taken as the virtually gazed-at object. Taking Fig. 4 as the current frame, the gazed-at object changes in this frame from the leftmost basketball of the second row to the nearest basketball of the first row (as in Fig. 7), so the depth distance Depth_current between the virtually gazed-at object (the nearest basketball of the first row) and the virtual lens can be calculated: 1 meter.
2.3) From the depth distance Depth_current and the previous frame's desired focal distance Depth'_desired between the virtually gazed-at object and the virtual lens (10 meters), the first refocusing time is calculated. In general the unit focusing time Δt of the human eye is 40 milliseconds, but for ease of observation this embodiment sets Δt somewhat larger, namely 200 milliseconds, and the first refocusing time Time1_refocus is found to be 180 milliseconds.
2.4) If Time1_refocus were less than 17 milliseconds (one frame period: 1000 ms / 60 Hz ≈ 17 ms), the current frame's desired focal distance Depth_desired and the previous frame's desired focal distance Depth'_desired would both be set equal to Depth_current, the value of the current frame's flag bit flag would be set to false, no depth-of-field change would be applied to the current frame, and its processing would end;
Since Time1_refocus in this embodiment (180 ms, as computed above) is greater than or equal to 17 milliseconds, processing continues to the next step.
2.5) Judge whether the previous frame's virtual reality scene image was displayed with the blur effect superimposed:
A) If the previous frame's scene image was displayed without the blur effect superimposed, i.e. the value of the flag bit flag is false, flag is set to true and, taking the first refocusing time Time1_refocus as the refocusing time Time_refocus, the parameters of the virtual reality scene are calculated and stored: the focusing start time Time_start, the beginning depth distance Depth_start of the gazed-at object, the end depth distance Depth_end of the gazed-at object, and the unit depth distance Δd. No depth-of-field change is applied to the current frame and its processing ends. Since flag = false in this example, the processing of the current frame ends and control passes directly to step 3).
B) If the previous frame's scene image was displayed with the blur effect superimposed, i.e. the value of flag is true, the second refocusing time Time2_refocus is calculated (with Δt = 40 milliseconds), and Time2_refocus is then judged:
If Time2_refocus is less than 17 milliseconds, flag is set to true and, taking Time2_refocus as the refocusing time Time_refocus, the parameters Time_start, Depth_start, Depth_end and Δd of the virtual reality scene are calculated and stored; no depth-of-field change is applied to the current frame and its processing ends.
If Time2_refocus is greater than or equal to 17 milliseconds, the end depth distance Depth_end of the virtually gazed-at object is set equal to Depth_current, and Depth_desired is calculated.
3) Using the algorithm of McIntosh et al. (McIntosh, L., Bernhard E. Riecke, and Steve DiPaola, "Efficiently Simulating the Bokeh of Polygonal Apertures in a Post-Process Depth of Field Shader," Computer Graphics Forum, Vol. 31, No. 6, Blackwell Publishing Ltd, 2012), the circle-of-confusion (CoC) radius is calculated from the desired focal distance Depth_desired and the intrinsic parameters of the virtual lens; a blur effect is then constructed from the CoC radius and superimposed on the current frame of the original virtual reality scene for display, completing the depth-of-field change.
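Step 3) of the embodiment relies on the post-process shader of McIntosh et al.; as a simpler, self-contained illustration of how a circle-of-confusion radius can be derived from Depth_desired and the lens intrinsics, the following sketch uses the standard thin-lens CoC formula rather than the cited paper's exact expression. The 0.3 aperture and 35 mm focal length follow the embodiment's preset parameters.

```python
def coc_radius(pixel_depth_m, depth_desired_m, focal_length_m=0.035, aperture_m=0.3):
    """Thin-lens circle-of-confusion radius on the image plane (meters).

    pixel_depth_m   : scene depth of the pixel being shaded (> focal length).
    depth_desired_m : Depth_desired, the focal distance for the current frame.
    """
    # Standard thin-lens CoC diameter: A * |d - s| / d * f / (s - f),
    # with A = aperture diameter, d = object depth, s = focus distance, f = focal length.
    diameter = (aperture_m
                * abs(pixel_depth_m - depth_desired_m) / pixel_depth_m
                * focal_length_m / (depth_desired_m - focal_length_m))
    return diameter / 2.0

# Pixels at the desired focal distance get radius 0 (sharp); the radius grows as a
# pixel's depth departs from it, and drives the size of the blur kernel superimposed
# on the current frame.
print(coc_radius(1.0, 10.0))    # a 1 m object while focused at 10 m: strongly blurred
print(coc_radius(10.0, 10.0))   # the gazed-at object itself: radius 0.0, sharp
```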
In this embodiment, the dynamic depth-of-field change from frame 1 to frame 11 is shown in Figs. 5 to 8, which follow one another in chronological order; Figs. 5 to 8 show the dynamic depth-of-field states of frames 1, 4, 7 and 11, respectively.
With the method of the present invention, when the eye observes the three-dimensional basketball scene of Fig. 3 and the focus is moved from the distant basketball to the nearest one, the post-refocus depth-of-field state of Fig. 4 is obtained.
As the figures show, when the gaze position transfers from the rear basketball to the front basketball, the dynamic depth-of-field technique gradually shifts the focus from the rear basketball to the front one over time and distance, thereby simulating the focusing behavior of the human eye.
The embodiment above merely illustrates the technical solution of the present invention and does not limit it; without departing from the spirit and essence of the present invention, those skilled in the art may make various corresponding changes and modifications, and all such changes and modifications shall fall within the scope of protection of the appended claims.

Claims (9)

1. A virtual dynamic depth-of-field processing method simulating human-eye focusing, characterized by comprising the steps of:
1) acquiring the gaze position of the user's eyes in a virtual reality scene;
2) based on the gaze position of the eyes, processing each frame of the virtual reality scene in real time to obtain the desired focal distance Depth_desired between the virtually gazed-at object and the virtual lens for the current frame;
for each frame of the virtual reality scene, step 2) being specifically:
2.1) transforming the gaze position of the eyes into two-dimensional coordinates in the picture displayed for the current frame, then converting the two-dimensional coordinates into the three-dimensional landing-point coordinates Pos in the current frame's virtual reality scene;
2.2) finding the virtually gazed-at object corresponding to the three-dimensional landing-point coordinates Pos in the current frame's virtual reality scene, and obtaining the depth distance Depth_current between the virtually gazed-at object and the virtual lens for the current frame;
2.3) calculating the first refocusing time Time1_refocus from the depth distance Depth_current and the previous frame's desired focal distance Depth'_desired between the virtually gazed-at object and the virtual lens;
2.4) if Time1_refocus is less than 1000/HZ milliseconds, HZ being the display refresh rate, setting the current frame's desired focal distance Depth_desired and the previous frame's desired focal distance Depth'_desired both equal to the depth distance Depth_current, applying no depth-of-field change to the current frame, and ending the processing of the current frame;
if Time1_refocus is greater than or equal to 1000/HZ milliseconds, continuing to the next step;
2.5) judging whether the previous frame's virtual reality scene image was displayed with the blur effect superimposed, thereby either obtaining the desired focal distance Depth_desired between the virtually gazed-at object and the virtual lens for the current frame or applying no depth-of-field change to the current frame;
3) calculating the circle-of-confusion radius from the desired focal distance Depth_desired and the intrinsic parameters of the virtual lens, then constructing a blur effect from the circle-of-confusion radius and superimposing the blur effect on the current frame of the original virtual reality scene for display, completing the depth-of-field change.
2. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 1, characterized in that step 1) is specifically: arranging an eye-tracking device in front of the user's eyes and acquiring the gaze position of the eyes in real time through the eye-tracking device.
3. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 1, characterized in that step 1) is specifically: setting the gaze position of the eyes to the center point of the display screen.
4. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 1, characterized in that the judging of step 2.5) is specifically:
A) if the previous frame's virtual reality scene image was displayed without the blur effect superimposed, calculating the parameters of the virtual reality scene from the first refocusing time Time1_refocus, applying no depth-of-field change to the current frame, and ending the processing of the current frame;
B) if the previous frame's virtual reality scene image was displayed with the blur effect superimposed, calculating the second refocusing time Time2_refocus, then judging Time2_refocus:
if Time2_refocus is less than 1000/HZ milliseconds, calculating the parameters of the virtual reality scene from Time2_refocus, applying no depth-of-field change to the current frame, and ending the processing of the current frame;
if Time2_refocus is greater than or equal to 1000/HZ milliseconds, setting the end depth distance Depth_end of the virtually gazed-at object equal to the depth distance Depth_current, and calculating Depth_desired with the following formula:
Depth_desired = (Time_now - Time_start) × Δd + Depth_start
where Time_now denotes the time of the current frame, Time_start denotes the focusing start time, Depth_start denotes the beginning depth distance of the virtually gazed-at object, and Δd denotes the unit depth distance.
5. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 4, characterized in that the first refocusing time Time1_refocus of step 2.3) is calculated from the depth distance Depth_current, the previous frame's desired focal distance Depth'_desired and the constant Δt, where Δt denotes the minimum focusing time attainable by the physiological structure of the human eye.
6. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 5, characterized in that the second refocusing time Time2_refocus of step B) is calculated with a formula in which Δt likewise denotes the minimum focusing time attainable by the physiological structure of the human eye.
7. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 4, characterized in that the parameters of the virtual reality scene comprise the focusing start time Time_start, the beginning depth distance Depth_start of the virtually gazed-at object, the end depth distance Depth_end of the virtually gazed-at object, and the unit depth distance Δd.
8. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 7, characterized in that the focusing start time Time_start, the beginning depth distance Depth_start of the virtually gazed-at object, the end depth distance Depth_end of the virtually gazed-at object and the unit depth distance Δd are calculated with the following equations:
Time_start = Time_now
Depth_start = Depth'_desired
Depth_end = Depth_current
Δd = (Depth_end - Depth_start) / Time_refocus
where Time_now denotes the time of the current frame and Time_refocus denotes the refocusing time.
9. The virtual dynamic depth-of-field processing method simulating human-eye focusing according to claim 1, characterized in that the virtually gazed-at object is a three-dimensional virtual model of the virtual reality scene.
CN201610671842.9A 2016-08-16 2016-08-16 Virtual dynamic depth-of-field processing method simulating human-eye focusing Expired - Fee Related CN106249413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610671842.9A CN106249413B (en) 2016-08-16 2016-08-16 Virtual dynamic depth-of-field processing method simulating human-eye focusing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610671842.9A CN106249413B (en) 2016-08-16 2016-08-16 Virtual dynamic depth-of-field processing method simulating human-eye focusing

Publications (2)

Publication Number Publication Date
CN106249413A CN106249413A (en) 2016-12-21
CN106249413B true CN106249413B (en) 2019-04-23

Family

ID=57593409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610671842.9A Expired - Fee Related CN106249413B (en) 2016-08-16 2016-08-16 Virtual dynamic depth-of-field processing method simulating human-eye focusing

Country Status (1)

Country Link
CN (1) CN106249413B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107272200A (en) * 2017-05-02 2017-10-20 北京奇艺世纪科技有限公司 Focal distance control apparatus and method, and VR glasses
CN107392888B (en) * 2017-06-16 2020-07-14 福建天晴数码有限公司 Distance testing method and system based on Unity engine
CN108648223A (en) * 2018-05-17 2018-10-12 苏州科技大学 Scene reconstruction method based on median eye and reconfiguration system
CN111243049B (en) 2020-01-06 2021-04-02 北京字节跳动网络技术有限公司 Face image processing method and device, readable medium and electronic equipment
CN113989471A (en) * 2021-12-27 2022-01-28 广州易道智慧信息科技有限公司 Virtual lens manufacturing method and system in virtual machine vision system
CN115278084A (en) * 2022-07-29 2022-11-01 维沃移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587542A (en) * 2009-06-26 2009-11-25 上海大学 Field depth blending strengthening display method and system based on eye movement tracking
JP2015118287A (en) * 2013-12-18 2015-06-25 株式会社デンソー Face image capturing device and driver state determination device
CN104813217A (en) * 2012-10-17 2015-07-29 国家航空航天研究办公室 Method for designing a passive single-channel imager capable of estimating depth of field

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587542A (en) * 2009-06-26 2009-11-25 上海大学 Field depth blending strengthening display method and system based on eye movement tracking
CN104813217A (en) * 2012-10-17 2015-07-29 国家航空航天研究办公室 Method for designing a passive single-channel imager capable of estimating depth of field
JP2015118287A (en) * 2013-12-18 2015-06-25 株式会社デンソー Face image capturing device and driver state determination device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A Survey of Depth-of-Field Rendering Techniques" (景深效果绘制技术综述), Wu Jiaze et al., Journal of Image and Graphics (中国图象图形学报), 2011-11-30, pp. 1957-1965

Also Published As

Publication number Publication date
CN106249413A (en) 2016-12-21

Similar Documents

Publication Publication Date Title
CN106249413B (en) Virtual dynamic depth-of-field processing method simulating human-eye focusing
US11287884B2 (en) Eye tracking to adjust region-of-interest (ROI) for compressing images for transmission
US10720128B2 (en) Real-time user adaptive foveated rendering
US10739849B2 (en) Selective peripheral vision filtering in a foveated rendering system
US10372205B2 (en) Reducing rendering computation and power consumption by detecting saccades and blinks
CN107317987B (en) Virtual reality display data compression method, device and system
JP6276691B2 (en) Simulation device, simulation system, simulation method, and simulation program
JP2021518679A (en) Depth-based foveal rendering for display systems
CN106598252A (en) Image display adjustment method and apparatus, storage medium and electronic device
CN107272200A (en) A kind of focal distance control apparatus, method and VR glasses
JPWO2013027755A1 (en) Spectacle wearing simulation method, program, apparatus, spectacle lens ordering system, and spectacle lens manufacturing method
CN110378914A (en) Rendering method and device, system, display equipment based on blinkpunkt information
US11380072B2 (en) Neutral avatars
JP2023515517A (en) Fitting eyeglass frames including live fitting
CN111880654A (en) Image display method and device, wearable device and storage medium
EP3745944B1 (en) Image adjustment for an eye tracking system
CN106200908B (en) A kind of control method and electronic equipment
CN108143596A (en) A kind of wear-type vision training instrument, system and training method
KR102641916B1 (en) Devices and methods for evaluating the performance of visual equipment for visual tasks
WO2019041353A1 (en) Wearable display device-based method for measuring binocular brightness sensitivity, device and mobile terminal
CN106851249A (en) Image processing method and display device
JP2022176110A (en) Light field near-eye display device and light field near-eye display method
JP6996450B2 (en) Image processing equipment, image processing methods, and programs
JP3735842B2 (en) Computer-readable recording medium storing a program for driving an eye optical system simulation apparatus
CN116225219A (en) Eyeball tracking method based on multi-combination binocular stereoscopic vision and related device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Xinjie

Inventor after: Cai Zhigang

Inventor before: Wang Xinjie

Inventor before: Luo Hao

Inventor before: Cai Zhigang

COR Change of bibliographic data
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190423

Termination date: 20210816

CF01 Termination of patent right due to non-payment of annual fee