CN106199964A - Binocular AR head-mounted device capable of automatically adjusting the depth of field, and depth-of-field control method therefor - Google Patents
- Publication number: CN106199964A
- Application number: CN201510487699.3A
- Authority
- CN
- China
- Prior art keywords
- distance
- human eye
- mapping relations
- information
- dis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A binocular AR head-mounted device capable of automatically adjusting the depth of field, and a depth-of-field control method. The method includes: obtaining the distance dis from a target object to the human eye; obtaining, according to the distance dis and a preset distance mapping relation δ, the coordinate data of the centre-point pair of the left and right groups of effective display information corresponding to dis, wherein the preset distance mapping relation δ expresses the mapping between the centre-point pair coordinate data and the distance dis from the target object to the human eye; and displaying, according to the centre-point pair coordinate data, the information source images of the virtual information to be shown on the left and right image display sources respectively. The method can accurately superimpose virtual information at or near the position of the human eye's fixation point, so that the virtual information fuses closely with the environment, achieving augmented reality in the true sense.
Description
Technical field
The present invention relates to the field of head-mounted display devices, and in particular to a binocular AR head-mounted device capable of automatically adjusting the depth of field and a depth-of-field control method therefor.
Background art
With the rise of wearable devices, head-mounted display devices of all kinds have become a research and development focus of the major technology companies, and they are gradually entering the public eye. The head-mounted display device is the natural operating environment for augmented reality (Augmented Reality, AR): through the window of the head-mounted device, virtual information can be presented within the real environment.
However, when superimposing AR information, most existing AR head-mounted display devices consider only the correlation with the X and Y coordinates of the target position and ignore the target's depth information. As a result, the virtual information simply floats in front of the human eye and fuses poorly with the environment, giving AR head-mounted display devices a mediocre user experience.
In the prior art there are also methods for adjusting the depth of field on a head-mounted device, but these methods generally adjust the optical structure of the lens group mechanically, thereby changing the image distance of the optical components and in turn adjusting the depth of the virtual image. Such adjustment makes the head-mounted device bulky and expensive, and its precision is difficult to control.
Summary of the invention
The technical problem to be solved by the present invention is set out above. To solve these problems, an embodiment of the present invention first provides a depth-of-field control method for a binocular AR head-mounted device, the method comprising:
obtaining the distance dis from a target object to the human eye;
obtaining, according to the distance dis from the target object to the human eye and a preset distance mapping relation δ, the coordinate data of the centre-point pair of the left and right groups of effective display information corresponding to the distance dis, wherein the preset distance mapping relation δ expresses the mapping between the centre-point pair coordinate data and the distance dis from the target object to the human eye;
displaying, according to the centre-point pair coordinate data, the information source images of the virtual information to be shown on the left and right image display sources respectively.
According to one embodiment of the present invention, the distance dis from the target object to the human eye is obtained by a binocular stereo vision system.
According to one embodiment of the present invention, the distance dis from the target object to the human eye is determined according to the following expressions:

dis = h + Z,  Z = f·T / (x_l − x_r)

wherein h denotes the distance from the binocular stereo vision system to the human eye, Z denotes the distance between the target object and the binocular stereo vision system, T denotes the baseline distance, f denotes the focal length, and x_l and x_r denote the x-coordinates of the target object in the left and right images respectively.
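As a sketch of the disparity-based ranging above — a reconstruction of the standard triangulation formula with symbol names taken from the "wherein" clause; the baseline symbol T and all numeric values are illustrative assumptions, not values from the patent:

```python
def stereo_distance(f, T, xl, xr, h):
    """Distance from object to eye via binocular disparity.

    f: focal length (pixels), T: baseline (m), xl/xr: x-coordinates of the
    object in the left/right image (pixels), h: offset between the stereo
    rig and the eye (m).  The expression is the standard triangulation
    formula dis = h + f*T/(xl - xr).
    """
    disparity = xl - xr
    Z = f * T / disparity          # object-to-camera distance
    return h + Z                   # object-to-eye distance dis

# Example: 700 px focal length, 6 cm baseline, 35 px disparity, 2 cm offset
dis = stereo_distance(f=700, T=0.06, xl=320, xr=285, h=0.02)
print(round(dis, 3))  # 1.22
```

A smaller disparity yields a larger distance, which is the familiar near/far behaviour of stereo rigs.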
According to one embodiment of the present invention, the spatial line-of-sight information data of the human eye gazing at the target object is detected by a gaze tracking system, and the distance dis from the target object to the human eye is determined from the spatial line-of-sight information data.
According to one embodiment of the present invention, the distance dis from the target object to the human eye is determined by solving for the fixation point at which the left and right line-of-sight rays converge:

(L_x, L_y, L_z) + t·(cos L_α, cos L_β, cos L_γ) = (R_x, R_y, R_z) + s·(cos R_α, cos R_β, cos R_γ)

wherein (L_x, L_y, L_z) and (L_α, L_β, L_γ) denote respectively a point on the left line-of-sight vector and its direction angles, and (R_x, R_y, R_z) and (R_α, R_β, R_γ) denote respectively a point on the right line-of-sight vector and its direction angles; the distance dis is the vertical distance of the solved fixation point from the user.
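The gaze-based embodiment above can be sketched as follows. Since two measured 3-D rays rarely intersect exactly, the sketch solves for the point of closest approach between the left and right line-of-sight rays; treating the z-axis as the viewing direction and the fixation point's z-coordinate as dis is an assumption for illustration, not the patent's exact expression:

```python
import math

def direction_from_angles(alpha, beta, gamma):
    # Direction cosines of a gaze ray given its direction angles (radians)
    return (math.cos(alpha), math.cos(beta), math.cos(gamma))

def gaze_distance(L_point, L_dir, R_point, R_dir):
    """Viewing distance of the fixation point, from two gaze rays.

    Finds the midpoint of the shortest segment between the rays
    P = L_point + t*L_dir and Q = R_point + s*R_dir, then returns its
    z-coordinate as the distance dis.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    w0 = sub(L_point, R_point)
    a, b, c = dot(L_dir, L_dir), dot(L_dir, R_dir), dot(R_dir, R_dir)
    d, e = dot(L_dir, w0), dot(R_dir, w0)
    denom = a * c - b * b                  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    P = tuple(p + t * u for p, u in zip(L_point, L_dir))
    Q = tuple(q + s * v for q, v in zip(R_point, R_dir))
    mid = tuple((p + q) / 2 for p, q in zip(P, Q))
    return mid[2]                          # z-coordinate = viewing distance

# Symmetric example: eyes at x = ±0.03 m converging on a point 1 m ahead
L = gaze_distance((-0.03, 0.0, 0.0), (0.03, 0.0, 1.0),
                  ( 0.03, 0.0, 0.0), (-0.03, 0.0, 1.0))
print(round(L, 6))  # 1.0
```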
According to one embodiment of the present invention, the distance dis from the target object to the human eye is determined from the camera imaging ratio.
According to one embodiment of the present invention, the distance dis from the target object to the human eye is determined by a depth-of-field camera.
According to one embodiment of the present invention, in the method, the information source images of the virtual information to be shown are displayed on the left and right image display sources respectively, centred on the centre-point pair coordinates.
According to one embodiment of the present invention, in the method, the information source images of the virtual information to be shown are displayed on the left and right image display sources respectively, centred on positions offset from the centre-point pair coordinates by a preset amount in a preset direction.
According to one embodiment of the present invention, the method further includes: revising the preset distance mapping relation δ when a user uses the head-mounted device for the first time and/or each time the user uses the head-mounted device.
According to one embodiment of the present invention, the step of revising the preset distance mapping relation δ includes:
controlling the image display sources of the head-mounted device to display a preset information source image on the left and right image display sources respectively;
obtaining the line-of-sight space vectors of the human eyes when the preset information source images displayed on the left and right image display sources are observed to coincide in front of the human eye, and obtaining a first distance from the line-of-sight space vectors;
obtaining a second distance from the coordinate data of the preset information source images on the left and right image display sources, using the preset distance mapping relation δ;
determining a modifying factor from the first distance and the second distance;
revising the preset distance mapping relation δ using the modifying factor.
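The revision steps above leave the exact form of the modifying factor open; a minimal sketch, assuming (as one simple choice, not the patent's definition) that the factor is the ratio of the gaze-measured first distance to the δ-predicted second distance:

```python
def modifying_factor(first_distance, second_distance):
    """Ratio of the gaze-measured distance to the distance predicted by the
    preset mapping relation -- one simple per-user correction; the patent
    does not fix the factor's exact form."""
    return first_distance / second_distance

def corrected_distance(predicted_dis, factor):
    # Apply the per-user correction to a distance predicted by the mapping
    return predicted_dis * factor

k = modifying_factor(first_distance=1.05, second_distance=1.00)
print(corrected_distance(2.0, k))  # 2.1
```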
According to one embodiment of the present invention, the preset distance mapping relation δ is expressed as:

dis = h(S_L, S_R)

wherein dis denotes the distance from the target object to the human eye, h denotes a fitted curve function, and (S_L, S_R) denote the coordinate data of the centre-point pair of the left and right groups of effective display information.
According to one embodiment of the present invention, in the method, building the preset distance mapping relation δ includes:
step 1: displaying a preset test image at predetermined positions on the left and right image display sources;
step 2: obtaining the line-of-sight space vector when the user gazes at the virtual test image, and determining, from the line-of-sight space vector and the display position of the preset test image, one group of mapping relation data between the preset test image positions and the distance from the corresponding target object to the human eye;
step 3: reducing the centre distance of the preset test images successively according to a preset rule, and repeating step 2 until k groups of mapping relation data between the preset test image positions and the distance from the corresponding target object to the human eye are obtained;
step 4: fitting the k groups of mapping relation data between the preset test image positions and the distance from the corresponding target object to the human eye, thereby building the preset distance mapping relation δ.
The present invention also provides a binocular AR head-mounted device capable of automatically adjusting the depth of field, comprising:
an optical system;
an image display source, which includes a left image display source and a right image display source;
a distance data acquisition module for obtaining data related to the distance dis from a target object to the human eye;
a data processing module connected with the distance data acquisition module, which determines the distance dis from the target object to the human eye according to the data related to that distance, determines, in combination with a preset distance mapping relation δ, the coordinate data of the centre-point pair of the left and right groups of effective display information corresponding to the distance dis, and displays, according to the centre-point pair coordinate data, the information source images of the virtual information to be shown on the left and right image display sources respectively;
wherein the preset distance mapping relation δ expresses the mapping between the centre-point pair coordinate data and the distance dis from the target object to the human eye.
According to one embodiment of the present invention, the distance data acquisition module includes any one of the following: a single camera, a binocular stereo vision system, a depth-of-field camera, and a gaze tracking system.
According to one embodiment of the present invention, the data processing module is configured to display the information source images of the virtual information to be shown on the left and right image display sources respectively, centred on positions offset from the centre-point pair coordinates by a preset amount in a preset direction.
According to one embodiment of the present invention, the data processing module is configured to display the information source images of the virtual information to be shown on the left and right image display sources respectively, centred on the centre-point pair coordinates.
According to one embodiment of the present invention, the binocular AR head-mounted device further revises the preset distance mapping relation δ when a user uses the head-mounted device for the first time and/or each time the user uses the head-mounted device.
According to one embodiment of the present invention, the preset distance mapping relation δ is expressed as:

dis = h(S_L, S_R)

wherein dis denotes the distance from the target object to the human eye, h denotes a fitted curve function, and (S_L, S_R) denote the coordinate data of the centre-point pair of the left and right groups of effective display information.
The binocular AR head-mounted device and its depth-of-field control method provided by the present invention can accurately superimpose virtual information at or near the position of the human eye's fixation point, so that the virtual information fuses closely with the environment, achieving augmented reality in the true sense.
The present scheme is simple: once the mapping relation δ has been preset in the head-mounted device, only the distance from the target object to the human eye needs to be obtained. That distance can be acquired in various ways, for example by binocular ranging or by a depth-of-field camera; the underlying hardware technology is mature, highly reliable, and inexpensive.
Traditional depth-of-field adjustment always starts from changing the image distance of the optical components. The present invention breaks with this tradition: without altering the optical structure, it adjusts the depth of field by regulating the equivalent centre distance of the left and right groups of effective display information on the image display sources. This approach is novel and, compared with changing the optical focal length, more practical.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practising the present invention. The objects and other advantages of the present invention can be realised and obtained through the structure particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. It is evident that the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort:
Fig. 1 is a schematic diagram of the spatial line-of-sight paths of the human eye;
Fig. 2 is a schematic flow chart of the depth-of-field control method of the binocular AR head-mounted device of one embodiment of the present invention;
Fig. 3 is a schematic diagram of camera imaging;
Fig. 4 is a schematic diagram of the equivalent symmetry axis OS of the left and right image source parts and the equivalent symmetry axis OA of the two optical system groups of one embodiment of the present invention;
Fig. 5 is a schematic diagram of the test image used when calibrating the distance mapping relation δ in one embodiment of the present invention;
Fig. 6 is a schematic diagram of the progressive change of the test image when calibrating the distance mapping relation δ in one embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. It is evident that the embodiments described are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
When the human eyes (including the left eye OL and the right eye OR) gaze at objects in different regions of space, the lines of sight of the left eye OL and the right eye OR differ. Fig. 1 shows a schematic diagram of the spatial line-of-sight paths of the human eyes. In Fig. 1, A, B, C, and D represent objects at different positions in space; when the eyes observe or gaze at one of these objects, the gaze directions of the left and right eyes are the space vectors represented by the corresponding line segments.
For example, when the human eyes gaze at target object A, the gaze directions of the left eye OL and the right eye OR are the space vectors represented by line segments OLA and ORA respectively; when the human eyes gaze at target object B, the gaze directions of the left eye OL and the right eye OR are the space vectors represented by line segments OLB and ORB respectively. Once the line-of-sight space vectors of the left and right eyes when gazing at a certain object (for example, object A) are known, the distance between that object and the human eye can be calculated from them.
When the human eyes gaze at a certain object (for example, object A), the left line-of-sight vector L of the human eye can be expressed in the user coordinate system as (L_x, L_y, L_z, L_α, L_β, L_γ), where (L_x, L_y, L_z) is the coordinate of a point on the left line-of-sight vector and (L_α, L_β, L_γ) are its direction angles; similarly, the right line-of-sight vector R can be expressed as (R_x, R_y, R_z, R_α, R_β, R_γ).
By the methods of spatial analytic geometry, the fixation point (for example, object A) can be solved from the left and right line-of-sight vectors of the human eyes, giving its vertical distance dis from the user:

(L_x, L_y, L_z) + t·(cos L_α, cos L_β, cos L_γ) = (R_x, R_y, R_z) + s·(cos R_α, cos R_β, cos R_γ)    (1)

where dis is taken as the perpendicular distance of the solved fixation point from the user.
In the field of augmented-reality head-mounted devices, the left and right eyes of the wearer of a binocular head-mounted device observe the left and right virtual images respectively. When the line of sight of the left eye observing the left virtual image and the line of sight of the right eye observing the right virtual image converge in a region of space, what the wearer sees with both eyes is a single overlapped virtual picture at a certain distance away. The distance of this virtual picture from the human eye is determined by the line-of-sight space vectors formed by the left and right eyes with the left and right virtual images respectively. When the distance of the virtual picture from the human eye equals the vertical distance dis of the target from the user, the virtual picture has exactly the same spatial position as the target object.
The line-of-sight space vectors formed by the left and right eyes are determined by the object they gaze at, while on a binocular head-mounted device the centre-point pair coordinates of the left and right groups of effective display information in turn determine the line-of-sight space vectors formed by the user's eyes. The projection distance L_n of the virtual image in the binocular head-mounted device therefore corresponds to the centre-point pair coordinates of the left and right groups of effective display information on the image sources of the head-mounted device. When the distance L_n of the virtual picture from the human eye is made equal to the vertical distance dis of the target object from the user, this correspondence becomes the distance mapping relation δ. That is, the distance mapping relation δ expresses the mapping between the centre-point pair coordinates of the left and right groups of effective display information on the image display sources of the head-mounted device (which may also be understood as a pixel pair) and the distance dis from the target object to the human eye.
It should be pointed out that, in different embodiments of the present invention, the distance mapping relation δ may be either a formula or a correspondence between discrete data; the present invention is not limited in this respect.
It should also be noted that, in different embodiments of the present invention, the distance mapping relation δ may be obtained in a number of different ways (for example, it may be determined by off-line calibration before the device leaves the factory, with the resulting relation δ stored in the head-mounted device); the present invention is likewise not limited in this respect.
Fig. 2 shows a schematic flow chart of the depth-of-field control method of the binocular AR head-mounted device provided by this embodiment.
In the depth-of-field control method of the binocular AR head-mounted device provided by this embodiment, in step S201, when the user watches a certain object in the external environment through the head-mounted device, the distance dis from that object to the human eye is obtained.
In this embodiment, in step S201 the head-mounted device obtains the distance dis from the target object to the human eye through a binocular stereo vision system. Binocular stereo vision ranges mainly by the parallax principle. Specifically, the binocular stereo vision system can determine the distance dis of the target object from the human eye according to the following expressions:

dis = h + Z,  Z = f·T / (x_l − x_r)

wherein h denotes the distance from the binocular stereo vision system to the human eye, Z denotes the distance between the target object and the binocular stereo vision system, T denotes the baseline distance, f denotes the focal length of the binocular stereo vision system, and x_l and x_r denote the x-coordinates of the target object in the left and right images respectively.
It should be noted that, in different embodiments of the present invention, the binocular stereo vision system may be realised with different concrete devices; the present invention is not limited in this respect. For example, in different embodiments the binocular stereo vision system may consist of two cameras of identical focal length, a single moving camera, or other reasonable devices.
It should also be explained that, in other embodiments of the present invention, the head-mounted device may obtain the distance dis from the target object to the human eye by other reasonable methods; the present invention is likewise not limited in this respect. For example, in different embodiments the head-mounted device may obtain the distance dis through a depth-of-field camera, determine it from the spatial line-of-sight information data detected by a gaze tracking system when the human eye gazes at the target object, or determine it from the camera imaging ratio.
When the head-mounted device obtains the distance dis from the target object to the human eye through a depth-of-field camera, it can calculate the depth of field ΔL according to the following expressions:

ΔL₁ = F·δ·L² / (f² + F·δ·L)
ΔL₂ = F·δ·L² / (f² − F·δ·L)
ΔL = ΔL₁ + ΔL₂ = 2·f²·F·δ·L² / (f⁴ − F²·δ²·L²)

wherein ΔL₁ and ΔL₂ denote the front and rear depth of field respectively, δ denotes the permissible circle-of-confusion diameter, f denotes the lens focal length, F denotes the f-number, and L denotes the focusing distance. The distance dis from the target object to the human eye is then obtained from the depth of field ΔL.
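The expressions above are the classical photographic depth-of-field formulas; a small sketch with illustrative lens parameters (all values assumed, not taken from the patent):

```python
def depth_of_field(f, F, delta, L):
    """Front/rear depth of field and total DOF for a lens.

    f: focal length, F: f-number, delta: permissible circle-of-confusion
    diameter, L: focusing distance (all in consistent units).
    """
    dL1 = F * delta * L**2 / (f**2 + F * delta * L)   # front depth of field
    dL2 = F * delta * L**2 / (f**2 - F * delta * L)   # rear depth of field
    dL = 2 * f**2 * F * delta * L**2 / (f**4 - F**2 * delta**2 * L**2)
    return dL1, dL2, dL

# 50 mm lens at f/2, 0.03 mm circle of confusion, focused at 2 m
dL1, dL2, dL = depth_of_field(f=0.05, F=2.0, delta=0.00003, L=2.0)
print(abs(dL - (dL1 + dL2)) < 1e-9)  # True: total DOF = front + rear
```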
When the head-mounted device calculates the distance dis from the target object to the human eye from the spatial line-of-sight information data detected by the gaze tracking system as the human eye gazes at the target object, it can determine dis using the content explained for Fig. 1 and expression (1); this is not repeated here.
When helmet is by distance dis of video camera imaging ratio calculating object to human eye, need in advance by object
Actual size is put in storage, then uses the image that video camera shooting comprises object, and calculates object picture in shooting image
Element size;Obtain the actual size of object warehouse-in subsequently to database retrieval with shooting image;Finally by shooting picture size
Object distance dis to human eye is calculated with actual size.
Fig. 3 shows a schematic diagram of camera imaging, in which AB denotes the object and A'B' denotes the image; the object distance OB is denoted u and the image distance OB' is denoted v. From the similar-triangle relation it follows that:

x / y = u / v    (6)

and from expression (6):

u = v · x / y    (7)

wherein x denotes the object length and y denotes the image length.
When the camera's focal length is fixed, the object distance can be calculated according to expression (7). In this embodiment, the distance from the target object to the human eye is the object distance u, the actual size of the target object is the object length x, and the pixel size of the target object is the image length y. The image distance v is determined by the internal optical structure of the camera; once the camera's optical structure is fixed, v is a constant.
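Expression (7) above amounts to a one-line calculation; the numbers below are illustrative and assume the measured pixel size has already been converted to a physical size on the sensor:

```python
def object_distance_from_scale(v, real_size, pixel_size):
    """Object distance u from the imaging ratio u/v = x/y (expression (7)).

    v: image distance fixed by the camera's optics, real_size: stored
    physical size x of the object, pixel_size: its measured size y in the
    captured image, expressed in length units on the sensor.
    """
    return v * real_size / pixel_size

# A 0.30 m-wide object imaged 0.6 mm wide on the sensor, image distance 4 mm
u = object_distance_from_scale(v=0.004, real_size=0.30, pixel_size=0.0006)
print(round(u, 6))  # 2.0
```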
As shown in Fig. 2 again, after the distance dis from the target object to the human eye is obtained, in step S202 the centre-point pair coordinate data of the left and right groups of effective display information can be determined from the distance dis using the preset distance mapping relation δ. In this embodiment the preset distance mapping relation δ is stored in the head-mounted device in advance; it may be either a formula or a correspondence between discrete data.
Specifically, in this embodiment, the distance mapping relation δ can be expressed as follows:

dis = h(S_L, S_R)

wherein dis denotes the distance from the target object to the human eye, (S_L, S_R) denote the coordinates of the centre-point pair of the effective display information, and h denotes the fitted curve function between the distance dis from the target object to the human eye and the coordinates of the centre-point pair of the effective display information.
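The discrete-data form of δ mentioned above can be sketched as a small lookup table with linear interpolation; the table entries below are invented for illustration and do not come from the patent:

```python
from bisect import bisect_left

# Illustrative discrete form of the distance mapping delta: each entry maps a
# distance dis (m) to the x-coordinates (SL, SR) of the left/right display
# centre points (pixels).  All numbers are made-up demonstration values.
DELTA = [
    (0.5, (420, 540)),
    (1.0, (440, 520)),
    (2.0, (455, 505)),
    (4.0, (465, 495)),
]

def centre_points(dis):
    """Linearly interpolate the centre-point pair for a given distance,
    clamping to the nearest table entry outside the calibrated range."""
    d = [row[0] for row in DELTA]
    i = bisect_left(d, dis)
    if i == 0:
        return DELTA[0][1]
    if i == len(DELTA):
        return DELTA[-1][1]
    (d0, (l0, r0)), (d1, (l1, r1)) = DELTA[i - 1], DELTA[i]
    w = (dis - d0) / (d1 - d0)
    return (l0 + w * (l1 - l0), r0 + w * (r1 - r0))

print(centre_points(1.5))  # (447.5, 512.5)
```

Note how the centre spacing shrinks as the distance grows, which is the convergence behaviour the embodiment relies on.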
It should be noted that in other embodiments of the present invention, the distance mapping relation δ may also be expressed in other reasonable forms; the present invention is not limited in this respect.
After the centre-point pair coordinate data of the left and right groups of effective display information have been obtained, in step S203 the information source images of the virtual information to be shown are displayed on the left and right image display sources respectively, taking the centre-point pair coordinate data of these two groups of effective display information as the reference positions.
In this embodiment, taking the centre-point pair coordinates as the reference positions means taking the corresponding pixel pair as the centres of the effective display information and displaying the information source images of the virtual information to be shown on the left and right image display sources respectively. The user then sees the virtual information, through the head-mounted device, exactly at the position of the target object.
It should be noted that in other embodiments of the present invention, the information source images of the virtual information may also be displayed by other reasonable methods that take the centre-point pair coordinates as the reference positions; the present invention is not limited in this respect. For example, in one embodiment of the invention, positions having a certain offset from the coordinates of the corresponding pixel pair may be taken as the centres of the effective display information, and the information source images of the virtual information to be shown are then displayed on the left and right image display sources. The user then sees the virtual information, through the head-mounted device, beside the target object.
In this embodiment, by setting a certain offset, the virtual information can be displayed beside the target object so as not to occlude it, which better matches user habits.
It should be pointed out that in this embodiment, when the left and right information source images of the virtual information are offset, they should preferably be offset simultaneously; that is, the centre spacing and relative position of the left and right information source images remain unchanged, and only their positions on the image display sources change.
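The simultaneous-offset requirement above can be sketched as follows; applying the identical offset to both centre points keeps the left/right spacing, and hence the rendered depth, unchanged (all coordinates are illustrative):

```python
def offset_source_positions(SL, SR, dx, dy):
    """Shift both information-source centre points by the same offset so the
    virtual information appears beside, rather than on, the target object.

    SL, SR: (x, y) centre-point pair from the mapping relation;
    dx, dy: preset offset in display-source pixels.
    """
    left = (SL[0] + dx, SL[1] + dy)
    right = (SR[0] + dx, SR[1] + dy)
    return left, right

L_new, R_new = offset_source_positions((440, 300), (520, 300), dx=25, dy=-10)
print(R_new[0] - L_new[0])  # 80: centre spacing preserved
```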
In this embodiment, the distance mapping relation δ is preset inside the head-mounted device and can be obtained by off-line calibration. Usually, the manufacturer calibrates the distance mapping relation δ and stores it in the head-mounted device before it leaves the factory. The distance mapping relation δ is related to the structure of the head-mounted device; once the structure is fixed, δ is essentially fixed as well.
However, for different users, wearing error requires a certain correction factor. In order to disclose the scheme of the present invention more fully, one calibration method for the distance mapping relation δ is given below by way of example; it should be pointed out that this is an example only, and the calibration method is not limited to this one.
The distance mapping relation δ can be obtained by collecting, through the gaze tracking system, the data of Q test users, each of whom observes k groups of test images. Here Q is a positive integer; it should be pointed out that, where necessary, Q may be 1.
Assume that the resolution of each of the left and right display source regions of the image display source of the head-mounted device is N*M, i.e. the horizontal and vertical resolutions are M and N respectively. As shown in Fig. 4, the equivalent symmetry axis OS of the left and right image source parts coincides with the equivalent symmetry axis OA of the two optical system groups. In Fig. 4, OL and OR denote the left and right eyes respectively, D denotes the interpupillary distance, and d₀ denotes the distance between the principal optical axes of the two optical system groups.
When determining the distance mapping relation δ, k groups of line-of-sight space vector data (i.e. the user's spatial line-of-sight information data) can be obtained by having each test user observe k groups of test images; from these k groups of line-of-sight space vector data, the correspondence between the test image centre-point coordinate data on the image display sources and the line-of-sight space vector data can be obtained.
Specifically, in this embodiment, the steps for obtaining, for each user, the correspondence between the k groups of test image centre-point coordinate data on the image display sources and the spatial line-of-sight information data include:
Step 1: after the tester puts on the head-mounted device, two identical test images are displayed on the left and right of the image display source of the head-mounted device. As shown in Fig. 5, in this embodiment the test images displayed by the image display source are cross figures; the centre distance of the two cross figures L₁ and L₂ is d₁, and the centre points of the cross figures are symmetric about OS (in this embodiment, taking symmetry about OS as the example), where the centre distance d₁ of the two cross figures L₁ and L₂ is smaller than the distance d₀ between the principal optical axes of the two optical system groups.
Step 2: when the test user gazes through the window of the head-mounted device at the projected virtual cross figure coinciding in front of the human eye, the gaze tracking system records the line-of-sight space vectors of the test user gazing at the virtual cross figure, thereby obtaining one group of data.
The distance of this virtual picture from the human eye is determined by the line-of-sight space vectors formed by the left and right eyes with the left and right virtual images respectively. When the distance of the virtual picture from the human eye equals the vertical distance dis of the target from the user, the virtual picture has exactly the same spatial position as the target object.
In this embodiment, the coordinates, in the image source coordinate system, of the left and right cross figures shown by the image sources in the 1st group of test images are denoted (S_LX1, S_LY1) and (S_RX1, S_RY1) respectively. While the image sources display these cross figures, the gaze tracking system records in turn the left and right eye line-of-sight coordinates of the current test user gazing, through the window of the head-mounted device, at the virtual image that completely coincides after projection by the optical system of the head-mounted device; the left and right eye line-of-sight coordinates of the test user gazing at the 1st group of test images are denoted (E_LX1, E_LY1) and (E_RX1, E_RY1) respectively. One group of mapping relations between the positions of the cross figures on the image sources and the corresponding left and right eye line-of-sight vector coordinates is thus obtained, namely:

{(S_LX1, S_LY1), (S_RX1, S_RY1)} ↔ {(E_LX1, E_LY1), (E_RX1, E_RY1)}    (8)

In this embodiment, the positions {(S_LX1, S_LY1), (S_RX1, S_RY1)} of the left and right cross figures shown by the image sources in the 1st group of test images are abbreviated (S_L1, S_R1), and the left and right eye line-of-sight coordinates {(E_LX1, E_LY1), (E_RX1, E_RY1)} of the test user are abbreviated (E_L1, E_R1); expression (8) can then be expressed as:

(S_L1, S_R1) ↔ (E_L1, E_R1)    (9)
From the human-eye vision theory shown in Fig. 1 and expression (1), the distance Ln_1 from the current fixation point to the human eye can be obtained from the left- and right-eye sight-line vectors. This in turn yields the mapping relation between Ln_1, the distance from the user to the virtual screen projected by the head-mounted device from the image information seen through the device window, and the centre coordinates (SL1, SR1) of the left and right display information on the device's image source, that is:
Step 3: successively reduce, by a preset rule, the centre distance of the left and right cross patterns displayed on the image display source (see Fig. 6), and repeat step 2 after each reduction of the centre distance.
After performing this operation k times, k groups of data are obtained in total. Each group of data is the correspondence between the centre-point coordinate data of the cross patterns on the image display source and the space sight-line information data, that is:
According to the vision theory shown in Fig. 1 and expression (1), the above k groups of data yield k groups of mapping relations between the distance from the user to the virtual screen, projected by the head-mounted device from the image information the user sees through the device window, and the centre distance of the left and right display information on the device's image source, that is:
Carrying out the above operations on Q test users yields k*Q groups of mapping relations in total, that is:
Fitting these k*Q groups of mapping-relation data yields a fitted curve function h between the left-right point-pair coordinates on the display screen and the eye space sight-line data. Given the fitted curve equation h and the coordinate data of a left-right point pair on the display screen, substituting this coordinate data into the fitted curve equation gives the distance of the corresponding virtual projection information from the human eye, as shown below, that is:
wherein (SLp, SRp) represents the centre coordinates of one pair of symmetrically displayed information in the image source of the head-mounted device, and Ln_p represents the distance of the virtual screen from the human eye.
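The fitted function h can be obtained, for instance, by a least-squares polynomial fit of depth against the horizontal centre distance of the point pair. A sketch under assumed data (the sample values and the polynomial degree are illustrative, not taken from the patent):

```python
import numpy as np

# Illustrative calibration data: horizontal centre distance between the
# left and right patterns on the display source (pixels) vs. the measured
# fixation distance (metres). Values are invented for the sketch.
centre_dist = np.array([400.0, 380.0, 360.0, 340.0, 320.0])
depth       = np.array([0.5,   1.0,   2.0,   4.0,   8.0])

# Fit depth as a low-order polynomial of the centre distance; this plays
# the role of the fitted curve function h in the description.
coeffs = np.polyfit(centre_dist, depth, deg=2)
h = np.poly1d(coeffs)

# Invert numerically at display time: given a required depth dis, find
# the centre distance whose predicted depth is closest to dis.
def centre_distance_for(dis, lo=320.0, hi=400.0, n=10001):
    grid = np.linspace(lo, hi, n)
    return float(grid[np.argmin(np.abs(h(grid) - dis))])
```

At run time the device only needs the inverse lookup: from a measured object distance dis it recovers the point-pair centre distance to drive the display.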
Expression (15) can be simplified to:
wherein Ln represents the distance of the virtual screen from the human eye, and (SL, SR) represents the centre position coordinates of one pair of symmetrically displayed information in the image source of the head-mounted device. Naturally, the centre position coordinates (SL, SR) must lie within the corresponding image display source.
During use of the head-mounted device, to ensure that the virtual screen has a spatial depth of field consistent with the target object, the distance Ln of the virtual screen seen by the user through the device window from the human eye is equal to the distance dis of the target object from the human eye. Expression (16) is therefore equivalent to:
Since sight lines differ from user to user, when a user uses the head-mounted device for the first time, a method similar to that used to calibrate the distance mapping relation δ can be applied to perform a simple calibration of δ, so that the distance mapping relation δ is better adapted to this user and a better display effect is obtained. Likewise, the wearing position deviates slightly each time the device is put on, so a similar method can also be used to correct the mapping relation δ at each wearing.
Specifically, in the present embodiment, the distance mapping relation δ is corrected for different users, or for different use states of the same user, as follows: when the user puts on the head-mounted device, the device starts and the image display source displays a pair of symmetric cross patterns; the eye-tracking system records the sight-line space vectors of the eyeballs while the user gazes at the overlapping cross pattern projected in front of the human eye; and from this data set the device applies a user-adaptive correction to the mapping relation δ between the distance Ln_p of the virtual projection information from the human eye and the symmetric pixel pair (SLp, SRp) on the device's image display source, which can be expressed as:
wherein w represents the correction factor.
Similarly, expression (18) is also equivalent to:
In the above correction process, one group of data is obtained: the coordinates of the symmetric cross patterns displayed on the screen, recorded by the correction test system when the user wears the device for the first time, together with the corresponding sight-line space vectors of the user. From these sight-line space vector data and expression (1), the corresponding projection distance Ln_x, i.e. the first distance, can be calculated. Meanwhile, from the coordinates of the symmetric cross patterns on the current display screen and the mapping relation δ stored in the head-mounted device, the projection distance Ln_y corresponding to these cross-pattern coordinates, i.e. the second distance, can be obtained. Comparing this second distance Ln_y with the aforementioned first distance Ln_x yields a compensation coefficient (i.e. the correction factor) w that minimises the root-mean-square error between the calculated data and the test data.
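With several calibration pairs, the correction factor w minimising the RMS error between the measured first distances and the corrected second distances has a closed least-squares form. A sketch (the scalar model w*Ln_y ≈ Ln_x is an assumption; the patent does not fix the exact form of the correction):

```python
import numpy as np

def correction_factor(first_dist, second_dist):
    """Least-squares w minimising sum((first - w*second)^2), i.e. the
    RMS error between the measured distances Ln_x (first) and the
    corrected mapping-relation predictions w * Ln_y (second)."""
    first = np.asarray(first_dist, dtype=float)
    second = np.asarray(second_dist, dtype=float)
    # Normal-equation solution of the one-parameter least-squares fit.
    return float((first @ second) / (second @ second))
```

The resulting w is then folded into the stored δ, so that subsequent lookups are adapted to the current wearer.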
If correction of the mapping relation δ to adapt it to the user is required, an eye-tracking system needs to be configured on the device at the factory; if such correction is not required, no eye-tracking system needs to be configured on the device at the factory. Eye tracking is a technique that uses various electronic/optical detection means to obtain the subject's current direction of gaze: it takes eye structures and features whose positions remain invariant during eyeball rotation as references, extracts gaze variation parameters between the position-varying features and these invariant features, and then obtains the direction of gaze through a geometric model or a mapping model.
When the user views the external environment through the head-mounted device of the present invention, the distance from the user of targets at different depths in front of the human eye is obtained by one of the four aforementioned methods. The user can issue instructions to the head-mounted device through external control (such as voice control or key control), for example requesting display of the information of one of the objects (say, object A). After receiving the instruction, the head-mounted device displays the information related to the object (say, object A) correspondingly on the device's image source according to the distance of the user-specified object from the user. That is, according to the distance of the object (say, object A) from the user, the device's central processing unit obtains the coordinates (SLp, SRp) of one group of pixel pairs, and the information related to this object that needs to be projected is displayed identically on the left and right of the device's image source, centred at (SLp, SRp) or at a certain offset from (SLp, SRp). Through the device window, the user then sees the virtual projection of the information related to the specified object at a certain distance from the user (this distance being the distance of the object from the user).
The present embodiment further provides a binocular AR head-mounted device capable of automatically adjusting the depth of field, which includes an image display source, a range data acquisition module and a data processing module; the data processing module stores the distance mapping relation δ. The distance mapping relation δ represents the mapping relation between the centre-point coordinate pairs of the left and right groups of effectively displayed information on the device's image display source and the distance dis of the target object from the human eye.
When the user views the external environment through the head-mounted device, the range data acquisition module obtains data related to the distance dis of the target object from the human eye and sends these data to the data processing module. In different embodiments of the invention, the range data acquisition module can be any one of a single camera, a binocular stereo vision system, a depth camera, and an eye-tracking system.
When the range data acquisition module is a single camera, it can obtain the data related to the distance dis of the target object from the human eye through the camera's imaging ratio. When the range data acquisition module is a binocular stereo vision system, it can obtain these data by parallax-based ranging. When the range data acquisition module is an eye-tracking system, it obtains these data according to the aforementioned expression (1). When the range data acquisition module is a depth camera, it can acquire the data related to the distance dis of the target object from the human eye directly.
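Parallax-based ranging with a calibrated binocular rig reduces to the standard triangulation Z = B*f/(x_l - x_r), after which the distance to the eye adds the offset between the rig and the eye. A minimal sketch (the baseline, focal length, and eye-offset values are illustrative assumptions):

```python
def distance_to_eye(x_left, x_right, baseline_m, focal_px, rig_to_eye_m):
    """Standard stereo triangulation: depth from the rig is
    Z = B*f/disparity; the distance dis from the object to the human
    eye adds the rig-to-eye offset along the viewing direction."""
    disparity = x_left - x_right          # pixels; > 0 for objects in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: object at infinity or mismatched points")
    depth = baseline_m * focal_px / disparity
    return depth + rig_to_eye_m

# Example: 6 cm baseline, 800 px focal length, 20 px disparity,
# rig mounted 2 cm in front of the eyes.
dis = distance_to_eye(420.0, 400.0, baseline_m=0.06, focal_px=800.0, rig_to_eye_m=0.02)
```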
The data processing module calculates the distance dis of the target object from the human eye from the data transmitted by the range data acquisition module, and obtains, according to the distance mapping relation δ, the centre-point coordinate pair of the left and right groups of effectively displayed information corresponding to the distance dis. The data processing module then controls the image display source to display the information source image of the virtual information to be shown on the left and right of the image display source, taking the corresponding point-pair coordinate data as the reference position.
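Put together, the display-time flow of the data processing module is: measure dis, evaluate δ to obtain the centre pair, then draw the left and right copies of the source image at those centres. A sketch with hypothetical helper names (`measure_distance`, `delta`, and `blit_centred` are placeholders for the module's internals, not an API defined by the patent):

```python
def show_virtual_info(source_image, measure_distance, delta, blit_centred):
    """One display-update step of the data processing module (sketch).

    measure_distance() -> dis, the object-to-eye distance in metres.
    delta(dis) -> ((SLx, SLy), (SRx, SRy)), the centre-point pair given
        by the stored distance mapping relation.
    blit_centred(image, centre, side) draws the information source image
        on the given half of the image display source, centred there."""
    dis = measure_distance()
    sl, sr = delta(dis)
    blit_centred(source_image, sl, side="left")
    blit_centred(source_image, sr, side="right")
    return sl, sr
```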
It should be noted that, in different embodiments of the invention, when the data processing module controls the image display source to display the information source image of the virtual information with the corresponding point-pair coordinates as the reference position, the information source image may be displayed on the left and right of the image display source either centred at the corresponding point-pair coordinates or at a certain offset from the centre-point coordinates; the present invention is not limited in this respect.
The principle and process by which the head-mounted device acquires and corrects the distance mapping relation δ have been described in detail in the foregoing description and are not repeated here. It should be noted that, in other embodiments of the invention, the distance mapping relation δ can also be obtained or corrected by other reasonable methods; the present invention is likewise not limited in this respect.
It can be seen from the foregoing description that the binocular AR head-mounted device and its depth-of-field control method provided by the present invention can accurately superimpose virtual information onto or near the fixation point of the human eye, so that the virtual information is highly fused with the environment, achieving augmented reality in the true sense.
The solution of the present invention is simple: on the premise that the mapping relation δ is preset in the head-mounted device, only the distance of the target object from the human eye needs to be obtained. The ways of acquiring this distance are various and can be realised by equipment or methods such as binocular ranging or a depth camera; the hardware technology is mature, the reliability is high and the cost is low.
Traditional depth-of-field adjustment always starts from changing the image distance of optical components. The present invention breaks with this traditional thinking: without changing the optical structure, it adjusts the depth of field by adjusting the equivalent centre distance of the left and right groups of effectively displayed information on the image display source. This is pioneering and, compared with changing the optical focal length, more practical.
All features disclosed in this specification, and all methods or process steps disclosed, can be combined in any way, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), unless specifically stated otherwise, can be replaced by other equivalent features or by alternative features serving a similar purpose. That is, unless specifically stated otherwise, each feature is only an example of a series of equivalent or similar features.
The invention is not limited to the aforesaid specific embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to any new method or process step or any new combination thereof.
Claims (19)
1. A depth-of-field control method for a binocular AR head-mounted device, characterised in that the method comprises:
obtaining the distance dis of a target object from the human eye;
according to the distance dis of the target object from the human eye and a preset distance mapping relation δ, obtaining the centre-point coordinate pair of the left and right groups of effectively displayed information corresponding to the distance dis of the target object from the human eye, wherein the preset distance mapping relation δ represents the mapping relation between the centre-point coordinate pairs and the distance dis of the target object from the human eye;
according to the centre-point coordinate pair, displaying the information source image of the virtual information to be shown on the left and right image display sources respectively.
2. The method as claimed in claim 1, characterised in that the distance dis of the target object from the human eye is obtained by a binocular stereo vision system.
3. The method as claimed in claim 2, characterised in that the distance dis of the target object from the human eye is determined according to the following expression:
wherein h represents the distance of the binocular stereo vision system from the human eye, Z represents the distance between the target object and the binocular stereo vision system, T represents the baseline distance, f represents the focal length, and xl and xr represent the x-coordinates of the target object in the left image and the right image respectively.
4. The method as claimed in claim 1, characterised in that the space sight-line information data of the human eye gazing at the target object are detected by an eye-tracking system, and the distance dis of the target object from the human eye is determined according to the space sight-line information data.
5. The method as claimed in claim 4, characterised in that the distance dis of the target object from the human eye is determined according to the following expression:
wherein (Lx, Ly, Lz) and (Lα, Lβ, Lγ) respectively represent the coordinates and direction angles of the target point on the left sight-line vector, and (Rx, Ry, Rz) and (Rα, Rβ, Rγ) respectively represent the coordinates and direction angles of the target point on the right sight-line vector.
6. The method as claimed in claim 1, characterised in that the distance dis of the target object from the human eye is determined by the imaging ratio of a camera.
7. The method as claimed in claim 1, characterised in that the distance dis of the target object from the human eye is determined by a depth camera.
8. The method as claimed in claim 1, characterised in that, in the method, the information source image of the virtual information to be shown is displayed on the left and right image display sources respectively, centred at the centre-point coordinate pair.
9. The method as claimed in claim 1, characterised in that, in the method, the information source image of the virtual information to be shown is displayed on the left and right image display sources respectively, centred at positions offset from the centre-point coordinate pair by a preset orientation.
10. The method as claimed in claim 1, characterised in that the method further comprises: correcting the preset distance mapping relation δ when the user uses the head-mounted device for the first time and/or each time the user uses the head-mounted device.
11. The method as claimed in claim 1, characterised in that the step of correcting the preset distance mapping relation δ comprises:
controlling the image display source of the head-mounted device to display a preset information source image on the left and right image display sources respectively;
obtaining the sight-line space vectors of the human eye while observing the preset information source images, displayed on the left and right image display sources, overlapping in front of the human eye, and obtaining a first distance according to the space sight-line vectors;
obtaining a second distance from the coordinate data of the preset information source image on the left and right image display sources, using the preset distance mapping relation δ;
determining a correction factor according to the first distance and the second distance;
correcting the preset distance mapping relation δ using the correction factor.
12. The method as claimed in claim 1, characterised in that the preset distance mapping relation δ is expressed as:
wherein dis represents the distance of the target object from the human eye, h represents the fitted curve function, and (SL, SR) represents the coordinate data of the centre-point pair of the left and right groups of effectively displayed information.
13. The method as claimed in claim 1, characterised in that, in the method, constructing the preset distance mapping relation δ comprises:
step one, displaying preset test images at preset positions on the left and right image display sources;
step two, obtaining the sight-line space vectors of the user gazing at the virtual test figure, and determining, according to the sight-line space vectors and the display positions of the preset test images, one group of mapping relation data between the preset test image positions and the distance of the corresponding target object from the human eye;
step three, reducing the centre distance of the preset test images successively by a preset rule, and repeating step two until k groups of mapping relation data between the preset test image positions and the distance of the corresponding target object from the human eye are obtained;
step four, fitting the k groups of mapping relation data between the preset test image positions and the distance of the corresponding target object from the human eye, thereby constructing the preset distance mapping relation δ.
14. A binocular AR head-mounted device capable of automatically adjusting the depth of field, characterised in that it comprises:
an optical system;
an image display source, which includes a left image display source and a right image display source;
a range data acquisition module, for obtaining data related to the distance dis of a target object from the human eye;
a data processing module, connected to the range data acquisition module, for determining the distance dis of the target object from the human eye according to the data related to the distance dis, determining, in combination with a preset distance mapping relation δ, the centre-point coordinate pair of the left and right groups of effectively displayed information corresponding to the distance dis of the target object from the human eye, and displaying, according to the centre-point coordinate pair, the information source image of the virtual information to be shown on the left and right image display sources respectively;
wherein the preset distance mapping relation δ represents the mapping relation between the centre-point coordinate pairs and the distance dis of the target object from the human eye.
15. The binocular AR head-mounted device as claimed in claim 14, characterised in that the range data acquisition module comprises any one of the following: a single camera, a binocular stereo vision system, a depth camera, and an eye-tracking system.
16. The binocular AR head-mounted device as claimed in claim 14, characterised in that the data processing module is configured to display the information source image of the virtual information to be shown on the left and right image display sources respectively, centred at positions offset from the centre-point coordinate pair by a preset orientation.
17. The binocular AR head-mounted device as claimed in claim 14, characterised in that the data processing module is configured to display the information source image of the virtual information to be shown on the left and right image display sources respectively, centred at the centre-point coordinate pair.
18. The binocular AR head-mounted device as claimed in claim 14, characterised in that the binocular AR head-mounted device further corrects the preset distance mapping relation δ when the user uses the head-mounted device for the first time and/or each time the user uses the head-mounted device.
19. The binocular AR head-mounted device as claimed in claim 14, characterised in that the preset distance mapping relation δ is expressed as:
wherein dis represents the distance of the target object from the human eye, h represents the fitted curve function, and (SL, SR) represents the coordinate data of the centre-point pair of the left and right groups of effectively displayed information.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2015100298797 | 2015-01-21 | ||
CN201510029879 | 2015-01-21 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106199964A true CN106199964A (en) | 2016-12-07 |
CN106199964B CN106199964B (en) | 2019-06-21 |
Family
ID=56416370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510487699.3A Active CN106199964B (en) | 2015-01-21 | 2015-08-07 | The binocular AR helmet and depth of field adjusting method of the depth of field can be automatically adjusted |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106199964B (en) |
WO (1) | WO2016115874A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092355B (en) * | 2017-04-07 | 2023-09-22 | 北京小鸟看看科技有限公司 | Method, device and system for controlling content output position of mobile terminal in VR (virtual reality) headset |
CN112101275B (en) * | 2020-09-24 | 2022-03-04 | 广州云从洪荒智能科技有限公司 | Human face detection method, device, equipment and medium for multi-view camera |
CN112890761A (en) * | 2020-11-27 | 2021-06-04 | 成都怡康科技有限公司 | Vision test prompting method and wearable device |
CN112914494A (en) * | 2020-11-27 | 2021-06-08 | 成都怡康科技有限公司 | Vision test method based on visual target self-adaptive adjustment and wearable device |
CN114252235A (en) * | 2021-11-30 | 2022-03-29 | 青岛歌尔声学科技有限公司 | Detection method and device for head-mounted display equipment, head-mounted display equipment and medium |
CN114757829A (en) * | 2022-04-25 | 2022-07-15 | 歌尔股份有限公司 | Shooting calibration method, system, equipment and storage medium |
CN117351074B (en) * | 2023-08-31 | 2024-06-11 | 中国科学院软件研究所 | Viewpoint position detection method and device based on head-mounted eye tracker and depth camera |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11202256A (en) * | 1998-01-20 | 1999-07-30 | Ricoh Co Ltd | Head-mounting type image display device |
CN103336575A (en) * | 2013-06-27 | 2013-10-02 | 深圳先进技术研究院 | Man-machine interaction intelligent glasses system and interaction method |
CN103487938A (en) * | 2013-08-28 | 2014-01-01 | 成都理想境界科技有限公司 | Head mounted display |
CN103499886A (en) * | 2013-09-30 | 2014-01-08 | 北京智谷睿拓技术服务有限公司 | Imaging device and method |
CN104076513A (en) * | 2013-03-26 | 2014-10-01 | 精工爱普生株式会社 | Head-mounted display device, control method of head-mounted display device, and display system |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05328408A (en) * | 1992-05-26 | 1993-12-10 | Olympus Optical Co Ltd | Head mounted display device |
US9342610B2 (en) * | 2011-08-25 | 2016-05-17 | Microsoft Technology Licensing, Llc | Portals: registered objects as virtualized, personalized displays |
US20130088413A1 (en) * | 2011-10-05 | 2013-04-11 | Google Inc. | Method to Autofocus on Near-Eye Display |
2015
- 2015-08-07 WO PCT/CN2015/086360 patent/WO2016115874A1/en active Application Filing
- 2015-08-07 CN CN201510487699.3A patent/CN106199964B/en active Active
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107116555A (en) * | 2017-05-27 | 2017-09-01 | 芜湖星途机器人科技有限公司 | Robot guiding movement system based on wireless ZIGBEE indoor positioning |
WO2018232630A1 (en) * | 2017-06-21 | 2018-12-27 | 深圳市柔宇科技有限公司 | 3d image preprocessing method, device and head-mounted display device |
CN108632599A (en) * | 2018-03-30 | 2018-10-09 | 蒋昊涵 | A kind of display control program and its display control method of VR images |
CN108663799A (en) * | 2018-03-30 | 2018-10-16 | 蒋昊涵 | A kind of display control program and its display control method of VR images |
CN108710870A (en) * | 2018-07-26 | 2018-10-26 | 苏州随闻智能科技有限公司 | Intelligent wearable device and Intelligent worn device system |
CN112731665A (en) * | 2020-12-31 | 2021-04-30 | 中国人民解放军32181部队 | Self-adaptive binocular stereoscopic vision low-light night vision head-mounted system |
CN112731665B (en) * | 2020-12-31 | 2022-11-01 | 中国人民解放军32181部队 | Self-adaptive binocular stereoscopic vision low-light night vision head-mounted system |
Also Published As
Publication number | Publication date |
---|---|
CN106199964B (en) | 2019-06-21 |
WO2016115874A1 (en) | 2016-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106199964A (en) | Binocular AR helmet and the depth of field control method of the depth of field can be automatically adjusted | |
US11953692B1 (en) | System for and method of projecting augmentation imagery in a head-mounted display | |
CN105812777B (en) | Binocular AR wears display device and its method for information display | |
US10937129B1 (en) | Autofocus virtual reality headset | |
CN105812778A (en) | Binocular AR head-mounted display device and information display method therefor | |
CN103353663B (en) | Imaging adjusting apparatus and method | |
US6359601B1 (en) | Method and apparatus for eye tracking | |
CN103595986B (en) | Stereoscopic image display device, image processing device, and image processing method | |
US9678345B1 (en) | Dynamic vergence correction in binocular displays | |
US10623721B2 (en) | Methods and systems for multiple access to a single hardware data stream | |
CN105866949A (en) | Binocular AR (Augmented Reality) head-mounted device capable of automatically adjusting scene depth and scene depth adjusting method | |
CN103500446B (en) | A kind of head-wearing display device | |
US20180007350A1 (en) | Binocular See-Through AR Head-Mounted Display Device and Information Display Method Therefor | |
US11022835B2 (en) | Optical system using segmented phase profile liquid crystal lenses | |
US11630507B2 (en) | Viewing system with interpupillary distance compensation based on head motion | |
CN105872527A (en) | Binocular AR (Augmented Reality) head-mounted display device and information display method thereof | |
Jun et al. | A calibration method for optical see-through head-mounted displays with a depth camera | |
CN105866948A (en) | Method of adjusting virtual image projection distance and angle on binocular head-mounted device | |
US10698218B1 (en) | Display system with oscillating element | |
KR20140047620A (en) | Interactive user interface for stereoscopic effect adjustment | |
Viertler et al. | Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment | |
Son et al. | A HMD for users with any interocular distance | |
US10623743B1 (en) | Compression of captured images including light captured from locations on a device or object | |
CN118055324A (en) | Desktop AR positioning system, method, device, equipment and storage medium | |
CN117499613A (en) | Method for preventing 3D dizziness for tripod head device and tripod head device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |