CN209311783U - Headset equipment - Google Patents
Headset equipment
- Publication number
- CN209311783U (application CN201821588604.2U)
- Authority
- CN
- China
- Prior art keywords
- pixel density
- headset equipment
- described image
- camera
- camera lens
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0147—Head-up displays characterised by optical features comprising a device modifying the resolution of the displayed image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The utility model provides a headset device. The headset device includes a head-mounted support structure; a display supported by the head-mounted support structure and configured to display images; a camera module that includes an image sensor and an optical component; a gaze tracking system configured to obtain point-of-gaze information; and control circuitry configured to position the optical component based on the point-of-gaze information. The image sensor is configured to capture images based on incident light from real-world objects, and the optical component is interposed in the optical path of the incident light.
Description
Technical field
The utility model relates generally to headset devices and, more particularly, to headset devices with displays and image sensors.
Background
Electronic devices often include displays and image sensors. In particular, when displaying high-resolution images for a viewer, rendering the entire display at full resolution can be burdensome. Foveated rendering techniques display only the critical portion of the image at full resolution and can help reduce the burden on the display system. In some cases, images of the user's environment may be shown on the display. However, it can be difficult to obtain a high-resolution image of the user's entire environment using an image sensor.
Summary
An electronic device such as a headset device may have a display. In some cases, the display may be a transparent display that allows the user to observe real-world objects through the display while computer-generated images are presented on the display, overlaying computer-generated content on top of the real-world objects. The display may instead be an opaque display that blocks light from real-world objects while the user operates the headset device. In this type of arrangement, a pass-through camera may be used to display real-world objects to the user.

The pass-through camera can capture images of the real world, and the real-world images can be displayed on the display for viewing by the user. Additional computer-generated content (such as text, game content, or other visual content) may optionally be overlaid on the real-world images to provide an augmented reality environment for the user.
The display may be a foveated display. Using the gaze tracking system in the headset device, the device can determine which portion of the display is being viewed directly by the user. Compared with the directly viewed portion of the display, the user is less sensitive to artifacts and low resolution in the portions of the display in the user's peripheral vision. Therefore, the device can display different portions of the image at different resolutions.
The pass-through camera may capture high-resolution image data for some of the content displayed on the display. However, only low-resolution image data may be needed to display low-resolution images in the periphery of the user's field of view. Therefore, the pass-through camera may capture high-resolution images only for the portion of the user's field of view that is being viewed directly, and may capture lower-resolution image data corresponding to real-world objects in the user's peripheral vision. Adjusting the pass-through camera to capture high-resolution image data only in selected portions of the user's field of view can reduce processing load and power consumption in the headset device.
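As a rough illustration of the savings described above, the sketch below compares the pixel count of a full-resolution capture with a foveated capture in which only a central window is captured at full resolution. This is our own illustrative calculation, not from the patent; the window fraction and peripheral downscale factor are arbitrary example values.

```python
# Illustrative estimate (not from the patent): pixels captured per frame for
# full-resolution capture vs. foveated capture.

def pixels_full(width, height):
    """Pixel count when the whole frame is captured at full resolution."""
    return width * height

def pixels_foveated(width, height, fovea_frac=0.2, periphery_scale=0.25):
    """Pixel count when only a central window (fovea_frac of each axis) is
    captured at full resolution and the periphery at reduced resolution."""
    fovea_px = int(width * fovea_frac) * int(height * fovea_frac)
    periphery_area = width * height - fovea_px
    # a linear downscale by periphery_scale cuts pixel count quadratically
    periphery_px = int(periphery_area * periphery_scale * periphery_scale)
    return fovea_px + periphery_px

# e.g. for a 12-megapixel frame, the foveated capture above needs ~10% of
# the pixels of the full-resolution capture.
```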
There are a number of possible arrangements for the pass-through camera that allow the camera module to selectively capture high-resolution images. For example, the forward-facing camera may include an image sensor with two or more pixel densities, a distorting lens, and/or one or more planar or curved mirrors. Any of the components in the camera module may be adjusted to change which portion of the scene is captured with high-resolution image data.
According to an embodiment, a headset device is provided that includes a head-mounted support structure; a display supported by the head-mounted support structure and configured to display images; a camera module that includes an image sensor and an optical component; a gaze tracking system configured to obtain point-of-gaze information; and control circuitry configured to position the optical component based on the point-of-gaze information. The image sensor is configured to capture images based on incident light from real-world objects, and the optical component is interposed in the optical path of the incident light.
According to an embodiment, a headset device is provided that includes a head-mounted support structure; a camera module that includes an image sensor; a gaze tracking system configured to obtain point-of-gaze information; and a positioning device configured to adjust the position of the image sensor based on the point-of-gaze information. The image sensor is configured to generate, from incident light from real-world objects, first image data having a first resolution and second image data having a second resolution, the second resolution being higher than the first resolution.
According to an embodiment, a headset device is provided that includes a head-mounted support structure; a camera module configured to capture images of a real-world scene; a display supported by the head-mounted support structure and configured to display the images of the real-world scene; a gaze tracking system supported by the head-mounted support structure that obtains point-of-gaze information; and a positioning device configured to adjust a component in the camera module based on the point-of-gaze information. The image of the real-world scene has a first-resolution portion and a second-resolution portion, the resolution of the second-resolution portion being higher than that of the first-resolution portion, and adjusting the component adjusts which portion of the real-world scene corresponds to the second-resolution portion of the image.
Brief description of the drawings
Fig. 1 is a schematic diagram of an illustrative headset device in accordance with an embodiment.
Fig. 2 is a top view of an illustrative headset device in accordance with an embodiment.
Fig. 3 is a diagram showing how high-resolution images may be displayed in a first portion of the user's field of view while low-resolution images are displayed in a second portion of the user's field of view, in accordance with an embodiment.
Fig. 4 is a cross-sectional side view of an illustrative camera module that includes an image sensor with varying pixel density and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 5 is a top view of an illustrative image sensor of the type included in Fig. 4, in accordance with an embodiment.
Fig. 6 is a cross-sectional side view of an illustrative camera module that includes a distorting lens and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 7 is a cross-sectional side view of an illustrative camera module that includes a curved mirror and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 8 is a cross-sectional side view of an illustrative camera module that includes an image sensor with varying pixel density and a deformable mirror, in accordance with an embodiment.
Fig. 9 is a cross-sectional side view of an illustrative camera module that includes an image sensor with fixed pixel density and a deformable mirror, in accordance with an embodiment.
Fig. 10 is a cross-sectional side view of an illustrative camera module that includes an image sensor with varying pixel density and a planar mirror and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 11 is a cross-sectional side view of an illustrative camera module that includes an image sensor with fixed pixel density and a planar mirror and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 12 is a cross-sectional side view of an illustrative camera module that includes a lens and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 13 is a cross-sectional side view of an illustrative camera module that includes an image sensor with varying pixel density and a lens and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 14 is a cross-sectional side view of an illustrative camera module that includes a curved mirror and an image sensor with varying pixel density and that is positioned by a positioning device, in accordance with an embodiment.
Fig. 15 is a cross-sectional side view of an illustrative camera module with an image sensor and a lens in a housing, the camera module being positioned by a positioning device, in accordance with an embodiment.
Fig. 16 is a cross-sectional side view of an illustrative camera module that includes a first image sensor for capturing high-resolution images, a second image sensor for capturing low-resolution images, and a beam splitter, in accordance with an embodiment.
Fig. 17 is a cross-sectional side view of an illustrative camera module that includes an image sensor and a lens whose shape may be changed to control how light is directed to the image sensor, in accordance with an embodiment.
Fig. 18 is a flow chart of illustrative operations involved in operating a headset device with a gaze tracking system and a forward-facing camera, in accordance with an embodiment.
Detailed description
This patent application claims priority to U.S. patent application No. 16/130,775, filed September 13, 2018, and provisional patent application No. 62/662,410, filed April 25, 2018, which are hereby incorporated by reference herein in their entireties.
Headset devices and other devices may be used for virtual reality and augmented reality systems. These devices may include portable consumer electronics (for example, portable electronic devices such as cellular telephones, tablet computers, glasses, and other wearable equipment), head-up displays in cockpits and vehicles, display-based equipment (such as projectors and televisions), and the like. Devices such as these may include transparent displays and other optical components. Device configurations in which virtual reality and/or augmented reality content is provided to a user with a head-mounted display are described herein as an example. This is, however, merely illustrative. Any suitable equipment may be used to provide virtual reality and/or augmented reality content to a user.
A headset device worn on a user's head may be used to provide the user with computer-generated content that is overlaid on top of real-world content. With some headset devices, real-world content may be viewed directly by the user (for example, by observing real-world objects through a transparent display panel, or through an optical coupler in a transparent display system that merges light from real-world objects with light from a display panel). Other headset devices may use a configuration in which images of real-world objects are captured by a forward-facing camera and displayed to the user on a display. A forward-facing camera that captures images of the real world and displays those images on a display is sometimes referred to as a pass-through camera.
The pass-through camera may capture high-resolution images to display to the user. However, compared with the directly viewed portion of the display, the user is less sensitive to artifacts and low resolution in portions of the display in the user's peripheral vision. Therefore, to reduce the processing load and power consumption involved in operating the pass-through camera, the pass-through camera may capture high-resolution images only at locations corresponding to where the user is looking directly. Other portions of the captured image (corresponding to the user's peripheral vision) may have lower resolution.
A schematic diagram of an illustrative headset device is shown in Fig. 1. As shown in Fig. 1, headset device 10 (sometimes referred to as head-mounted display 10) may have control circuitry 50. Control circuitry 50 may include storage and processing circuitry for controlling the operation of headset device 10. Circuitry 50 may include storage such as hard-disk-drive storage, nonvolatile memory (e.g., electrically programmable read-only memory configured to form a solid-state drive), volatile memory (e.g., static or dynamic random-access memory), etc. Processing circuitry in control circuitry 50 may be based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio chips, graphics processing units, application-specific integrated circuits, and other integrated circuits. Software code may be stored on the memory in circuitry 50 and run on the processing circuitry in circuitry 50 to implement control operations for headset device 10 (e.g., data-gathering operations, operations involving the adjustment of components using control signals, etc.).
Headset device 10 may include input-output circuitry 52. Input-output circuitry 52 may be used to allow data to be received by headset device 10 from external equipment (e.g., a tethered computer, a portable device such as a handheld device or laptop computer, or other electrical equipment) and to allow the user to supply headset device 10 with user input. Input-output circuitry 52 may also be used to gather information on the environment in which headset device 10 is being operated. Output components in circuitry 52 may allow headset device 10 to provide the user with output and may be used to communicate with external electrical equipment.
As shown in Fig. 1, input-output circuitry 52 may include a display such as display 26. Display 26 may be used to display images for a user of headset device 10. Display 26 may be a transparent display so that the user may observe real-world objects through the display while computer-generated images are presented on the display, overlaying computer-generated content on top of the real-world objects. A transparent display may be formed from a transparent pixel array (e.g., a transparent organic light-emitting diode display panel), or may be formed by a display device that provides images to the user through a beam splitter, holographic coupler, or other optical coupler (e.g., a display device such as a liquid-crystal-on-silicon display). Alternatively, display 26 may be an opaque display that blocks light from real-world objects while the user operates headset device 10. In this type of arrangement, a pass-through camera may be used to display real-world objects to the user. The pass-through camera may capture images of the real world, and the real-world images may be displayed on the display for viewing by the user. Additional computer-generated content (e.g., text, game content, or other visual content) may optionally be overlaid on the real-world images to provide an augmented reality environment for the user. When display 26 is opaque, the display may also optionally display entirely computer-generated content (e.g., without displaying real-world images) to provide a virtual reality environment for the user.

The headset device may optionally include adjustable components stacked in series with display 26. For example, the headset device may include an adjustable polarizer (e.g., a polarizer with switches that allow selected regions of the adjustable polarizer to be configured to serve as vertical-pass linear polarizers, horizontal-pass linear polarizers, or unpolarized regions), tunable lenses (e.g., liquid crystal tunable lenses, tunable lenses based on electro-optic materials, tunable liquid lenses, microelectromechanical-system tunable lenses, or other tunable lenses), adjustable color filters (e.g., adjustable color-cast filters that can be adjusted to exhibit different color casts, and/or monochromatic adjustable-intensity filters with a single color cast), and/or an adjustable opacity system (e.g., a layer with adjustable opacity for providing a dark background when the display is transparent). Display 26 may have any suitable number of display pixels (e.g., 0-1,000; 10-10,000; 1,000-1,000,000; 1,000,000 to 10,000,000; more than 1,000,000; fewer than 1,000,000; fewer than 10,000; fewer than 100; etc.).
Input-output circuitry 52 may include components such as input-output devices 60 for gathering data and user input and for supplying the user with output. Devices 60 may include gaze trackers such as gaze tracker 62 (sometimes referred to as a gaze tracking system or gaze tracking camera) and cameras such as camera 64.

Gaze tracker 62 may include a camera and/or other gaze tracking system components (e.g., light sources that emit beams of light so that reflections of the beams from the user's eyes may be detected) to monitor the user's eyes. One or more gaze trackers 62 may face the user's eyes and may track the user's gaze. A camera in the gaze tracking system may determine the location of the user's eyes (e.g., the centers of the user's pupils), may determine the direction in which the user's eyes are oriented (the user's direction of gaze), may determine the user's pupil size (e.g., so that light modulation and/or other optical parameters, the amount by which these parameters are spatially adjusted, and/or the regions in which one or more of these optical parameters are adjusted may be based on pupil size), may be used to monitor the current focus of the lenses in the user's eyes (e.g., whether the user is focusing in the near field or far field, which may be used to assess whether the user is daydreaming or is thinking strategically or tactically), and/or may determine other gaze information. Cameras in the gaze tracking system may sometimes be referred to as inward-facing cameras, gaze detection cameras, eye tracking cameras, gaze tracking cameras, or eye monitoring cameras. If desired, other types of image sensors (e.g., infrared and/or visible light-emitting diodes and light detectors, etc.) may also be used to monitor the user's gaze. The use of a gaze detection camera in gaze tracker 62 is merely illustrative.
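The gaze tracking step above boils down to converting an eye-camera measurement into a point of gaze on the display. A minimal sketch of that mapping is shown below; this is our own simplification (a per-user linear calibration with made-up constants), not the calibration method of the patent.

```python
# Hypothetical sketch: map a pupil center reported by an inward-facing eye
# camera to a normalized point of gaze on the display. The linear
# calibration constants below are invented for illustration.

def gaze_point(pupil_xy, calib):
    """pupil_xy: (x, y) pupil center in eye-camera pixels.
    calib: dict with 'scale' (sx, sy) and 'offset' (ox, oy).
    Returns the point of gaze in normalized display coordinates [0, 1]."""
    sx, sy = calib["scale"]
    ox, oy = calib["offset"]
    x, y = pupil_xy
    gx = min(max(sx * x + ox, 0.0), 1.0)  # clamp to display bounds
    gy = min(max(sy * y + oy, 0.0), 1.0)
    return gx, gy

# Example calibration for a hypothetical 640x480 eye camera whose field of
# view happens to span the display exactly.
calib = {"scale": (1 / 640, 1 / 480), "offset": (0.0, 0.0)}
```

In practice such a calibration would be fit per user (e.g., by having the user fixate known targets), but the shape of the computation is the same.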
Cameras such as one or more forward-facing cameras 64 (sometimes referred to as forward-facing camera modules 64 or camera modules 64) may be used to capture images of the real world surrounding the user. For example, one or more forward-facing cameras 64 may be used to capture images of real-world objects in front of the user and on the left and right sides of the user's field of view. Images of real-world objects gathered in this way may be presented to the user on display 26 and/or may be processed by control circuitry 50 to determine the positions of electronic devices (e.g., displays, etc.), people, buildings, and other real-world objects relative to the user. The real world may also be analyzed using image processing algorithms. Information from camera 64 may be used in controlling display 26.

Forward-facing camera 64 may serve as a pass-through camera that obtains images of the real world around the user. Real-world images corresponding to the user's field of view (determined using the gaze tracker and the position of the headset device) may then be displayed on display 26. The user thus perceives that they are viewing the real world (with the pass-through camera and display replicating the real-world view).
In addition to adjusting components such as display 26 based on information from gaze tracker 62 and/or forward-facing camera 64, control circuitry 50 may gather sensor data and user input from other input-output circuitry 52 for use in controlling headset device 10. As shown in Fig. 1, input-output devices 60 may include position and motion sensors 66 (e.g., compasses, gyroscopes, accelerometers, and/or other devices for monitoring the location, orientation, and movement of headset device 10; satellite navigation system circuitry such as Global Positioning System circuitry for monitoring the user's location; etc.). For example, control circuitry 50 may use sensors 66 to monitor the current direction in which the user's head is oriented relative to the surrounding environment. Sensors 66 may also be used to monitor movements of the user's head (e.g., moving left and/or right to track on-screen objects and/or to view additional real-world objects).

Input-output devices 60 may also include other sensors and input-output components 70 (e.g., ambient light sensors, force sensors, temperature sensors, touch sensors, buttons, capacitive proximity sensors, light-based proximity sensors, other proximity sensors, strain gauges, gas sensors, pressure sensors, moisture sensors, magnetic sensors, microphones, speakers, audio components, haptic output devices, light-emitting diodes, other light sources, etc.). Circuitry 52 may include wired and wireless communications circuitry 74 that allows headset device 10 (e.g., control circuitry 50) to communicate with external equipment (e.g., remote controls, joysticks and other input controllers, portable electronic devices, computers, displays, etc.) and that allows signals to be conveyed between components (circuitry) at different locations in headset device 10. Headset device 10 may include any other desired components. For example, the headset device may include a battery.
The components of headset device 10 may be supported by a head-mountable support structure such as illustrative support structure 16 of Fig. 2. Support structure 16 may have the shape of the frame of a pair of glasses (e.g., left and right temples and other frame members), may have a helmet shape, or may have another head-mountable configuration. When the device is worn on the user's head, the user may view real-world objects such as object 30 through display 26 in configurations in which display 26 is a transparent display. In configurations in which display 26 is opaque, the user's eyes 12 may be blocked from viewing object 30. Display 26 is supported by support structure 16 and is placed in front of the user's eyes 12 when worn on the user's head.

Support structure 16 may support additional components at additional locations such as locations 38, 40, and 42. For example, components may be mounted on the front of support structure 16 in location 38. Forward-facing cameras 64 and/or sensors and other components in input-output circuitry 52 may be mounted in location 38. Components in location 38 may be used to detect the positions of real-world objects (e.g., object 30) and/or to capture images of the real world. Object 30 may include natural and man-made objects, people, buildings, sources of glare such as reflective objects, the sun, lights, etc.

Input-output devices 60 such as position and motion sensors 66, light detectors, or other desired input-output devices may be mounted in location 40. Components in location 40 may face the user's environment (e.g., outward-facing components that face away from the user). In contrast, components in location 42 may face the user (e.g., inward-facing components that face the user). Input-output devices 60 such as gaze trackers 62 (image sensors), speakers (such as ear speakers) or other audio components that play audio (e.g., audio associated with computer-generated images and/or other content being displayed using display 26, etc.), or other desired input-output devices may be mounted in location 42.
Display 26 may be a foveated display. Using gaze tracking (e.g., using gaze tracker 62 to gather information on the location of the user's gaze on display 26), device 10 can determine which portions of display 26 are being viewed only by the user's peripheral vision and which portions of display 26 are being viewed directly (non-peripherally) by the user (e.g., with the fovea of the user's eye, which corresponds to the central-most 5° of the user's field of view, where visual acuity is highest). Compared with the directly viewed portions of display 26, the user is less sensitive to artifacts and low resolution in the portions of display 26 in the user's peripheral vision. Therefore, device 10 can display different portions of the image at different resolutions.
Fig. 3 shows field of view 90, which corresponds to the user's field of view when wearing headset device 10. The user may be looking at region 94 of display 26. Therefore, images in region 94 of the display may be presented with a relatively high resolution. If desired, images could be presented at high resolution on the display across the user's entire field of view. However, to conserve processing load and power consumption, low-resolution images (e.g., with a lower resolution than that of region 94) may be presented in display areas that the user is not viewing directly (e.g., the user's peripheral vision), such as region 92.

In some cases (e.g., when the device is in a pass-through mode), display 26 displays real-world images corresponding to what the user would see if the headset device were not present. When the device is in the pass-through mode, the entire display may show real-world images captured by a camera in the device (e.g., forward-facing camera 64 of Fig. 1). In this mode, the display may present high-resolution images corresponding to the real world in region 94. Therefore, forward-facing camera 64 must be capable of capturing high-resolution images. However, only low-resolution image data is needed to display low-resolution images in region 92.
If desired, forward-facing camera 64 could simply capture only high-resolution images. Control circuitry 50 could then process the image data to present high-resolution images in region 94 while presenting lower-resolution images in region 92. In other words, some of the captured high-resolution image data would be discarded so that lower-resolution images are presented in region 92. However, capturing excess image data (that is ultimately discarded) may occupy valuable processing and power resources. Therefore, instead of capturing excess high-resolution image data, forward-facing camera 64 may capture high-resolution images only for the portion of the user's field of view that is being viewed directly. Forward-facing camera 64 may capture lower-resolution image data corresponding to real-world objects in the user's peripheral vision. Adjusting forward-facing camera 64 to capture high-resolution image data only in selected portions of the user's field of view can reduce processing load and power consumption in headset device 10.
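Selecting which part of the frame to capture at high resolution reduces to centering a window on the point of gaze and keeping it inside the frame. The sketch below is an illustrative helper under our own assumptions (normalized gaze coordinates, a fixed-size rectangular window), not the mechanism claimed in the patent.

```python
# Illustrative helper (not from the patent): given the point of gaze in
# normalized frame coordinates, return the sub-rectangle of the camera
# frame to capture at high resolution.

def high_res_roi(gaze_xy, frame_w, frame_h, roi_frac=0.25):
    """Center a high-resolution window (roi_frac of each frame axis) on
    the gaze point, clamped so the window stays inside the frame.
    Returns (x0, y0, roi_w, roi_h) in frame pixels."""
    roi_w, roi_h = int(frame_w * roi_frac), int(frame_h * roi_frac)
    cx, cy = gaze_xy[0] * frame_w, gaze_xy[1] * frame_h
    x0 = int(min(max(cx - roi_w / 2, 0), frame_w - roi_w))
    y0 = int(min(max(cy - roi_h / 2, 0), frame_h - roi_h))
    return x0, y0, roi_w, roi_h
```

For a gaze at the center of a 1920x1080 frame this yields a 480x270 window centered in the frame; gazes near an edge slide the window flush against that edge rather than letting it leave the frame.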
There are multiple possible arrangements for camera module 64 (sometimes referred to as an outward-facing camera or imaging system) that allow the camera module to selectively capture high-resolution images. For example, the forward-facing camera may include an image sensor with two or more pixel densities, a distorting lens, and/or one or more planar or curved mirrors. Any of the components in the camera module may be adjusted to change which portion of the scene is captured with high-resolution image data.
FIG. 4 is a cross-sectional side view of an illustrative camera module 64 with an image sensor having a non-constant pixel density across the sensor. As shown in FIG. 4, camera module 64 includes an image sensor 102 having a first pixel density portion 103A and a second pixel density portion 103B. First pixel density portion 103A and second pixel density portion 103B have different respective pixel densities. In particular, the second pixel density portion 103B of image sensor 102 may have a greater pixel density than the first pixel density portion 103A. Second pixel density portion 103B may therefore be referred to as high pixel density portion 103B, and first pixel density portion 103A may be referred to as low pixel density portion 103A. High pixel density portion 103B may have more pixels per inch (PPI) than low pixel density portion 103A. High pixel density portion 103B will capture higher-resolution image data than low pixel density portion 103A.
Camera module 64 may include one or more lenses such as lens 104 for focusing incident light (e.g., light 80) corresponding to the captured real-world scene onto image sensor 102. Some of the incident light (e.g., a first portion of the captured scene) will be received by the high pixel density portion 103B of the image sensor, and some of the incident light (e.g., a second portion of the captured scene) will be received by the low pixel density portion 103A of the image sensor. Accordingly, the resulting first portion of the captured scene will be high-resolution image data, while the resulting second portion of the captured scene will be low-resolution image data.
Camera module 64 may also include a positioning device 106 for adjusting the position of image sensor 102. In particular, positioning device 106 may adjust the position of image sensor 102 to adjust which portion of the incident light (e.g., which portion of the captured scene) is imaged by the high pixel density portion of the image sensor. Arrow 108 shows how the image sensor may be laterally shifted (e.g., in the XY plane) by positioning device 106. Positioning device 106 may position image sensor 102 beneath lens 104 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move image sensor 102 so that the high pixel density portion 103B of the image sensor receives the light corresponding to the user's point of gaze (e.g., the portion of the scene the user is looking at).
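Under a simple thin-lens model (an illustrative assumption; the disclosure does not specify any particular mapping between gaze angle and sensor travel), the lateral shift that places high pixel density portion 103B under the image of the gaze point could be sketched as:

```python
import math

def gaze_to_sensor_shift(gaze_x_deg, gaze_y_deg, focal_length_mm):
    """Lateral XY shift (mm) that would center high pixel density portion
    103B on the image of the gaze direction, using the small-angle thin-lens
    relation x = f * tan(theta). The sign is flipped because a lens inverts
    the image."""
    shift_x = -focal_length_mm * math.tan(math.radians(gaze_x_deg))
    shift_y = -focal_length_mm * math.tan(math.radians(gaze_y_deg))
    return shift_x, shift_y
```

The focal length and the degree inputs here are hypothetical parameters; an actual positioning device would be driven by a calibrated mapping.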
Positioning device 106 may include any desired components. For example, the positioning device may include one or more of a motor (e.g., a servo motor, a gear motor, a brushless motor, etc.), a linear electromagnetic actuator (e.g., a solenoid), a piezoelectric device, an electroactive polymer, a pneumatic actuator, or any other suitable type of actuator. Positioning device 106 may be configured to move image sensor 102 in the XY plane, to move image sensor 102 vertically along the Z axis, and/or to tilt image sensor 102 (so that the image sensor is at an angle relative to the XY plane).
If desired, the components of camera module 64 may be formed in a housing 100 (sometimes referred to as camera module housing 100). Housing 100 may support image sensor 102, lens 104, and/or positioning device 106.
Image sensor 102 may have an increased pixel area to account for the movement of the image sensor beneath lens 104. Specifically, it may be desirable for image sensor 102 to capture all of the incident light corresponding to the captured scene regardless of the position of high pixel density portion 103B. When high pixel density portion 103B is centered beneath lens 104 (as shown in FIG. 4), the periphery 102P of the image sensor may not receive incident light. However, consider the example of FIG. 4 in which the image sensor is laterally shifted along the X axis (e.g., to place high pixel density portion 103B beneath the rightmost edge of lens 104). The peripheral portion of the image sensor may then move into position to receive incident light (e.g., from the leftmost edge of lens 104). Ensuring that image sensor 102 has a larger area than is needed to capture all of the incident light while the sensor is centered therefore ensures that all of the incident light will still be captured even when the sensor is shifted to move the high pixel density portion of the sensor to an edge of the captured scene.
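The required oversize follows directly from the travel range (a sketch with hypothetical dimensions; the disclosure gives no numeric sizes): the sensor must cover the lens's image area plus the maximum shift in each direction.

```python
def required_sensor_side(image_side_mm, max_shift_mm):
    """Side length of a square sensor that still covers the full image cast
    by lens 104 after the sensor is laterally shifted by up to max_shift_mm
    in either direction along an axis."""
    return image_side_mm + 2 * max_shift_mm
```

For example, under these assumptions a 6 mm image with up to 1.5 mm of travel in each direction calls for a 9 mm sensor side.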
FIG. 5 is a top view of image sensor 102, which may have a high pixel density region 103B and a low pixel density region 103A. As shown in FIG. 5, low pixel density region 103A may laterally surround high pixel density region 103B. This example is merely illustrative. If desired, image sensor 102 may include any desired number of different pixel density regions, with each pixel density region having any desired shape and any desired pixel density. If desired, there may be a gradual transition between the pixel densities of adjacent pixel density regions.
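Such a gradual transition could be modeled as a radial density profile (an illustrative sketch; the numeric densities and radii are assumptions, not values from this disclosure):

```python
def pixel_density_ppi(r, r_high, r_low, ppi_high, ppi_low):
    """Pixel density at radial distance r from the center of high pixel
    density region 103B: ppi_high inside r_high, ppi_low beyond r_low, and
    a linear blend across the transition band between the two regions."""
    if r <= r_high:
        return ppi_high
    if r >= r_low:
        return ppi_low
    t = (r - r_high) / (r_low - r_high)
    return ppi_high + t * (ppi_low - ppi_high)
```

A smooth falloff of this kind avoids a visible seam between the high- and low-resolution portions of the captured image.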
The examples of image sensor 102 with different pixel density regions in FIG. 4 and FIG. 5 are merely illustrative. If desired, the camera module may instead use a distorting lens to magnify a portion of the captured scene, thereby obtaining high-resolution image data for that portion of the captured scene. An arrangement of this type is shown in FIG. 6.
As shown in FIG. 6, camera module 64 includes an image sensor 102 that has a fixed pixel density across the sensor. As in FIG. 4, image sensor 102 may receive incident light 80 from a lens. In FIG. 4, however, the light is provided to image sensor 102 with a uniform angular resolution. In contrast, in FIG. 6 the light is provided to image sensor 102 with a modified angular resolution. In particular, light at the center of the lens (for example) may be spread across a larger corresponding area of image sensor 102 than light at the periphery of the lens. As shown in FIG. 6, light corresponding to a first region 110 of the lens may be spread onto a larger region 112 of the image sensor. In other words, the portion of the captured scene received at region 110 of lens 104D is magnified by lens 104D, so the light received at region 110 is spread across more pixels than it would be if the lens did not distort the light. Capturing the same area of incident light with more pixels means that this image data has a higher resolution than the image data from the other portions of the image sensor.
In summary, lens 104D may distort the incident light to optically stretch (e.g., magnify) a selected portion of the captured scene across a larger pixel area than if the light were not distorted (e.g., lens 104D selectively increases the angular resolution of the selected portion of the captured scene). The image sensor therefore obtains high-resolution image data for the selected portion of the captured scene. The rest of the captured scene is not optically stretched (and may be optically compressed). The image sensor therefore obtains low-resolution image data (having at least a lower resolution than the high-resolution image data) for the remainder of the captured scene.
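The effect of the distorting lens can be sketched as a piecewise-linear mapping from field angle to image height (hypothetical slopes; the disclosure does not specify a distortion profile):

```python
def image_height(theta_deg, fovea_deg, stretch, compress):
    """Normalized image height for field angle theta_deg when angles inside
    fovea_deg are optically stretched (slope `stretch`) and angles outside
    are optically compressed (slope `compress`), continuous at the boundary."""
    if theta_deg <= fovea_deg:
        return stretch * theta_deg
    return stretch * fovea_deg + compress * (theta_deg - fovea_deg)

# An assumed 10-degree fovea within a 50-degree half-field, stretched 3x:
fovea_extent = image_height(10, 10, 3.0, 0.5)   # image height used by the fovea
total_extent = image_height(50, 10, 3.0, 0.5)   # total image height
```

With these assumed slopes the fovea covers 20% of the field angle but 60% of the image height, i.e. three times its proportional share of the fixed-density sensor's pixels.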
Camera module 64 may also include a positioning device 106 for adjusting the position of lens 104D. In particular, positioning device 106 may adjust the position of lens 104D to adjust which portion of the incident light (e.g., which portion of the captured scene) is optically stretched by the lens to obtain high-resolution image data. Arrow 108 shows how the lens may be laterally shifted (e.g., in the XY plane) by positioning device 106. Positioning device 106 may be configured to move distorting lens 104D in the XY plane, to move distorting lens 104D vertically along the Z axis, and/or to tilt lens 104D (so that the distorting lens is at an angle relative to the XY plane). Positioning device 106 may position distorting lens 104D based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move distorting lens 104D so that the optically stretched portion of the captured image (e.g., region 110) corresponds to the user's point of gaze (e.g., the portion of the scene the user is looking at).
In another embodiment, additional optical components may be included in camera module 64 so that image sensor 102 can generate high-resolution image data. As shown in FIG. 7, a mirror such as mirror 114 may be interposed in the optical path between lens 104 and image sensor 102. Mirror 114 may have any desired shape (e.g., curved or planar). Additionally, if desired, more than one mirror (e.g., an array of mirrors) may be included in the optical path between lens 104 and image sensor 102.
In FIG. 7, image sensor 102 may be an image sensor with a fixed pixel density (similar to the sensor shown in FIG. 6), and lens 104 may be a lens that does not distort the light (e.g., similar to lens 104 in FIG. 4). However, mirror 114 may distort the incident image light (similar to distorting lens 104D of FIG. 6). In other words, mirror 114 may distort the incident light from lens 104 to optically stretch (e.g., magnify) a selected portion of the captured scene across a larger pixel area than if the light were not distorted. The image sensor therefore obtains high-resolution image data for the selected portion of the captured scene. The rest of the captured scene is not optically stretched (and may be optically compressed). The image sensor therefore obtains low-resolution image data (having at least a lower resolution than the high-resolution image data) for the remainder of the captured scene.
Camera module 64 may also include a positioning device 106 for adjusting the position of mirror 114. In particular, positioning device 106 may adjust the position of mirror 114 to adjust which portion of the incident light (e.g., which portion of the captured scene) is optically stretched by the mirror to obtain high-resolution image data. Arrow 116 shows how the mirror may be rotated by positioning device 106 (e.g., rotated about central axis 118). Positioning device 106 may also be configured to move mirror 114 in the XY plane, to move mirror 114 vertically along the Z axis, and/or to tilt mirror 114. Positioning device 106 may position mirror 114 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move mirror 114 so that the optically stretched portion of the captured image corresponds to the user's point of gaze (e.g., the portion of the scene the user is looking at).
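For the rotation indicated by arrow 116, a standard plane-mirror relation applies: rotating a mirror by an angle deflects the reflected beam by twice that angle. A minimal sketch (assuming rotation purely about axis 118; the function name and degree units are illustrative):

```python
def mirror_rotation_deg(desired_deflection_deg):
    """Rotation of mirror 114 about axis 118 needed to steer the reflected
    beam by desired_deflection_deg: a mirror rotated by phi deflects the
    reflection by 2 * phi, so phi = deflection / 2."""
    return desired_deflection_deg / 2.0
```

This halving is why a small mirror rotation suffices to sweep the stretched region across the full field of view.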
In another embodiment, shown in FIG. 8, a deformable mirror such as deformable mirror 120 may be interposed in the optical path between lens 104 and image sensor 102. In FIG. 8, image sensor 102 has two or more pixel density regions, such as high pixel density region 103B and low pixel density region 103A. Deformable mirror 120 may determine which portion of the captured scene is directed to high pixel density region 103B. In particular, deformable mirror 120 may have two or more states in which the incident light 80 from lens 104 is directed to different locations on image sensor 102. As shown in FIG. 8, deformable mirror 120 has a first state in which the mirror has a first shape 120A and a second state in which the mirror has a second shape 120B. Positioning device 106 may adjust deformable mirror 120 between the different shapes (e.g., 120A and 120B) to control which portion of the captured scene is directed to the high pixel density region 103B of the image sensor.
Positioning device 106 may control the shape of deformable mirror 120 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then control the shape of deformable mirror 120 so that the portion of the captured image corresponding to the user's point of gaze (e.g., the portion of the scene the user is looking at) is directed to the high pixel density region of the image sensor.
The use of a single mirror in FIG. 7 and FIG. 8 is merely illustrative. In FIG. 7 and FIG. 8, an array of mirrors may be used to redirect light between lens 104 and image sensor 102. Positioning device 106 may independently control each mirror in the mirror array (e.g., rotating it as in FIG. 7 or deforming it as in FIG. 8).
The foregoing examples are merely illustrative, and various modifications may be made to the camera module. In particular, the camera module may use any desired combination of a high-distortion lens, a lens without high distortion (sometimes referred to as a low-distortion lens), a deformable mirror, a rotatable mirror, an image sensor with a constant pixel density, and an image sensor with a variable pixel density. Additionally, the positioning device may move any of the components in the camera module in any desired manner.
FIG. 9 shows a camera module with a lens 104 and a deformable mirror 120 controlled by positioning device 106 (similar to the camera module in FIG. 8). However, whereas image sensor 102 in FIG. 8 has a varying pixel density, image sensor 102 in FIG. 9 has a fixed pixel density. In FIG. 9, lens 104 or mirror 120 may optically stretch the incident light to produce high-resolution image data. For example, lens 104 may be a high-distortion lens that magnifies a portion of the captured scene (as shown in FIG. 6). Alternatively, mirror 120 may distort a selected portion of the captured scene (similar to the arrangement described in connection with FIG. 7). Positioning device 106 may control the shape of deformable mirror 120 and/or the position of lens 104. Positioning device 106 may control the components in camera module 64 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66).
In another embodiment, a planar mirror such as planar mirror 114 may be interposed in the optical path between lens 104 and image sensor 102, as shown in FIG. 10. In this embodiment, lens 104 may be a low-distortion lens, and planar mirror 114 may not distort the incident light. Image sensor 102 may therefore be a variable pixel density image sensor with a high pixel density portion 103B and a low pixel density portion 103A, so that high-resolution image data can be captured. Planar mirror 114 is positioned to direct the desired portion of the captured scene to high pixel density portion 103B. The rest of the captured scene is directed to low pixel density portion 103A. The image sensor therefore obtains high-resolution image data for the desired portion of the captured scene and low-resolution image data (having at least a lower resolution than the high-resolution image data) for the remainder of the captured scene.
Camera module 64 may also include a positioning device 106 for adjusting the position of planar mirror 114. In particular, positioning device 106 may adjust the position of planar mirror 114 to adjust which portion of the incident light (e.g., which portion of the captured scene) is received by high pixel density region 103B. Arrow 116 shows how the mirror may be rotated by positioning device 106 (e.g., rotated about central axis 118). Positioning device 106 may also be configured to move mirror 114 in the XY plane, to move mirror 114 vertically along the Z axis, and/or to tilt mirror 114. Positioning device 106 may position mirror 114 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move mirror 114 so that the portion of the captured image directed to high pixel density region 103B corresponds to the user's point of gaze (e.g., the portion of the scene the user is looking at).
FIG. 11 shows an embodiment similar to the embodiment of FIG. 10; both embodiments have a rotatable planar mirror 114. However, whereas lens 104 in FIG. 10 is a low-distortion lens, distorting lens 104D in FIG. 11 magnifies a selected portion of the image. As shown in FIG. 11, distorting lens 104D optically stretches a portion of the captured scene (similar to the arrangement described in connection with FIG. 6). The positioning device may control the position of planar mirror 114 and/or the position of distorting lens 104D based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). FIG. 11 shows an image sensor with a fixed pixel density, but the image sensor may have a varying pixel density if desired.
FIG. 12 shows an embodiment similar to the embodiment of FIG. 7, with a lens 104, a mirror 114 that magnifies a portion of the incident light, and a fixed pixel density image sensor 102. Lens 104 may provide light with a uniform angular resolution to curved mirror 114. Mirror 114 then magnifies a portion of the light and redirects the light toward image sensor 102. However, whereas the positioning device in FIG. 7 controls the position of mirror 114 to control which portion of the scene is magnified for high-resolution image data, positioning device 106 in FIG. 12 controls the position of lens 104 to control which portion of the scene is directed to the magnifying portion of mirror 114.
Arrow 108 shows how the lens may be laterally shifted (e.g., in the XY plane) by positioning device 106. Positioning device 106 may be configured to move lens 104 in the XY plane, to move lens 104 vertically along the Z axis, and/or to tilt lens 104 (so that the lens is at an angle relative to the XY plane). Positioning device 106 may position lens 104 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move lens 104 so that the portion of the captured image directed to the magnifying portion of mirror 114 corresponds to the user's point of gaze (e.g., the portion of the scene the user is looking at).
FIG. 13 shows an embodiment similar to the embodiment of FIG. 4, with a lens 104 and a variable pixel density image sensor 102 having a high pixel density region 103B and a low pixel density region 103A. Lens 104 may provide light with a uniform angular resolution to variable pixel density image sensor 102. However, whereas the positioning device in FIG. 4 controls the position of image sensor 102 to control which portion of the scene is received by high pixel density region 103B, positioning device 106 in FIG. 13 controls the position of lens 104 to control which portion of the scene is directed to the high pixel density region 103B of the image sensor.
Arrow 108 shows how the lens may be laterally shifted (e.g., in the XY plane) by positioning device 106. Positioning device 106 may be configured to move lens 104 in the XY plane, to move lens 104 vertically along the Z axis, and/or to tilt lens 104 (so that the lens is at an angle relative to the XY plane). Positioning device 106 may position lens 104 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move lens 104 so that the portion of the captured image corresponding to the user's point of gaze (e.g., the portion of the scene the user is looking at) is directed to the high pixel density region 103B of the image sensor.
In another embodiment, shown in FIG. 14, camera module 64 may include a lens 104, a mirror 114, and a variable pixel density image sensor 102 similar to the embodiment of FIG. 10. However, whereas mirror 114 in FIG. 10 is planar, mirror 114 in FIG. 14 is curved. Lens 104 in FIG. 14 may be a low-distortion lens. Image sensor 102 may be a variable pixel density image sensor with a high pixel density portion 103B and a low pixel density portion 103A, so that high-resolution image data can be captured. Mirror 114 is positioned to direct the captured scene to the image sensor. A first portion of the captured scene is received and imaged by high pixel density portion 103B, while the remainder of the captured scene is received and imaged by low pixel density portion 103A. The image sensor therefore obtains high-resolution image data for a portion of the captured scene and low-resolution image data (having at least a lower resolution than the high-resolution image data) for the remainder of the captured scene. Curved mirror 114 may optionally also magnify a portion of the image to additionally increase the resolution of the image data.
Camera module 64 may also include a positioning device 106 for adjusting the position of image sensor 102. In particular, positioning device 106 may adjust the position of image sensor 102 to adjust which portion of the incident light (e.g., which portion of the captured scene) is imaged by the high pixel density portion of the image sensor. Arrow 108 shows how the image sensor may be laterally shifted (e.g., in the YZ plane) by positioning device 106. Positioning device 106 may position image sensor 102 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move image sensor 102 so that the high pixel density portion 103B of the image sensor receives the light corresponding to the user's point of gaze (e.g., the portion of the scene the user is looking at).
FIG. 15 shows another embodiment of camera module 64. In FIG. 15, camera module 64 includes a high-distortion lens 104D that focuses light onto image sensor 102. Housing 100 supports image sensor 102 and lens 104D. The image sensor is a fixed pixel density image sensor. Distorting lens 104D magnifies a portion of the captured scene so that high-resolution image data is obtained for that portion of the captured scene. However, whereas positioning device 106 in FIG. 6 moves lens 104D to control which portion of the captured scene is magnified, positioning device 106 in FIG. 15 changes the position of housing 100 (as shown by arrows 124) to control which portion of the captured scene is magnified. Positioning device 106 may rotate or shift the position of housing 100 to change the direction in which lens 104D and image sensor 102 point. Positioning device 106 may position housing 100 based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66). The sensor information may be used to determine the user's point of gaze (e.g., the point the user is looking at). Positioning device 106 may then move housing 100 so that lens 104D magnifies the incident light corresponding to the user's point of gaze (e.g., the portion of the scene the user is looking at).
The example in FIG. 15 of a movable housing used with a static high-distortion lens and a static fixed pixel density image sensor is merely illustrative. In general, any of the foregoing embodiments may include a positioning device that rotates or shifts the position of housing 100. For example, a movable housing as shown in FIG. 15 may include a variable pixel density image sensor, a planar mirror, a curved mirror, a deformable mirror, and/or a low-distortion lens, and the position of any of these components may be adjusted by the positioning device.
In another embodiment, shown in FIG. 16, camera module 64 includes a beam splitter such as beam splitter 126. Beam splitter 126 (e.g., a prism) splits incident light 80 onto two image sensors: image sensor 102H and image sensor 102L. Image sensor 102H may have a higher resolution (e.g., more pixels per inch) than image sensor 102L. Image sensor 102H may therefore sometimes be referred to as a high-resolution image sensor, and image sensor 102L may sometimes be referred to as a low-resolution image sensor. Control circuitry in the head-mounted device (e.g., control circuitry 50 in FIG. 1) may dynamically select which portions of high-resolution image sensor 102H and/or low-resolution image sensor 102L to read out. The image data may then be combined to form a single image with high-resolution image data in the desired location and low-resolution image data in the remaining portions. Control circuitry 50 may select which portion of each sensor to read out based on sensor information (e.g., information from gaze tracker 62 and/or position and motion sensors 66) such as point-of-gaze information.
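The combining step could look like the following sketch (a hypothetical helper; it assumes the low-resolution sensor covers the same field at half the linear resolution and is upsampled by nearest neighbour):

```python
def combine_readouts(high, low, roi):
    """Form a single frame from the two beam-splitter readouts: pixels from
    the high-resolution sensor inside roi (row0, row1, col0, col1), and
    nearest-neighbour-upsampled low-resolution pixels everywhere else.
    high is an N x N grid; low is an (N//2) x (N//2) grid of the same scene."""
    n = len(high)
    row0, row1, col0, col1 = roi
    frame = [[low[r // 2][c // 2] for c in range(n)] for r in range(n)]
    for r in range(row0, row1):
        for c in range(col0, col1):
            frame[r][c] = high[r][c]
    return frame
```

Reading out only the region of interest from sensor 102H (rather than the full array) is what limits the bandwidth and processing cost.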
FIG. 17 shows yet another embodiment of camera module 64. In FIG. 17, the camera module includes a deformable lens. As shown in FIG. 17, image sensor 102 and lens 128 are formed in housing 100 of camera module 64. Lens 128 is a deformable lens (sometimes referred to as a shape-changing lens or adaptable lens). Lens 128 may be controlled (e.g., by positioning device 106) to have a desired shape. For example, lens 128 may change between a first shape 129A and a second shape 129B. The different shapes of lens 128 may each have different angular resolution characteristics while maintaining the same focus. The captured scene can therefore be focused onto image sensor 102 regardless of the shape of the lens. However, the different shapes of the deformable lens allow different portions of the captured scene to be magnified (for high-resolution image data). For example, when the lens has shape 129A, portion 130A of the lens may magnify a first portion of the incident light (e.g., increase its angular resolution relative to the light from the surrounding portions of the lens). When the lens has shape 129B, portion 130B of the lens (different from portion 130A) may magnify a second, different portion of the incident light. The shape of the lens can thus be controlled to select a portion of the incident light for optical stretching, and therefore to obtain high-resolution image data for a selected portion of the captured scene. The lens may change between any desired number of shapes (e.g., two, three, four, more than four, more than ten, fewer than twenty, etc.), with each shape having an associated resolution profile for the image data obtained by image sensor 102.
Lens 128 may be formed in any desired manner that allows the lens to change shape. For example, the lens may be a liquid lens that changes shape based on a volume of liquid. The lens may also be an LCD lens that changes shape based on an applied voltage. If desired, the lens may include microelectromechanical systems (MEMS).
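Selecting among the discrete lens shapes could reduce to choosing the shape whose magnified region lies closest to the gaze point (an illustrative sketch; the shape names are taken from the figure, but the center coordinates and the selection rule are assumptions):

```python
def pick_lens_shape(gaze_point, shape_centers):
    """Return the lens shape whose optically stretched region is centered
    closest to gaze_point. shape_centers maps a shape name (e.g. '129A',
    whose stretched region is part 130A) to the (x, y) field position its
    magnifying portion covers."""
    def dist_sq(name):
        cx, cy = shape_centers[name]
        return (cx - gaze_point[0]) ** 2 + (cy - gaze_point[1]) ** 2
    return min(shape_centers, key=dist_sq)
```

With more shapes available, the same nearest-center rule extends unchanged.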
FIG. 18 is a flow chart of illustrative method steps that may be performed during the operation of a head-mounted device such as head-mounted device 10 of FIG. 1. As shown in FIG. 18, at step 202, control circuitry (e.g., control circuitry 50 in FIG. 1) may collect information from input devices in the head-mounted device. Control circuitry 50 may collect information from any desired input devices. For example, control circuitry 50 may collect information from gaze tracking camera 62, position and motion sensors 66, or any other desired input devices. The information collected at step 202 may include point-of-gaze information (e.g., information indicating where the user is looking).
Next, at step 204, control circuitry 50 may adjust forward-facing camera 64 based on the information obtained during step 202 (e.g., the point-of-gaze information). The control circuitry may adjust the forward-facing camera in any desired manner (e.g., by adjusting the position of a lens, the shape of a lens, the position of a mirror, the shape of a mirror, the position of an image sensor, or the position of the camera module housing). The control circuitry may adjust the forward-facing camera so that it obtains high-resolution image data for the portion of the scene corresponding to the user's point of gaze and low-resolution image data for the portions of the scene corresponding to the periphery of the user's field of view. After the forward-facing camera has been adjusted, the forward-facing camera may capture image data that is subsequently displayed on display 26 of the head-mounted device.
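The two-step loop of FIG. 18 can be sketched as follows (hypothetical callables; the disclosure describes hardware steps, not a software interface):

```python
def run_camera_step(read_gaze, adjust_camera, capture_frame):
    """One pass of the FIG. 18 method: collect gaze information from the
    input devices (step 202), adjust the forward-facing camera so that high
    resolution follows the gaze (step 204), then capture a frame for the
    display. All three arguments are assumed device-supplied callables."""
    gaze_point = read_gaze()
    adjust_camera(gaze_point)
    return capture_frame()
```

In a real device the adjust step would drive positioning device 106 (moving a sensor, lens, mirror, or housing, depending on the embodiment) before the exposure begins.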
According to an embodiment, a head-mounted device is provided that includes a head-mounted support structure, a display that is supported by the head-mounted support structure and configured to display an image, a camera module that includes an image sensor and an optical component, a gaze tracking system configured to obtain point-of-gaze information, and control circuitry configured to position the optical component based on the point-of-gaze information, in which the image sensor is configured to capture the image based on incident light from real-world objects and the optical component is interposed in the optical path of the incident light.
According to another embodiment, the camera module includes a lens, the optical component is a mirror that redirects light from the lens to the image sensor, the image sensor has a first portion with a first pixel density and a second portion with a second pixel density that is higher than the first pixel density, and the control circuitry is configured to place the mirror in a position in which a portion of the incident light corresponding to a user's point of gaze is directed to the second portion of the image sensor.
According to another embodiment, the optical component is a planar mirror and the control circuitry is configured to rotate the planar mirror based on the point-of-gaze information.
According to another embodiment, the optical component is a deformable mirror and the control circuitry is configured to control the shape of the deformable mirror based on the point-of-gaze information.
According to another embodiment, the optical component is a lens.
According to another embodiment, the lens increases the angular resolution of a portion of the incident light.
According to another embodiment, the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density.
According to another embodiment, positioning the optical component controls which portion of the incident light is directed to the second portion of the image sensor.
According to another embodiment, the control circuitry is configured to place the optical component in a position in which the portion of the incident light corresponding to the user's point of gaze is directed to the second portion of the image sensor.
According to another embodiment, the control circuitry is configured to place the optical component in a position in which the image sensor obtains high-resolution image data for a portion of the incident light corresponding to the user's point of gaze.
According to another embodiment, the head-mounted device includes a positioning device configured to adjust the position of the optical component, and the control circuitry is configured to use the positioning device to position the optical component.
According to an embodiment, a head-mounted device is provided that includes a head-mounted support structure, a camera module including an image sensor, a gaze tracking system configured to obtain point-of-gaze information, and a positioning device configured to adjust the position of the image sensor based on the point-of-gaze information, wherein the image sensor is configured to generate, based on incident light from real-world objects, first image data with a first resolution and second image data with a second resolution that is higher than the first resolution.
According to another embodiment, the image sensor has a first portion with a first pixel density and a second portion with a second pixel density that is higher than the first pixel density, the first portion of the image sensor generates the first image data, the second portion of the image sensor generates the second image data, and the positioning device is configured to place the image sensor in a position in which a portion of the incident light corresponding to the user's point of gaze is directed to the second portion of the image sensor.
According to another embodiment, the camera module includes a lens interposed in the path of the incident light.
According to another embodiment, the camera module includes a mirror, the mirror being interposed in the path of the incident light between the lens and the image sensor.
According to another embodiment, the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density.
According to an embodiment, a head-mounted device is provided that includes a head-mounted support structure, a camera module configured to capture an image of a real-world scene, a display supported by the head-mounted support structure and configured to display the image of the real-world scene, a gaze tracking system supported by the head-mounted support structure and configured to obtain point-of-gaze information, and a positioning device configured to adjust a component in the camera module based on the point-of-gaze information, wherein the image of the real-world scene has a first-resolution portion and a second-resolution portion with a higher resolution than the first-resolution portion, and adjusting the component adjusts which portion of the real-world scene corresponds to the second-resolution portion of the image of the real-world scene.
According to another embodiment, the component is the image sensor of the camera module, the image sensor has a first portion with a first pixel density and a second portion with a second pixel density that is higher than the first pixel density, and adjusting the component includes moving the image sensor.
According to another embodiment, the component is a mirror of the camera module.
According to another embodiment, the component is a lens of the camera module, and adjusting the component includes adjusting the shape of the lens.
The foregoing is merely illustrative and various modifications may be made to the described embodiments. The foregoing embodiments may be implemented individually or in any combination.
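The sensor-positioner embodiments described above can likewise be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: it reduces the optics to the assumption that the scene is imaged onto a fixed area of the image plane and that the higher-pixel-density second portion sits at the sensor's center, so recentering is a pure translation.

```python
# Hypothetical sketch of the positioning-device embodiment: translate the
# image sensor so the projection of the user's point of gaze falls on the
# sensor's high-pixel-density second portion (assumed to be at its center).

def sensor_offset_mm(gaze_x: float, gaze_y: float,
                     image_width_mm: float = 6.0,
                     image_height_mm: float = 4.0) -> tuple[float, float]:
    """Translation (dx, dy) that centers the high-density region on the
    gaze point, given a normalized gaze position (0..1 in each axis) and
    an assumed image-plane footprint of image_width_mm x image_height_mm.
    """
    dx = (gaze_x - 0.5) * image_width_mm
    dy = (gaze_y - 0.5) * image_height_mm
    return dx, dy
```

Shifting the sensor by the gaze point's offset from the image center (measured in image-plane millimeters) brings the second, higher-pixel-density portion under the gazed-at part of the incident light, which is the adjustment the positioning device performs.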
Claims (20)
1. A head-mounted device, characterized by comprising:
a head-mounted support structure;
a display, the display being supported by the head-mounted support structure and configured to display images;
a camera module, the camera module including an image sensor and an optical component, the image sensor being configured to capture the images based on incident light from real-world objects, and the optical component being interposed in the path of the incident light; and
a gaze tracking system, the gaze tracking system being configured to obtain point-of-gaze information, wherein the optical component is positionable based on the point-of-gaze information.
2. The head-mounted device according to claim 1, wherein the camera module further includes a lens, wherein the optical component is a mirror that redirects light from the lens to the image sensor, wherein the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density, and wherein the mirror is placed in a position in which a portion of the incident light corresponding to the user's point of gaze is directed to the second portion of the image sensor.
3. The head-mounted device according to claim 1, wherein the optical component is a planar mirror, and wherein the planar mirror is rotated based on the point-of-gaze information.
4. The head-mounted device according to claim 1, wherein the optical component is a deformable mirror, and wherein the shape of the deformable mirror is controlled based on the point-of-gaze information.
5. The head-mounted device according to claim 1, wherein the optical component is a lens.
6. The head-mounted device according to claim 5, wherein the lens increases the angular resolution of a portion of the incident light.
7. The head-mounted device according to claim 1, wherein the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density.
8. The head-mounted device according to claim 7, wherein positioning the optical component controls which portion of the incident light is directed to the second portion of the image sensor.
9. The head-mounted device according to claim 8, wherein the optical component is placed in a position in which the portion of the incident light corresponding to the user's point of gaze is directed to the second portion of the image sensor.
10. The head-mounted device according to claim 1, wherein the optical component is placed in a position in which the image sensor obtains high-resolution image data for a portion of the incident light corresponding to the user's point of gaze.
11. The head-mounted device according to claim 1, further comprising:
a positioning device for positioning the optical component.
12. A head-mounted device, characterized by comprising:
a head-mounted support structure;
a camera module, the camera module including an image sensor, wherein the image sensor is configured to generate, based on incident light from real-world objects, first image data with a first resolution and second image data with a second resolution, the second resolution being higher than the first resolution; and
a gaze tracking system, the gaze tracking system being configured to obtain point-of-gaze information, wherein the position of the image sensor is adjustable based on the point-of-gaze information.
13. The head-mounted device according to claim 12, wherein the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density, wherein the first portion of the image sensor generates the first image data and the second portion of the image sensor generates the second image data, and wherein the image sensor is placed in a position in which a portion of the incident light corresponding to the user's point of gaze is directed to the second portion of the image sensor.
14. The head-mounted device according to claim 12, wherein the camera module further includes a lens interposed in the path of the incident light.
15. The head-mounted device according to claim 14, wherein the camera module further includes a mirror, the mirror being interposed in the path of the incident light between the lens and the image sensor.
16. The head-mounted device according to claim 12, wherein the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density.
17. A head-mounted device, characterized by comprising:
a head-mounted support structure;
a camera module, the camera module being configured to capture an image of a real-world scene, wherein the image of the real-world scene has a first-resolution portion and a second-resolution portion, the second-resolution portion having a higher resolution than the first-resolution portion;
a display, the display being supported by the head-mounted support structure, wherein the display is configured to display the image of the real-world scene; and
a gaze tracking system, the gaze tracking system being supported by the head-mounted support structure and obtaining point-of-gaze information, wherein a component in the camera module is adjustable based on the point-of-gaze information to adjust which portion of the real-world scene corresponds to the second-resolution portion of the image of the real-world scene.
18. The head-mounted device according to claim 17, wherein the component is an image sensor of the camera module, wherein the image sensor has a first portion with a first pixel density and a second portion with a second pixel density, the second pixel density being higher than the first pixel density, and wherein adjusting the component includes moving the image sensor.
19. The head-mounted device according to claim 17, wherein the component is a mirror of the camera module.
20. The head-mounted device according to claim 17, wherein the component is a lens of the camera module, and wherein adjusting the component includes adjusting the shape of the lens.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862662410P | 2018-04-25 | 2018-04-25 | |
US62/662,410 | 2018-04-25 | ||
US16/130,775 US10642049B2 (en) | 2018-04-25 | 2018-09-13 | Head-mounted device with active optical foveation |
US16/130,775 | 2018-09-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN209311783U true CN209311783U (en) | 2019-08-27 |
Family
ID=63858052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201821588604.2U Active CN209311783U (en) | 2018-04-25 | 2018-09-28 | Headset equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN209311783U (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113966482A (en) * | 2019-11-22 | 2022-01-21 | Apple Inc. | Display system having multiple light paths for performing foveation |
CN114450942A (en) * | 2019-09-30 | 2022-05-06 | Kyocera Corporation | Camera, head-up display system, and moving object |
EP4040788A4 (en) * | 2019-09-30 | 2023-11-01 | Kyocera Corporation | Camera, head-up display system, and mobile body |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111971610B (en) | Head-mounted device with active optical foveation | |
US10495885B2 (en) | Apparatus and method for a bioptic real time video system | |
JP4373286B2 (en) | Head-mounted display device | |
JP6364715B2 (en) | Transmission display device and control method of transmission display device | |
JP5093968B2 (en) | camera | |
US20140247286A1 (en) | Active Stabilization for Heads-Up Displays | |
CA2875261C (en) | Apparatus and method for a bioptic real time video system | |
KR20230079411A (en) | Multipurpose cameras for augmented reality and computer vision applications | |
JP7435596B2 (en) | A head-mounted display system, a stereo depth camera operable to capture stereo images, and a method of providing a stereo depth camera operable to capture stereo images | |
CN209311783U (en) | Headset equipment | |
CN116420105A (en) | Low power consumption camera pipeline for computer vision mode in augmented reality goggles | |
KR20180012713A (en) | Eye-gaze detection system, displacement detection method, and displacement detection program | |
JP2005321479A (en) | Head mounted type display device | |
CN109997067A (en) | Use the display device and method of portable electronic device | |
KR20230119003A (en) | Eyewear with push-pull lens set | |
JP2008009490A (en) | Information input device | |
US20200073621A1 (en) | Systems, Devices, Components and Associated Computer Executable Code For Providing Remote Viewing of a Display Associated with a Computational Device | |
US20230185090A1 (en) | Eyewear including a non-uniform push-pull lens set | |
US11768376B1 (en) | Head-mounted display system with display and adjustable optical components | |
JP2000132329A (en) | Device and method for recognizing surface and virtual image solid synthesizer | |
JP4398298B2 (en) | Head-mounted display device | |
JP2005323000A (en) | Information display | |
KR20160149945A (en) | Mobile terminal | |
KR20240005959A (en) | Eyewear electrochromic lens with multiple tint zones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||