CN106484085B - Method of displaying a real-world object in a head-mounted display, and head-mounted display therefor - Google Patents
Method of displaying a real-world object in a head-mounted display, and head-mounted display therefor
- Publication number
- CN106484085B CN106484085B CN201510549225.7A CN201510549225A CN106484085B CN 106484085 B CN106484085 B CN 106484085B CN 201510549225 A CN201510549225 A CN 201510549225A CN 106484085 B CN106484085 B CN 106484085B
- Authority
- CN
- China
- Prior art keywords
- real
- image
- world object
- user
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0132—Head-up displays characterised by optical features comprising binocular systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0132—Head-up displays characterised by optical features comprising binocular systems
- G02B2027/0134—Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Optics & Photonics (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
A method of displaying a real-world object in a head-mounted display, and a head-mounted display therefor, are provided. The method includes: (A) obtaining a binocular view of a real-world object located around a user; and (B) presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed. With the method and the head-mounted display, a user wearing the head-mounted display can still perceive three-dimensional information about surrounding real-world objects, such as their actual three-dimensional position, pose, and related attributes, thereby obtaining an enhanced virtual-field-of-view experience.
Description
Technical field
The present general inventive concept relates to the field of head-mounted display technology and, more particularly, to a method of displaying a real-world object in a head-mounted display and to a head-mounted display that displays a real-world object.
Background art
With the development of electronic technology, the head-mounted display (HMD) is becoming an important next-generation display device with wide practical application in fields such as entertainment, education, and office work. When a user wears a head-mounted display, the eyes observe a virtual field of view constructed by the display and optical system of the head-mounted display, so the user cannot observe real-world objects in the surrounding real environment, which often causes inconvenience in practical use.
Summary of the invention
Exemplary embodiments of the present invention provide a method of displaying a real-world object in a head-mounted display and a head-mounted display therefor, to solve the problem that a user wearing a head-mounted display cannot observe real-world objects in the surrounding real environment and is thereby inconvenienced.
According to an exemplary embodiment of the present invention, a method of displaying a real-world object in a head-mounted display is provided, including: (A) obtaining a binocular view of a real-world object located around a user; and (B) presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, step (A) and/or step (B) are performed when it is determined that the real-world object around the user needs to be presented to the user.
Optionally, the real-world object includes at least one of the following: an object close to the user's body, a marked object, an object specified by the user, an object currently needed by an application running on the head-mounted display, and an object needed to operate a control.
Optionally, it is determined that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that a real-world object around the user matches a preset presentation object; a control that requires the real-world object to perform an operation is detected in an application interface shown in the virtual field-of-view image; a body part of the user is detected to be close to the real-world object; a body part of the user is detected to be moving toward the real-world object; it is determined that an application running on the head-mounted display currently needs the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
Optionally, the method further includes: (C) when at least one of the following conditions is satisfied, presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is not superimposed: a user input requesting that presentation of the real-world object be ended is received; it is determined that the real-world object around the user does not match the preset presentation object; no control that requires the real-world object to perform an operation is detected in the application interface shown in the virtual field-of-view image; it is determined that the body part of the user is far from the real-world object; it is determined that the application running on the head-mounted display no longer needs the real-world object; it is determined that the user has not performed an operation using the real-world object during a preset time period; or it is determined that the user can perform the operation without watching the real-world object.
Optionally, step (A) includes: capturing, by a single image-capture device, an image including the real-world object located around the user, and obtaining the binocular view of the real-world object located around the user according to the captured image.
Optionally, in step (A), a real-world-object image is detected from the captured image, a real-world-object image of another viewpoint is determined based on the detected real-world-object image, and the binocular view of the real-world object is obtained based on the detected real-world-object image and the real-world-object image of the other viewpoint.
Optionally, the step of obtaining the binocular view of the real-world object based on the detected real-world-object image and the real-world-object image of the other viewpoint includes: performing viewpoint correction on the detected real-world-object image and the real-world-object image of the other viewpoint, based on the positional relationship between the single image-capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, step (A) includes: capturing, by an image-capture device, an image including the real-world object located around the user, detecting a real-world-object image from the captured image, and obtaining the binocular view of the real-world object based on the detected real-world-object image, wherein the image-capture device includes a depth camera, or the image-capture device includes at least two single-view cameras.
Optionally, the step of obtaining the binocular view of the real-world object based on the detected real-world-object image includes: performing viewpoint correction on the detected real-world-object image, based on the positional relationship between the image-capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, when a real-world-object image of a desired display object among the real-world objects cannot be detected, the capture angle is widened to recapture an image including the desired display object, or the user is prompted to turn toward the direction of the desired display object so that an image including the desired display object can be recaptured.
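The fallback above is a simple branch, sketched below under the assumption that detection status, hardware capability, and the object's known bearing are available; all names are illustrative:

```python
# Sketch of the recapture fallback: when the desired object is not detected,
# widen the capture angle if the hardware allows it, otherwise prompt the
# user to turn toward the object's bearing. Names are illustrative.

def recapture_action(detected, can_widen_fov, object_bearing_deg):
    if detected:
        return "none"
    if can_widen_fov:
        return "widen_fov"
    return f"prompt_turn:{object_bearing_deg:.0f}"
```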
Optionally, in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real-world object and the virtual scene image.
Optionally, in step (B), when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual object is scaled and/or moved.
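One way to realize the scale-and/or-move behavior is to test the two objects' 3-D bounding boxes for overlap and translate the virtual object until the occlusion clears. The sketch below uses axis-aligned boxes given as (min_xyz, max_xyz) pairs; this representation and the fixed +x translation are illustrative choices, not from the patent:

```python
# Sketch: if a virtual object's 3-D bounding box overlaps the real object's,
# translate the virtual object until the occlusion clears.

def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (min_xyz, max_xyz) tuples."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

def resolve_occlusion(virtual_box, real_box, step=0.1, max_steps=100):
    """Move the virtual object along +x in small steps until it is clear."""
    vmin, vmax = list(virtual_box[0]), list(virtual_box[1])
    for _ in range(max_steps):
        if not boxes_overlap((vmin, vmax), real_box):
            break
        vmin[0] += step
        vmax[0] += step
    return (tuple(vmin), tuple(vmax))
```

A production system would pick the shortest separating direction (or shrink the virtual object) rather than always translating along one axis.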
Optionally, in step (B), the real-world object is displayed as at least one of the following: translucent, as a contour line, or as a 3D mesh of lines.
Optionally, in step (B), the binocular view of the real-world object is added to the virtual field-of-view image of the head-mounted display according to one of the following display modes: displaying only the binocular view of the real-world object without displaying the virtual scene image; displaying only the virtual scene image without displaying the binocular view of the real-world object; fusing the binocular view of the real-world object and the virtual scene image in space; displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form.
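The five display modes can be sketched as a per-pixel compositing rule. Below, images are tiny 2-D grids of labels ("V" = virtual scene, "R" = real-object view, None = empty); this toy representation and the inset placement are purely illustrative:

```python
# Sketch of the five display modes listed above. Images are 2-D lists of
# labels; the single-cell "inset" stands in for a picture-in-picture window.

def composite(mode, virtual, real):
    h, w = len(virtual), len(virtual[0])
    if mode == "real_only":
        return real
    if mode == "virtual_only":
        return virtual
    if mode == "fused":          # real-object pixels win where they exist
        return [[real[y][x] or virtual[y][x] for x in range(w)] for y in range(h)]
    if mode == "real_pip":       # real view as a small inset over the scene
        out = [row[:] for row in virtual]
        out[0][0] = real[0][0]
        return out
    if mode == "virtual_pip":    # virtual scene as a small inset over the real view
        out = [row[:] for row in real]
        out[0][0] = virtual[0][0]
        return out
    raise ValueError(mode)
```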
Optionally, in step (B), real-world objects are added to and/or deleted from the virtual field-of-view image according to a user operation.
Optionally, the method further includes: (D) obtaining a display item about an Internet-of-Things (IoT) device and adding the obtained display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the IoT device: an operation interface, an operating state, notification information, and indication information.
Optionally, the display item about the IoT device is obtained by at least one of the following processes: capturing an image of an IoT device located in the user's real field of view and extracting the display item about the IoT device from the captured image of the IoT device; receiving the display item about the IoT device from an IoT device located inside and/or outside the user's real field of view; or sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view as indication information.
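A minimal data shape for these display items might look as follows; the field names, item kinds, and filtering rule are assumptions for illustration only, since the patent defines no concrete schema:

```python
# Sketch of the IoT display items described above: operation interface,
# operating state, notification, and a direction indicator for devices
# outside the user's real field of view. All names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IoTDisplayItem:
    device_id: str
    kind: str                            # "interface" | "state" | "notification" | "indicator"
    payload: str
    bearing_deg: Optional[float] = None  # set for out-of-view "indicator" items

def items_for_view(items, in_view_ids):
    """Out-of-view devices contribute only their position indicators."""
    kept = []
    for it in items:
        if it.device_id in in_view_ids or it.kind == "indicator":
            kept.append(it)
    return kept
```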
Optionally, the method further includes: (E) remotely controlling the IoT device to perform corresponding processing according to a user operation on the display item.
According to another exemplary embodiment of the present invention, a head-mounted display that displays a real-world object is provided, including: a real-world-object view acquisition device that obtains a binocular view of a real-world object located around a user; and a display device that presents to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the head-mounted display further includes: a display control device that determines whether the real-world object around the user needs to be presented to the user, wherein, when the display control device determines that the real-world object around the user needs to be presented to the user, the real-world-object view acquisition device obtains the binocular view of the real-world object located around the user and/or the display device presents to the user the virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the real-world object includes at least one of the following: an object close to the user's body, a marked object, an object specified by the user, an object currently needed by an application running on the head-mounted display, and an object needed to operate a control.
Optionally, the display control device determines that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that a real-world object around the user matches a preset presentation object; a control that requires the real-world object to perform an operation is detected in an application interface shown in the virtual field-of-view image; a body part of the user is detected to be close to the real-world object; a body part of the user is detected to be moving toward the real-world object; it is determined that an application running on the head-mounted display currently needs the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
Optionally, when at least one of the following conditions is satisfied, the display control device determines that the real-world object does not need to be presented to the user, and the display device presents to the user a virtual field-of-view image on which the binocular view of the real-world object is not superimposed: a user input requesting that presentation of the real-world object be ended is received; it is determined that the real-world object around the user does not match the preset presentation object; no control that requires the real-world object to perform an operation is detected in the application interface shown in the virtual field-of-view image; it is determined that the body part of the user is far from the real-world object; it is determined that the application running on the head-mounted display no longer needs the real-world object; it is determined that the user has not performed an operation using the real-world object during a preset time period; or it is determined that the user can perform the operation without watching the real-world object.
Optionally, the head-mounted display further includes: an image capture device that captures, by a single image-capture device, an image including the real-world object around the user; and a binocular view generation device that obtains, according to the captured image, the binocular view of the real-world object located around the user.
Optionally, the binocular view generation device detects a real-world-object image from the captured image, determines a real-world-object image of another viewpoint based on the detected real-world-object image, and obtains the binocular view of the real-world object based on the detected real-world-object image and the real-world-object image of the other viewpoint.
Optionally, the binocular view generation device performs viewpoint correction on the detected real-world-object image and the real-world-object image of the other viewpoint, based on the positional relationship between the single image-capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, the head-mounted display further includes: an image capture device that captures, by an image-capture device, an image including the real-world object located around the user, wherein the image-capture device includes a depth camera, or the image-capture device includes at least two single-view cameras; and a binocular view generation device that detects a real-world-object image from the captured image and obtains the binocular view of the real-world object based on the detected real-world-object image.
Optionally, the binocular view generation device performs viewpoint correction on the detected real-world-object image, based on the positional relationship between the image-capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, when the binocular view generation device cannot detect a real-world-object image of a desired display object among the real-world objects, the image capture device widens the capture angle to recapture an image including the desired display object, or prompts the user to turn toward the direction of the desired display object so that an image including the desired display object can be recaptured.
Optionally, the head-mounted display further includes: a virtual field-of-view image generation device that generates the virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the head-mounted display further includes: a virtual scene image acquisition device that obtains a virtual scene image reflecting a virtual scene, wherein the virtual field-of-view image generation device generates the virtual field-of-view image on which the binocular view of the real-world object is superimposed by superimposing the binocular view of the real-world object and the virtual scene image.
Optionally, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device scales and/or moves the virtual object.
Optionally, the virtual field-of-view image generation device displays the real-world object as at least one of the following: translucent, as a contour line, or as a 3D mesh of lines.
Optionally, the virtual field-of-view image generation device superimposes the binocular view of the real-world object and the virtual scene image according to one of the following display modes: displaying only the binocular view of the real-world object without displaying the virtual scene image; displaying only the virtual scene image without displaying the binocular view of the real-world object; fusing the binocular view of the real-world object and the virtual scene image in space; displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form.
Optionally, the virtual field-of-view image generation device adds and/or deletes real-world objects added to the virtual field-of-view image according to a user operation.
Optionally, the head-mounted display further includes: a display item acquisition device that obtains a display item about an IoT device and adds the obtained display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the IoT device: an operation interface, an operating state, notification information, and indication information.
Optionally, the display item acquisition device obtains the display item about the IoT device by at least one of the following processes: capturing an image of an IoT device located in the user's real field of view and extracting the display item about the IoT device from the captured image of the IoT device; receiving the display item about the IoT device from an IoT device located inside and/or outside the user's real field of view; or sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view as indication information.
Optionally, the head-mounted display further includes: a control device that remotely controls the IoT device to perform corresponding processing according to a user operation on the display item.
In the method of displaying a real-world object in a head-mounted display and the head-mounted display according to exemplary embodiments of the present invention, a virtual field-of-view image on which the binocular view of the real-world object around the user is superimposed can be presented to the user. While wearing the head-mounted display, the user can therefore still perceive information about surrounding real-world objects, such as their actual three-dimensional position, pose, and related attributes, which makes it convenient for the user to interact with surrounding real-world objects and to complete actions that require visual feedback.
In addition, embodiments of the present invention can determine the proper moment at which the binocular view of a surrounding real-world object needs to be shown to the user, and can show the binocular view of the surrounding real-world object to the user in the virtual field-of-view image in a suitable manner.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the following description, will be apparent in part from the description, or may be learned by practice of the present general inventive concept.
Brief description of the drawings
The above and other objects and features of exemplary embodiments of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings, which illustrate the embodiments by way of example, in which:
Fig. 1 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 2 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 3 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 4 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 5 shows a flowchart of a method of displaying a physical keyboard in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 6 shows an example of a connection between a head-mounted display and a physical keyboard according to an exemplary embodiment of the present invention;
Fig. 7 shows an example in which a physical keyboard needs to be presented to a user according to an exemplary embodiment of the present invention;
Fig. 8 shows an example of prompting a user to turn toward the direction of a keyboard according to an exemplary embodiment of the present invention;
Fig. 9 shows an example of obtaining a binocular view of a keyboard based on a captured image according to an exemplary embodiment of the present invention;
Fig. 10 shows an example of generating a virtual field-of-view image on which the binocular view of a keyboard is superimposed according to an exemplary embodiment of the present invention;
Fig. 11 shows an example of presenting to a user a virtual field-of-view image on which a binocular view of food is superimposed according to an exemplary embodiment of the present invention;
Fig. 12 shows a flowchart of a method of displaying food in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 13 shows an example of an operation key according to an exemplary embodiment of the present invention;
Fig. 14 shows an example of a framing gesture according to an exemplary embodiment of the present invention;
Fig. 15 shows an example of determining that a user needs to eat by detecting a remote-control input operation according to an exemplary embodiment of the present invention;
Fig. 16 shows an example of determining, by a virtual cursor, an object that needs to be presented to a user according to an exemplary embodiment of the present invention;
Fig. 17 shows an example of displaying a real-world object according to an exemplary embodiment of the present invention;
Fig. 18 shows an example of deleting a real-world object added to a virtual field-of-view image according to an exemplary embodiment of the present invention;
Fig. 19 shows an example of displaying a binocular view of a real-world object according to an exemplary embodiment of the present invention;
Fig. 20 shows a flowchart of a method of displaying, in a head-mounted display, an object with which a collision may occur according to an exemplary embodiment of the present invention;
Fig. 21 shows a flowchart of a method of displaying a display item about an IoT device in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 22 shows an example of presenting to a user a virtual field-of-view image on which a display item is superimposed according to an exemplary embodiment of the present invention;
Fig. 23 shows an example of presenting to a user a virtual field-of-view image on which an operation interface of a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 24 shows an example of presenting to a user a virtual field-of-view image on which incoming-call information of a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 25 shows an example of presenting to a user a virtual field-of-view image on which a short message received by a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 26 shows a block diagram of a head-mounted display that displays a real-world object according to an exemplary embodiment of the present invention;
Fig. 27 shows a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 28 shows a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 29 shows a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 30 shows a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention.
Detailed description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, in which like reference numerals refer to like components throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Hereinafter, methods of displaying a real-world object in a head-mounted display according to exemplary embodiments of the present invention are described with reference to Figs. 1 to 4. The methods may be performed by a head-mounted display or implemented by a computer program. For example, a method may be executed by an application installed in the head-mounted display for displaying real-world objects, or by a functional program implemented in the operating system of the head-mounted display. Alternatively, some steps of the method may be performed by the head-mounted display while other steps are performed with the assistance of other equipment or devices outside the head-mounted display; the invention is not limited in this regard.
Fig. 1 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to an exemplary embodiment of the present invention.
As shown in Fig. 1, in step S10, a binocular view of a real-world object located around a user is obtained.
Here, the binocular view is a binocular view for the eyes of the user wearing the head-mounted display. Through the binocular view of the real-world object, the user's brain can recover the depth information of the real-world object and thereby perceive its actual three-dimensional position and pose; that is, the three-dimensional position and pose of the real-world object that the user perceives through the binocular view are consistent with the three-dimensional position and pose that the user would perceive by observing the real-world object directly with the eyes.
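The depth cue that the binocular view supplies can be written down directly: for a rectified stereo pair, depth follows from the triangulation relation Z = f * B / d, where f is the focal length in pixels, B the baseline between the two viewpoints, and d the horizontal disparity in pixels. The following is a minimal sketch with illustrative numbers, not values from the patent:

```python
# Depth from binocular disparity for a rectified stereo pair: Z = f * B / d.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Nearer objects produce larger disparity, which is exactly the cue the brain uses to place the real-world object at its actual distance.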
As an example, the real-world object may be a presentation object preset according to object attributes or a usage scenario, and may include at least one of the following: an object close to the user's body, a marked object, an object specified by the user, an object currently needed by an application running on the head-mounted display, and an object needed to operate a control.
As an example, an image including the real-world object located around the user may be captured by a single photographing device, and the binocular view of the real-world object may be obtained from the captured image.
Here, the single photographing device may be an ordinary photographing device having only one viewing angle, so the captured image carries no depth information. Accordingly, a real-object image may be detected from the captured image, a real-object image of another viewpoint may be determined based on the detected real-object image, and the binocular view of the real-world object may be obtained based on the detected real-object image and the real-object image of the other viewpoint.
Here, the real-object image is the image of the region of the captured image occupied by the real-world object. For example, various existing image recognition methods may be used to detect the real-object image from the captured image.
As an example, viewpoint correction may be performed on the detected real-object image and the real-object image of the other viewpoint based on the positional relationship between the single photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
As another example, the binocular view of the real-world object may be obtained based on a captured stereo image. Specifically, an image including the real-world object located around the user may be captured by a photographing device, a real-object image may be detected from the captured image, and the binocular view of the real-world object may be obtained based on the detected real-object image, wherein the photographing device includes a depth camera, or the photographing device includes at least two single-viewpoint cameras. Here, the at least two single-viewpoint cameras may have overlapping viewing angles, so that a stereo image with depth information can be captured by the depth camera or by the at least two single-viewpoint cameras.
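The depth information recoverable from two overlapping viewpoints follows the standard stereo disparity relation; as a minimal sketch (the focal length and baseline below are illustrative values, not parameters from this specification):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from the disparity between two rectified views.

    Standard pinhole stereo relation: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in meters, and d the disparity
    in pixels between the two images of the same point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A point with 20 px disparity seen by cameras 6 cm apart with f = 600 px
# lies 1.8 m away: 600 * 0.06 / 20 = 1.8.
z = depth_from_disparity(20.0, 600.0, 0.06)
```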
As an example, viewpoint correction may be performed on the detected real-object image based on the positional relationship between the photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
It should be understood that, as an example, the above single photographing device, depth camera, or single-viewpoint camera may be a camera built into the head-mounted display, or an accessory photographing device mounted on the head-mounted display; it may also be, for example, a camera of another device (for example, a smartphone); the invention is not limited in this regard.
Preferably, when the real-object image of a desired display object among the real-world objects cannot be detected, the shooting angle of view may be enlarged to recapture an image including the desired display object.
Alternatively, when the real-object image of the desired display object among the real-world objects cannot be detected, the user may be prompted to turn toward the direction of the desired display object so that an image including it can be recaptured. For example, the user may be prompted by an image, text, audio, video, or the like. As an example, the user may be prompted to turn toward the desired display object based on a pre-stored three-dimensional spatial position of the real-world object, or based on a three-dimensional spatial position of the real-world object obtained via a positioning device, so that an image including the desired display object can be recaptured.
In step S20, a virtual field-of-view image superimposed with the binocular view of the real-world object is presented to the user. In this way, the user can watch the binocular view of the real-world object within the virtual field-of-view image (that is, augmented virtual reality), perceive the actual three-dimensional spatial position and posture of the real-world object, accurately judge the positional relationship between the real-world object and himself or herself as well as the three-dimensional posture of the real-world object, and complete the necessary actions that require visual feedback.
Here, it should be understood that the virtual field-of-view image superimposed with the binocular view of the real-world object may be presented to the user by a display device integrated in the head-mounted display, or by another display device external to the head-mounted display; the invention is not limited in this regard.
As an example, in step S20, a virtual scene image reflecting a virtual scene may be obtained, and the virtual field-of-view image superimposed with the binocular view of the real-world object may be generated by superimposing the binocular view of the real-world object and the virtual scene image. That is, the virtual field-of-view image presented to the user spatially merges the binocular view of the real-world object with the virtual scene image, so that the user can complete the necessary interactions with the real-world object that require visual feedback while normally experiencing the virtual scene of the head-mounted display.
Here, the virtual scene image is an image, corresponding to the application currently running on the head-mounted display, that reflects the virtual scene to be presented to the user in the user's virtual field of view. For example, if the application currently running on the head-mounted display is a virtual motion-sensing game such as boxing or golf, the virtual scene image is an image reflecting the virtual game scene to be presented in the user's virtual field of view; if the application currently running on the head-mounted display is an application for watching movies, the virtual scene image is an image reflecting the virtual theater screen scene to be presented in the user's virtual field of view.
As an example, the binocular view of the real-world object may be added to the virtual field-of-view image of the head-mounted display according to one of the following display modes: displaying only the binocular view of the real-world object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real-world object; spatially merging the binocular view of the real-world object with the virtual scene image for display; displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form.
As an example, the real-world object may be displayed as at least one of translucent, contour lines, or a 3D mesh. For example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the real-world object may be displayed as at least one of translucent, contour lines, or a 3D mesh, so as to reduce its occlusion of the virtual object in the virtual scene image and its influence on viewing the virtual scene image.
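The translucent display mode described above amounts to alpha-blending the real-object view over the virtual scene image. A minimal sketch with NumPy, where the 0.4 opacity is an illustrative choice rather than a value from this specification:

```python
import numpy as np

def blend_translucent(virtual_rgb, real_rgb, real_mask, alpha=0.4):
    """Composite a translucent real-object view over a virtual scene image.

    virtual_rgb, real_rgb: float arrays of shape (H, W, 3) in [0, 1].
    real_mask: boolean array of shape (H, W), True where the real object is.
    alpha: opacity of the real object (0 = invisible, 1 = fully opaque).
    """
    m = real_mask[..., None]  # broadcast the mask over the color channels
    return np.where(m, alpha * real_rgb + (1.0 - alpha) * virtual_rgb,
                    virtual_rgb)

# A white real object over a black scene at 40% opacity yields gray 0.4
# inside the mask, and leaves the scene untouched outside it.
scene = np.zeros((2, 2, 3))
obj = np.ones((2, 2, 3))
mask = np.array([[True, False], [False, False]])
img = blend_translucent(scene, obj, mask)
```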
In addition, as an example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual object may also be scaled and/or moved. For example, only the virtual objects that occlude the real-world object in three-dimensional space may be scaled and/or moved, or, when occlusion exists, all virtual objects in the virtual scene image may be scaled and/or moved. It should be understood that the head-mounted display may automatically judge the occlusion in three-dimensional space between the virtual objects in the virtual scene image and the real-world object, and scale and/or move the virtual objects accordingly. In addition, the virtual objects may also be scaled and/or moved according to the user's operation.
Moreover, preferably, the real-world objects superimposed on the virtual field-of-view image may be added and/or deleted according to the user's operation. That is, according to his or her own needs, the user may superimpose onto the virtual field-of-view image the binocular view of a real-world object not yet displayed there, and/or delete from the virtual field-of-view image the binocular view of a redundant real-world object that does not need to be presented, so as to reduce the influence on viewing the virtual scene image.
Preferably, the virtual field-of-view image superimposed with the binocular view of the real-world object may be presented to the user only under appropriate circumstances. Fig. 2 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head-mounted display shown in Fig. 2 may further include step S30. Step S10 and step S20 may be implemented with reference to the foregoing specific embodiments, which will not be repeated here.
In step S30, it is determined whether the real-world object around the user needs to be presented to the user, wherein step S10 and step S20 are executed in the case where it is determined that the real-world object around the user needs to be presented to the user.
Here, it should be understood that the above steps are not limited to the timing shown in Fig. 2, and may be adjusted appropriately as needed or according to product design. For example, step S10 may be executed continuously to acquire the binocular view of the real-world object around the user in real time, and step S20 may be executed only in the case where step S30 determines that the real-world object around the user needs to be presented to the user.
As an example, in the case where a scene requiring interaction with a real-world object around the user is detected, it may be determined that the real-world object around the user needs to be presented to the user. For example, a scene requiring interaction with a real-world object around the user may include at least one of the following: a scene requiring an input operation to be performed via a real-world object (for example, a scene requiring an input operation to be performed with a keyboard, a mouse, a handle, or the like), a scene requiring a collision with a real-world object to be avoided (for example, a scene requiring an approaching person to be dodged), and a scene requiring a real-world object to be grasped (for example, a scene of eating or drinking).
According to an exemplary embodiment of the present invention, the timing of presenting the real-world object may be determined according to various situations. As an example, it may be determined that the real-world object around the user needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that the real-world object around the user matches a preset presentation object; a control requiring an operation to be performed using a real-world object is detected in the application interface displayed in the virtual field-of-view image; it is detected that a body part of the user approaches the real-world object; it is detected that a body part of the user moves toward the real-world object; it is determined that the application running on the head-mounted display currently needs to use the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
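The decision of step S30 can be sketched as a simple disjunction over such trigger conditions; the condition names below are illustrative, not identifiers from this specification:

```python
def should_present_real_object(conditions):
    """Return True when at least one trigger condition for presenting
    the real-world object is satisfied (the decision of step S30).

    conditions: dict mapping condition names to booleans, e.g. whether a
    presentation request was received, whether a body part approaches the
    object, whether the running application needs the object, etc.
    """
    return any(conditions.values())

triggers = {
    "user_requested_presentation": False,
    "object_matches_preset": False,
    "interface_control_needs_object": True,  # e.g. a text-input dialog
    "body_part_near_object": False,
    "app_currently_needs_object": False,
}
present = should_present_real_object(triggers)  # True: one trigger holds
```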
Through the above method, it can be determined at which moments the user needs to see the surrounding real-world objects, that is, the timing for presenting the binocular view of the surrounding real-world objects to the user, so that the user can learn the actual three-dimensional spatial position and posture of the surrounding real-world objects in a timely manner.
Preferably, the presentation to the user of the virtual field-of-view image superimposed with the binocular view of the real-world object may be terminated under appropriate circumstances. Fig. 3 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head-mounted display shown in Fig. 3 may further include steps S40 and S50. Step S10 and step S20 may be implemented with reference to the foregoing specific embodiments, which will not be described in detail here.
In step S40, it is determined whether the real-world object around the user needs to continue to be presented to the user, wherein step S50 is executed in the case where it is determined that the real-world object around the user does not need to continue to be presented to the user.
As an example, when the end of a scene requiring interaction with a real-world object around the user is detected, it may be determined that the real-world object around the user does not need to continue to be presented to the user.
As an example, it may be determined that the real-world object around the user does not need to continue to be presented to the user when at least one of the following conditions is satisfied: a user input for terminating presentation of the real-world object is received; it is determined that the real-world object around the user does not match the preset presentation object; no control requiring an operation to be performed using the real-world object is detected in the application interface displayed in the virtual field-of-view image; it is determined that the body part of the user moves away from the real-world object; it is determined that the application running on the head-mounted display currently does not need to use the real-world object; it is determined that the user has performed no operation using the real-world object during a preset time period; or it is determined that the user can perform the operation without watching the real-world object.
As for the user input requesting presentation of the real-world object and/or terminating its presentation, as an example, it may be realized by at least one of the following: a contact operation, a physical button operation, a remote-control command input operation, a voice-control operation, a gesture action, a head action, a body action, a gaze action, a touch action, or a grasp action.
As for the preset presentation object, it may be an object set by default in the head-mounted display as needing to be presented, or an object the user has set as needing to be presented according to his or her own needs. For example, the preset presentation object may be food, tableware, the user's hands, an object bearing a specific label, a person, or the like.
In step S50, a virtual field-of-view image not superimposed with the binocular view of the real-world object is presented to the user.
Through the above method, in the case where the real-world object around the user does not need to be presented to the user, a virtual field-of-view image not superimposed with the binocular view of the real-world object can be presented to the user (that is, only the virtual scene image is presented), so as not to affect the user's viewing of the virtual scene image.
Fig. 4 shows a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head-mounted display shown in Fig. 4 may further include step S60. Step S10 and step S20 may be implemented with reference to the foregoing specific embodiments, which will not be repeated here.
In step S60, a display item about an Internet-of-Things device is obtained, and the obtained display item is superimposed on the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet-of-Things device: an operation interface, an operation state, notification information, or indication information.
Here, the notification information may be information such as text, audio, video, or an image. For example, if the Internet-of-Things device is a communication device, the notification message may be text information about a missed call; if the Internet-of-Things device is an access-control device, the notification message may be a captured monitoring image.
The indication information is information such as text, audio, video, or an image used to direct the user to find the Internet-of-Things device. For example, the indication information may be an arrow indicator, from whose direction the user can learn the bearing of the Internet-of-Things device relative to himself or herself; the indication information may also be text indicating the relative position of the user and the Internet-of-Things device (for example, "the communication device is two meters to your front left").
As an example, the display item about the Internet-of-Things device may be obtained by at least one of the following processes: capturing an image of an Internet-of-Things device located within the user's true field of view, and extracting the display item about the Internet-of-Things device from the captured image; receiving the display item about the Internet-of-Things device from an Internet-of-Things device located within and/or outside the user's true field of view; or sensing the position, relative to the head-mounted display, of an Internet-of-Things device located outside the user's true field of view and using it as the indication information.
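Turning a sensed device position into indication text such as the "two meters to your front left" example above can be sketched as follows; the coordinate convention (x to the user's right, z forward, positions in meters in the head-mounted display's frame) is an assumption for illustration:

```python
import math

def indication_text(device_xz, device_name):
    """Describe a device's bearing relative to the user from its position
    in the head-mounted display's frame (x: right, z: forward, meters)."""
    x, z = device_xz
    distance = math.hypot(x, z)
    side = "right" if x > 0 else "left"
    depth = "front" if z > 0 else "back"
    return "%s is at your %s %s, %.1f meters away" % (
        device_name, depth, side, distance)

# A device 1.2 m to the left and 1.6 m ahead is 2.0 m to the front left.
text = indication_text((-1.2, 1.6), "communication device")
```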
In addition, the method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention may further include: remotely controlling the Internet-of-Things device to execute corresponding processing according to the user's operation on the display item.
Through the above method, the user can learn relevant information about the surrounding Internet-of-Things devices while using the head-mounted display, and can also remotely control an Internet-of-Things device to execute corresponding processing.
Hereinafter, specific implementations of the above method will be described in conjunction with specific application scenarios. It should be understood that the following specific implementations are not limited to their respective application scenarios and can be fully applied in different scenarios; the specific implementations described for different scenarios may also be combined with one another, without restriction.
Hereinafter, an application scenario of displaying a physical keyboard in a head-mounted display will be described with reference to Fig. 5 to Fig. 10. It should be understood that although the following application scenario takes a physical keyboard as an example, it is equally applicable to similar interactive devices, for example, a mouse, a remote controller, a game handle, and the like.
Fig. 5 shows a flowchart of a method of displaying a physical keyboard in a head-mounted display according to an exemplary embodiment of the present invention. Here, all steps of the method may be executed by the head-mounted display, in which case the head-mounted display may be connected to the physical keyboard in a wired or wireless manner.
In addition, for the method shown in Fig. 5, part of the steps may be executed by the head-mounted display and the remaining steps by a processor outside the head-mounted display. Fig. 6 shows an example of the connection between the head-mounted display and the physical keyboard according to an exemplary embodiment of the present invention. As shown in Fig. 6, the head-mounted display and the physical keyboard may each be connected to the processor in a wired or wireless manner; that is, part of the steps of the method are executed by the head-mounted display, and the remaining steps are executed by the processor outside the head-mounted display.
Specifically, as shown in Fig. 5, in step S101, it is determined whether the physical keyboard around the user needs to be presented to the user.
Whether the physical keyboard around the user needs to be presented to the user may be determined according to the following implementations. As an example, in the case where a control requiring an operation to be performed using a real-world object is detected in the application interface displayed in the virtual field-of-view image, it may be determined that the physical keyboard around the user needs to be presented to the user. For example, the attribute information of all controls in the application interface displayed in the virtual field-of-view image may be read, and whether there is a control requiring an operation to be performed via an interactive device may be determined according to the read attribute information. In the case where it is determined that such a control exists, it may be determined that the interactive device around the user needs to be presented to the user, and the physical keyboard may be set as the interactive device needing to be presented to the user. Fig. 7 shows an example in which the physical keyboard needs to be presented to the user according to an exemplary embodiment of the present invention. As shown in Fig. 7, if a control requiring an input operation to be performed using the interactive device is detected in the application interface displayed in the virtual field-of-view image, such as the dialog box prompting input of text information shown in (a) of Fig. 7, or the dialog box prompting a click to start shown in (b) of Fig. 7, it may be determined that the physical keyboard around the user needs to be presented to the user so that the user can perform the corresponding input operation.
As another example, in the case where it is determined that the application running on the head-mounted display currently needs to use the physical keyboard, it may be determined that the physical keyboard around the user needs to be presented to the user. Here, the interactive device currently needing to be used may be determined according to the application. For example, if the application running on the head-mounted display is a virtual game application that currently needs the keyboard to control the game character, it may be determined that the physical keyboard around the user needs to be presented to the user. Moreover, the physical keyboard determined as needing to be presented to the user may be added to the preset presentation object list.
As another example, in the case where a user input requesting presentation of the physical keyboard is received, it may be determined that the physical keyboard needs to be presented to the user. Here, the user input may be a contact operation, a physical button operation, a remote-control command input operation, a voice-control operation, a gesture action, a head action, a body action, a gaze action, a touch action, a grasp action, or the like.
As for the physical button operation, the contact operation, and the remote-control command input operation, it may be an operation on a physical button of the head-mounted display, an operation on a key on a touch screen, or an input operation on another device capable of remotely controlling the head-mounted display (for example, a handle). For example, when a message event of physical button a is detected, it may be determined that the physical keyboard needs to be presented to the user; when a message event of physical button b is detected, it may be determined that the physical keyboard does not need to be presented to the user. In addition, whether the physical keyboard needs to be presented may also be toggled by operating the same physical button.
As for the gesture operation, a user gesture indicating that the physical keyboard needs to be presented may be detected by the photographing device to determine that the physical keyboard needs to be presented to the user. For example, when gesture a, indicating that the physical keyboard needs to be presented, is detected, it may be determined that the physical keyboard needs to be presented to the user; when gesture b, indicating that the physical keyboard does not need to be presented, is detected, it may be determined that the physical keyboard does not need to be presented to the user. In addition, whether the physical keyboard needs to be presented may also be toggled by the same gesture operation.
As for the head action, body action, and gaze action, a user posture indicating that the physical keyboard needs to be presented, for example, a head rotation or a gaze direction, may be detected by the photographing device to determine that the physical keyboard needs to be presented to the user. For example, when it is detected that the user's gaze satisfies condition a (for example, the user's gaze is directed at the dialog box prompting input of text information in the virtual field-of-view image), it may be determined that the physical keyboard needs to be presented to the user; when it is detected that the user's gaze satisfies condition b (for example, the user's gaze is directed at a virtual object or a virtual movie screen in the virtual field-of-view image), it may be determined that the physical keyboard does not need to be presented to the user. Here, condition a and condition b may be complementary conditions or non-complementary conditions.
As for the voice-control operation, the user's voice may be collected by a microphone, and the user's voice command may be recognized by speech recognition technology to determine whether the physical keyboard needs to be presented to the user.
As another example, in the case where a hand placed above the physical keyboard is detected, it may be determined that the physical keyboard needs to be presented to the user. For example, the photographing device may detect whether there is a hand around the user (for example, using a skin-color detection method), whether there is a keyboard, and whether the hand is on the keyboard; when all three conditions are satisfied, it may be determined that the physical keyboard needs to be presented to the user, and when any one of the three conditions is not satisfied, it may be determined that the physical keyboard does not need to be presented to the user. Whether there is a hand and whether there is a keyboard around the user may be detected simultaneously or sequentially, in either order; when a hand and a keyboard are detected around the user, whether the hand is on the keyboard is then further detected.
In the case where step S101 determines that the keyboard needs to be presented to the user, step S102 is executed. In step S102, an image around the user is captured by the photographing device, and a keyboard image is detected from the captured image.
As an example, feature points may be detected in the captured image, and the keyboard image may be detected by matching the detected feature points with pre-stored feature points of a keyboard image. For example, according to the image coordinates, in the captured image, of the feature points that match the pre-stored keyboard image feature points, and the coordinates of the pre-stored keyboard image feature points, the image coordinates of the four corner points of the keyboard in the captured image may be determined; the contour of the keyboard in the captured image may then be determined based on the image coordinates of the four determined corner points, and thereby the keyboard image in the captured image. Here, the feature points may be Scale-Invariant Feature Transform (SIFT) feature points, or other feature points known to those skilled in the art. Correspondingly, the same or a similar method may be used to calculate the image coordinates of the contour points of an arbitrary object (that is, points on the contour of the object) in the captured image. Moreover, it should be understood that the keyboard image may also be detected from the captured image by other means.
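Mapping stored model points of the keyboard to the captured image through matched feature points can, for the planar case, be sketched with a direct-linear-transform homography estimate; this is a generic computer-vision routine, not code from this specification:

```python
import numpy as np

def fit_homography(model_pts, image_pts):
    """Estimate the 3x3 homography H with image ~ H * model (DLT).

    model_pts, image_pts: (N, 2) arrays of matched 2D points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(model_pts, image_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def map_point(h, pt):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Four matched corners of a synthetic keyboard: a pure shift by (50, 20).
model = np.array([[0, 0], [400, 0], [400, 150], [0, 150]], dtype=float)
image = model + np.array([50.0, 20.0])
H = fit_homography(model, image)
top_left = map_point(H, (0, 0))  # recovered keyboard corner in the image
```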
Below, taking the four corner points of the keyboard as an example, the process of calculating the contour of the keyboard in the captured image is described. Suppose the coordinates of the pre-stored keyboard image feature points are P_world (in the local coordinate system of the keyboard), the coordinates of the top-left vertex of the contour of the pre-stored keyboard image are P_corner (in the local coordinate system of the keyboard), the coordinates of the feature points in the captured image that match the pre-stored keyboard image feature points are P_image, the transformation from the local coordinate system of the keyboard to the coordinate system of the photographing device is represented by R and t, where R represents rotation and t represents translation, and the projection matrix of the photographing device is K. Then:

P_image = K * (R * P_world + t)    (1),

By substituting the coordinates of the pre-stored keyboard image feature points and of the matching feature points in the captured image into formula (1), R and t can be solved. The coordinates of the top-left vertex of the keyboard in the captured image are then K * (R * P_corner + t), and correspondingly the coordinates of the other three corner points of the keyboard in the captured image can be obtained; connecting them yields the contour of the keyboard in the captured image. Correspondingly, the coordinates of the contour points of any object in the captured image can be calculated, thereby obtaining the contour of the projection of the object on the captured image.
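Given R and t solved from formula (1), projecting the stored corner coordinates into the captured image is a direct application of the same formula; the intrinsic matrix and pose below are synthetic values for illustration only:

```python
import numpy as np

def project(K, R, t, p_world):
    """Project a 3D point in the keyboard's local frame into the image,
    following formula (1): P_image = K * (R * P_world + t)."""
    p_cam = R @ p_world + t          # keyboard frame -> camera frame
    uvw = K @ p_cam                  # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]          # perspective division

K = np.array([[500.0, 0.0, 320.0],   # synthetic pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # keyboard facing the camera
t = np.array([0.0, 0.0, 2.0])        # 2 m in front of the camera
corner = np.array([0.1, 0.05, 0.0])  # top-left corner in keyboard frame (m)
uv = project(K, R, t, corner)        # pixel position of that corner
```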
In addition, if the keyboard image is not detected from the captured image, the shooting angle of view may be enlarged (for example, using a wide-angle photographing device) to recapture the image around the user and detect the keyboard image therefrom; alternatively, the user may be prompted to turn toward the direction of the keyboard so that an image including the keyboard can be recaptured, the direction of the keyboard being determined, for example, from the position information of the keyboard detected and memorized in a previously captured image, or by a wireless positioning method (for example, Bluetooth transmission, a Radio Frequency Identification (RFID) tag, infrared, ultrasonic, magnetic field, or the like). Fig. 8 shows an example of prompting the user to turn toward the direction of the keyboard according to an exemplary embodiment of the present invention. As shown in Fig. 8, a direction indication image (for example, an arrow) may be superimposed on the virtual field-of-view image to direct the user to gaze in that direction.
In step S103, viewpoint correction is performed on the detected keyboard image based on the positional relationship between the photographing device and the user's two eyes to obtain the binocular view of the keyboard. For example, a homography transformation is performed on the detected keyboard image according to the rotation and translation relationship between the coordinate system of the photographing device and the coordinate systems of the user's two eyes, so as to obtain the binocular view of the keyboard. The rotation and translation relationship between the coordinate system of the photographing device and the coordinate systems of the user's two eyes may be calibrated offline, or calibration data provided by the manufacturer may be read and used.
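For the planar keyboard, the viewpoint correction just described can be sketched with the standard plane-induced homography between the camera view and an eye view; the matrices below are synthetic values, not calibration data from this specification:

```python
import numpy as np

def plane_homography(K_cam, K_eye, R, t, n, d):
    """Homography warping the image of a plane from the camera to an eye.

    R, t: rotation and translation from the camera frame to the eye frame.
    n: unit normal of the plane in the camera frame; d: its distance.
    Standard plane-induced homography: H = K_eye (R - t n^T / d) K_cam^-1.
    """
    return K_eye @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_cam)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # eye axes parallel to the camera's
t = np.array([0.03, 0.0, 0.0])     # eye offset 3 cm sideways
n = np.array([0.0, 0.0, 1.0])      # keyboard plane facing the camera
H = plane_homography(K, K, R, t, n, d=2.0)  # plane 2 m away

# Warping the principal point: a pure sideways offset shifts the plane's
# image horizontally by f * tx / d = 500 * 0.03 / 2 = 7.5 px (320 -> 312.5).
p = H @ np.array([320.0, 240.0, 1.0])
u = p[0] / p[2]
```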
As an example, if the photographing device used in step S102 is a single single-viewpoint photographing device, a keyboard image of another viewpoint may be determined based on the detected keyboard image, and viewpoint correction may then be performed on the detected keyboard image and the keyboard image of the other viewpoint based on the positional relationship between the single photographing device and the user's two eyes, so as to obtain the binocular view of the keyboard.
Here, since the photographing device used is a single single-viewpoint photographing device, the detected keyboard image has only one viewpoint and needs to be converted into a stereo image with depth information. To this end, the keyboard image of another viewpoint needs to be synthesized by calculation from the single-viewpoint keyboard image, so as to obtain a stereo keyboard image. For example, the keyboard may be modeled as a planar rectangle, and its position and posture in three-dimensional space calculated. Specifically, the position and posture of the keyboard in the three-dimensional coordinate system of the single-viewpoint photographing device may be sought from the homography relationship; when the rotation and translation parameters between the single-viewpoint photographing device and the two viewpoints of the human eyes are known, the keyboard can be projected into the left-eye view and the right-eye view respectively, so that the binocular image added to the virtual field-of-view image has a stereoscopic effect and forms a visual cue that correctly conveys the actual physical position of the keyboard. For an object with a more complicated shape, a piecewise planar model may be used to approximate the object surface; its position and posture are then estimated with a similar method, and the binocular view of the object is regenerated by projection.
The computation of the binocular view of the keyboard is illustrated below, taking a keyboard image with a single viewpoint as the example. The three-dimensional coordinates of the feature points on the keyboard (in the local coordinate system of the keyboard) are known; they may be measured in advance, or obtained by capturing multiple images from different angles and performing three-dimensional reconstruction with stereo vision techniques. Assume that the three-dimensional coordinates of a feature point on the keyboard are P_obj in the local coordinate system of the keyboard, that its coordinates in the coordinate system of the capture device are P_cam, that the rotation and translation from the local coordinate system of the keyboard to the coordinate system of the capture device are R and t respectively, that the rotations and translations of the user's left-eye and right-eye coordinate systems relative to the coordinate system of the capture device are R_l, t_l and R_r, t_r respectively, and that the projection of the feature point in the captured image is P_img. The intrinsic matrix K of the capture device can be obtained by calibration in advance.
Using the observed projection points as constraints, R and t are solved from:
P_img = K*P_cam = K*(P_obj*R + t)    (2),
The projection equation of the left-eye image is then:
P_left = K*(P_obj*R_l + t_l)    (3),
Since the points P_obj lie in one plane, P_img and P_left satisfy a homography; a transformation matrix H can therefore be found satisfying P_left = H*P_img. Using the transformation matrix H, the detected keyboard image I_cam can be converted into the image I_left seen by the left eye. Correspondingly, the right-eye image can be obtained by the same method.
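The relation P_left = H*P_img can be checked numerically. The following sketch is illustrative, not from the patent: the intrinsic matrix, the feature-point layout, and the eye baseline are assumed values. It projects planar keyboard feature points into the capture view and a left-eye view, then recovers H by a direct linear transform (DLT).

```python
import numpy as np

# Illustrative intrinsic matrix K.
K = np.array([[700.0, 0, 320],
              [0, 700.0, 240],
              [0, 0, 1]])

def project(P, R, t):
    """Pinhole projection of Nx3 object points given pose R, t of the view."""
    X = P @ R.T + t              # into the view's coordinate system
    p = X @ K.T                  # apply intrinsics
    return p[:, :2] / p[:, 2:]   # perspective divide -> Nx2 pixels

def fit_homography(src, dst):
    """Direct linear transform: H such that dst ~ H @ src (Nx2 pixels each)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)     # null-space vector of A
    return H / H[2, 2]
```

Because the feature points are coplanar, the homography fitted from the two sets of projections maps every capture-view pixel of the keyboard to its left-eye pixel exactly, which is the basis for converting I_cam into I_left.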
Fig. 9 shows an example of obtaining the binocular view of the keyboard based on the captured image, according to an exemplary embodiment of the present invention. As shown in Fig. 9, the keyboard image is detected from the captured image; the position and orientation of the keyboard in three-dimensional space are determined from the detected single-viewpoint keyboard image; the keyboard image of another viewpoint is then determined from that position and orientation; and viewpoint correction is performed on the detected single-viewpoint keyboard image and the determined image of the other viewpoint, based on the positional relationship between the capture device and the user's eyes, so as to obtain the binocular view of the keyboard and hence the virtual field-of-view image superimposed with that binocular view.
As another example, if the capture device used in step S102 is a depth camera or comprises at least two monocular cameras, viewpoint correction can be performed on the detected keyboard image, based on the positional relationship between the capture device and the user's eyes, to obtain the binocular view of the keyboard.
When the capture device comprises at least two monocular cameras, a 3D image of the keyboard and the position of the keyboard relative to the capture device can be obtained through the cameras; the 3D image of the keyboard is then projected into the binocular view according to that position and the positional relationship between the capture device and the user's eyes.
When the capture device is a depth camera, the 3D image of the keyboard and the position of the keyboard relative to the depth camera can be obtained through the depth camera; the 3D image of the keyboard is then projected into the binocular view according to that position and the positional relationship between the depth camera and the user's eyes.
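For the depth-camera case, projecting the keyboard's 3D image into an eye view amounts to back-projecting each pixel with its depth and re-projecting it under the eye's extrinsics. The following minimal numpy sketch is illustrative; the intrinsics and the eye offset are assumed values, not taken from the patent.

```python
import numpy as np

# Illustrative intrinsics of the depth camera.
K = np.array([[600.0, 0, 320],
              [0, 600.0, 240],
              [0, 0, 1]])
K_inv = np.linalg.inv(K)

def backproject(u, v, depth):
    """Pixel (u, v) with metric depth -> 3D point in the depth-camera frame."""
    return depth * (K_inv @ np.array([u, v, 1.0]))

def reproject(X, R_eye, t_eye):
    """3D point in the camera frame -> pixel in one eye's view,
    given the eye's rotation and translation relative to the camera."""
    p = K @ (R_eye @ X + t_eye)
    return p[:2] / p[2]
```

A real implementation would run this over the whole keyboard region of the depth map, filling each eye's view; here a single pixel suffices to show the geometry.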
In step S104, the virtual field-of-view image superimposed with the binocular view of the keyboard is presented to the user. As an example, a virtual scene image reflecting the virtual scene may first be obtained, and the virtual field-of-view image may be generated by superimposing the binocular view of the keyboard on the virtual scene image. Figure 10 shows an example of generating the virtual field-of-view image superimposed with the binocular view of the keyboard, according to an exemplary embodiment of the present invention. As shown in (a) of Figure 10, the image around the user is captured by the capture device; as shown in (b) of Figure 10, viewpoint correction is performed on the keyboard image detected from the captured image to obtain the binocular view of the keyboard; as shown in (c) of Figure 10, the virtual scene image reflecting the virtual scene is obtained; and as shown in (d) of Figure 10, the binocular view of the keyboard and the virtual scene image are superimposed to generate the virtual field-of-view image, which is output to the user.
In step S105, it is determined whether the keyboard needs to continue being presented to the user.
As an example, when it is detected that keyboard use has ended, it may be determined that the keyboard no longer needs to be presented. Whether the user has finished using the keyboard may be judged in the following ways.
As an example, when it is detected that the user has not typed on the keyboard for a predetermined period of time, it may be determined that the user has finished using the keyboard. It should be understood that the user's keyboard input can be monitored continuously: a brief pause is not treated as the end of keyboard use, and only when the interruption of keyboard input exceeds the predetermined period is the user determined to have finished. The predetermined period may be set automatically by the head-mounted display or customized by the user; for example, it may be set to 5 minutes.
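The pause-versus-timeout logic above can be sketched as follows. This is illustrative Python, not from the patent; the class name is an assumption, and the 5-minute default is the value given in the example.

```python
import time

class KeyboardPresenter:
    """Decide whether to keep showing the keyboard, as in step S105.

    Brief pauses in typing are ignored; only an idle gap longer than
    `timeout_s` (set by the HMD or the user) ends keyboard presentation.
    """
    def __init__(self, timeout_s=300.0):
        self.timeout_s = timeout_s
        self.last_input = time.monotonic()

    def on_keystroke(self, now=None):
        # Record the time of the latest keyboard input.
        self.last_input = now if now is not None else time.monotonic()

    def should_keep_showing(self, now=None):
        # Keep showing the keyboard until the idle gap exceeds the timeout.
        now = now if now is not None else time.monotonic()
        return (now - self.last_input) <= self.timeout_s
```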
As another example, since the user's hands do not move far from the keyboard while typing, it may be determined that the user has finished using the keyboard when the detected distance between hand and keyboard exceeds a predetermined use-distance threshold. For example, when the distance between both of the user's hands and the keyboard exceeds a first use-distance threshold, it may be determined that keyboard use has ended. In some cases, one hand moves far from the keyboard while the other remains on it, yet the keyboard is no longer needed; therefore, when the distance between one of the user's hands and the keyboard exceeds a second use-distance threshold, it may likewise be determined that the user has finished using the keyboard. The first and second use-distance thresholds may be identical or different, and may be set automatically by the head-mounted display or customized by the user. Likewise, whether the distance is measured between both hands and the keyboard or between a single hand and the keyboard may be set automatically by the head-mounted display or customized by the user.
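A minimal sketch of the two-threshold test described above. The threshold values and function name are illustrative assumptions, since the patent leaves the thresholds to the head-mounted display or the user.

```python
def keyboard_use_ended(dist_left, dist_right,
                       both_thresh=0.4, single_thresh=0.6):
    """Return True when hand-to-keyboard distances (metres, illustrative)
    indicate that keyboard use has ended.

    Ends when both hands exceed the first threshold, or when either
    single hand exceeds the (here larger) second threshold, even if
    the other hand is still on the keyboard.
    """
    if dist_left > both_thresh and dist_right > both_thresh:
        return True
    return max(dist_left, dist_right) > single_thresh
```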
As another example, when a user input ending the presentation of the keyboard is detected, it may be determined that the user has finished using the keyboard. For example, the user may press a specific button, or use a similar input, to notify the head-mounted display to stop displaying the keyboard.
As another example, when the application currently running on the head-mounted display no longer needs the keyboard, it may be determined that the user has finished using the keyboard. For example, it may be detected that the application interface shown in the virtual field-of-view image no longer contains a control requiring keyboard input (for example, the text-input dialog box shown in the virtual field-of-view image disappears), or that the application requiring the keyboard has been closed.
It should be understood that, if it is detected that the user switches to another application while typing, whether the keyboard still needs to be presented to the user may be re-determined according to the specifics of the application switched to.
When it is determined that the keyboard no longer needs to be presented to the user, step S106 is executed. In step S106, a virtual field-of-view image without the superimposed binocular view of the keyboard is presented to the user; for example, only the virtual scene image is presented.
It should be understood that the specific embodiments described above with the keyboard as the example apply equally to a handle (for example, the game handle used when playing a virtual game with the head-mounted display). The running state of the application on the head-mounted display can be monitored; when the running application currently needs the handle, it can be detected whether the user is holding it. If it is detected that the user is already holding the handle, only the virtual scene image is presented; if it is detected that the user is not holding the handle, the image around the user can be captured by the capture device and the handle detected from the captured image.
Whether the user is holding the handle can be detected in the following ways. As an example, since the ambient temperature is generally lower than body temperature, and the humidity of a human hand is generally higher than ambient humidity, whether the user is holding the handle can be determined by detecting the temperature and/or humidity around the handle. A temperature sensor and/or humidity sensor may be provided in the handle to measure its surroundings; the measured temperature and/or humidity are then compared against a preset temperature threshold and/or preset humidity threshold to determine whether the handle is in the user's hand.
As another example, whether the user is holding the handle can be determined by detecting its motion. A motion sensor (for example, a gyroscope or an inertial accelerometer) may be provided in the handle, and the intensity and duration of the motion analyzed to determine whether the handle is in the user's hand.
As another example, since the human body contains moisture and is electrically conductive, whether the user is holding the handle can be determined by detecting current and/or inductance. Conductive material may be applied to the surface of the handle; with electrodes mounted on the surface, the resistance can be estimated from the magnitude of the current between the electrodes to determine whether the handle is in the user's hand. Alternatively, whether an electrode is in contact with the human body can be judged by measuring the inductance of that single electrode.
When the handle is not detected in the captured image, the user can be prompted that there is no handle nearby. In this case, the user may need to stand up to look for the handle, so the user can further be asked whether to stand up and search. If the user decides to stand up and search for the handle, the binocular view of the surrounding real objects can be presented so that the user directly sees the surrounding environment; if the user decides not to, the application can be switched into a mode of operation that requires no physical handle. When the user finds the handle and holds it, only the virtual scene image is presented; likewise, when the user fails to find the handle and gives up searching, only the virtual scene image is presented.
When the handle is detected in the captured image, it can be determined whether the handle lies within the user's true field of view (that is, the field of view the user would currently have without wearing the head-mounted display). If the handle is within the user's true field of view, the virtual field-of-view image superimposed with the binocular view of the handle can be presented; if it is not, the user can be prompted that there is no handle within the current field of view, and further prompted with the direction in which to turn so that the handle enters the true field of view. The prompt may be given by image, text, sound, video, and so on. The handle can be located by the same methods as the keyboard, for example by wireless signal detection.
As an example, a prompt box can be displayed in the virtual field-of-view image, stating that the handle is not within the field of view, so that the user adjusts the viewing angle to search for it; the prompt box can further indicate, according to the relative position of the handle and the user, how to adjust the viewing angle, helping the user find the handle quickly. As another example, a sound prompt can inform the user that the handle is not within the field of view, and can further instruct the user, according to the relative position of the handle and the user, how to adjust the viewing angle. As another example, an arrow indicator can be displayed in the virtual field-of-view image, the direction of the arrow indicating the bearing of the handle; in addition, while the arrow indicator is displayed, text in the virtual field-of-view image or a sound prompt can give the angle through which the user should turn and/or the distance between the handle and the user.
Through the above methods, a user wearing the head-mounted display can conveniently find an interactive device and use it to perform input operations.
Hereinafter, the application scenario of eating while wearing the head-mounted display is described with reference to Figures 11 to 19. Figure 11 shows an example of presenting to the user a virtual field-of-view image superimposed with the binocular view of food, according to an exemplary embodiment of the present invention. As shown in Figure 11, the virtual field-of-view image superimposed with the binocular view of the food is presented to the user, so that the user can eat while wearing the head-mounted display. It should be understood that, although food is taken as the example, the following application scenario applies equally to other similar objects.
Figure 12 shows a flowchart of a method of displaying food in a head-mounted display according to an exemplary embodiment of the present invention. All of the steps of the method may be executed by the head-mounted display; alternatively, some steps may be executed by the head-mounted display and the remaining steps by a processor outside the head-mounted display.
As shown in Figure 12, in step S201 it is determined whether the user needs to eat.
Whether the user needs to eat may be determined in the following ways. As an example, when the operation of a predetermined key is detected, it may be determined that the user needs to eat. Here, the operated key may be a hardware key on the head-mounted display, or a key on the display screen of the head-mounted display; when the user is detected pressing the predetermined key in a predetermined manner, it may be determined that food and/or drink needs to be presented to the user. The operated key may also be a virtual key, that is, a virtual interface displayed in the virtual field-of-view image, with the judgment made by detecting the user's interaction with that virtual interface. The predetermined manner may be at least one of the following: a short press, a long press, a short press a predetermined number of times, alternating short and long presses, and so on. Figure 13 shows examples of the operated key according to an exemplary embodiment of the present invention. As shown in Figure 13, the operated key may be a hardware key on the head-mounted display, a key on the display screen of the head-mounted display, or a virtual key in a virtual interface.
As another example, when a predetermined gesture of the user is detected, it may be determined that the user needs to eat. The predetermined gesture may be completed with one hand or with both hands, and may be at least one of the following: waving, drawing a circle with the hand, drawing a square, drawing a triangle, a framing gesture, and so on. Figure 14 shows an example of the framing gesture according to an exemplary embodiment of the present invention; as shown in Figure 14, the framing range circled by the framing gesture can determine the objects within that range that need to be presented to the user. Existing gesture-detection devices can be used to detect and identify the specific content indicated by a gesture.
As another example, when an object bearing a specific label is detected around the user, it may be determined that the labeled object needs to be presented to the user. Here, all objects that need to be presented to the user may bear the same specific label; alternatively, objects of different categories may bear labels of different categories so that the categories can be distinguished. For example, a category-1 label may be attached to a desk to identify the desk, a category-2 label to a chair to identify the chair, and a category-3 label to tableware to identify the tableware; when an object bearing a category-3 label is detected around the user, it may be determined that the user needs to eat. The specific labels can be detected and identified by various methods.
As another example, when the arrival of a preset meal time is detected, it may be determined that the user needs to eat. The head-mounted display may preset meal times automatically; for example, breakfast starts at 7:30, lunch at 12:00, and dinner at 6:00. Since meal times differ between users, the user may also set meal times according to his or her own habits; for example, breakfast at 8:00, lunch at 12:30, and dinner at 6:00. When a meal time preset automatically by the head-mounted display and a meal time preset by the user coexist, priority may be applied: for example, if the user-preset meal time has higher priority than the automatically preset one, it may be determined that the user needs to eat only when the user-preset meal time arrives. Alternatively, both the automatically preset meal time and the user-preset meal time may be responded to.
As another example, the surrounding real objects can be identified and their types determined; when at least one of food, drink, and tableware is detected, it may be determined that the user needs to eat. For example, food, drink, tableware, and the like can be detected from the captured image of the user's surroundings by image recognition; they may also be identified by other methods.
As another example, when at least one of food, drink, and tableware is detected around the user during a preset meal time period, it may be determined that the user needs to eat. Here, the meal time periods may be preset; for example, a default breakfast period of 7:00-10:00, a lunch period of 11:00-14:00, and a dinner period of 17:00-20:00. The meal time periods may be set automatically by the head-mounted display (for example, as factory defaults) or set by the user. When a period preset automatically by the head-mounted display and a period preset by the user coexist, priority may be applied: for example, if the user-preset period has higher priority than the automatically preset one, only the arrival of the user-preset period triggers a response; alternatively, both may be responded to. During a preset meal time period, whether there is food, drink, or tableware around the user can be detected by image recognition or similar methods, so as to determine whether the user needs to eat.
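The meal-time-period check with user-over-automatic priority might look as follows. This is a sketch: the default periods are the ones given in the example, and the priority rule implemented here is one reading of the text, namely that user-preset periods, when present, replace the defaults.

```python
from datetime import time

# Factory-default meal time periods from the example.
DEFAULT_PERIODS = {
    "breakfast": (time(7, 0), time(10, 0)),
    "lunch":     (time(11, 0), time(14, 0)),
    "dinner":    (time(17, 0), time(20, 0)),
}

def in_meal_period(now, user_periods=None):
    """True when `now` falls in a meal time period.

    User-set periods take priority: when supplied, they replace the
    head-mounted display's automatic defaults.
    """
    periods = user_periods if user_periods else DEFAULT_PERIODS
    return any(start <= now <= end for start, end in periods.values())
```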
As another example, when a predetermined gesture or predetermined posture of the user is detected during a preset meal time period, it may be determined that the user needs to eat. Here, the meal time periods may be preset as above; for example, a default breakfast period of 7:00-10:00, a lunch period of 11:00-14:00, and a dinner period of 17:00-20:00, set automatically by the head-mounted display (for example, as factory defaults) or set by the user. When a period preset automatically by the head-mounted display and a period preset by the user coexist, priority may be applied: for example, if the user-preset period has higher priority, only the arrival of the user-preset period triggers a response; alternatively, both may be responded to. The predetermined gesture may be completed with one hand or with both hands, and may be at least one of the following: waving, drawing a circle with the hand, drawing a square, drawing a triangle, a framing gesture, and so on. The predetermined posture may be at least one of the following: turning the head, leaning the body to the left, leaning the body to the right, and so on. Existing gesture-detection or posture-detection devices can be used to detect and identify the specific gesture or posture.
As another example, when the input of a preset remote command is detected, it may be determined that the user needs to eat. In particular, whether the user needs to eat can be judged by detecting a remote command that the user inputs on another device. Here, the other device may include at least one of the following: a mobile communication terminal (for example, a smartphone), a personal tablet computer, a personal computer, an external keyboard, a wearable device, a handle, and so on. The wearable device may include at least one of a smart bracelet, a smartwatch, and the like. The other device may be connected to the head-mounted display by wire or wirelessly, where the wireless connection may include Bluetooth, ultra-wideband, ZigBee, wireless fidelity (Wi-Fi), a macro network, and so on. The remote command may also be an infrared command or the like. Figure 15 shows an example of determining, through a remote command input operation, that the user needs to eat, according to an exemplary embodiment of the present invention. As shown in Figure 15, it can be determined that the user needs to eat by inputting a remote command on a smartphone.
As another example, whether the user needs to eat is determined according to a detected sound-control operation. The user's voice or other sound signals can be collected by a microphone, and the user's voice or sound-control instruction identified by speech recognition, thereby determining whether the user needs to eat. For example, if the user issues the sound-control instruction "start eating", the head-mounted display receives the instruction and performs speech recognition on it, thereby determining that the instruction indicates that the user needs to eat. The head-mounted display may store in advance the correspondence between sound-control instructions and the indication that the user needs to eat, for example in the form of a correspondence table, in which instructions such as "start showing food" and "start showing the dining table", in Chinese, English, or other sounds, correspond to the indication that the user needs to eat. It should be understood that the sound-control instructions are not limited to the above examples; they may also be other instructions preset by the user, as long as both the user and the head-mounted display know that the instruction corresponds to determining that the user needs to eat.
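The stored correspondence table can be sketched as a simple lookup. This is illustrative only; the command phrases are English renderings of the examples, and any recognized phrase would first come from a speech-recognition front end not shown here.

```python
# Instructions from the example that map to the "user needs to eat" decision;
# the set is extensible with user-defined phrases.
FEED_COMMANDS = {
    "start eating",
    "start showing food",
    "start showing the dining table",
}

def voice_indicates_feeding(recognized_text):
    """Look up a speech-recognition result in the stored command table."""
    return recognized_text.strip().lower() in FEED_COMMANDS
```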
When it is determined in step S201 that the user needs to eat, step S202 is executed. In step S202, the real objects that need to be presented to the user are determined. Here, the real objects that need to be presented to the user may be food, drink, tableware, hands, a dining table, and so on.
The objects that need to be presented to the user may be determined in the following ways. As an example, the head-mounted display may store in advance images of various kinds of articles (for example, food); the image of a real object detected from the captured image can be matched against the pre-stored food images. A successful match shows that the real objects detected from the captured image include food, so the detected real object can be determined as an object that needs to be presented to the user. In some cases, the user may want as few real objects displayed as possible; in that case, once it is determined that the real objects detected from the captured image include food, the food can be separated from the other real objects, and only the food determined as the object to be presented, without presenting the other real objects to the user. Furthermore, since the relative position of the hand and the food is important for grasping correctly, the hand images in the captured image can also be detected by various algorithms; if a hand is detected, the hand too can be treated as an object that needs to be presented to the user.
As another example, labels indicating the respective categories can be attached to objects of different categories so that the objects can be identified, and the head-mounted display can recognize the labels. For example, a category-1 label may be attached to a desk to identify the desk, a category-2 label to a chair to identify the chair, and a category-3 label to tableware to identify the tableware. When a category-3 label is detected, it may be determined that the real objects around the user need to be presented. In some cases, the user may want as few real objects displayed as possible; in that case, once a category-3 label has been detected, the objects bearing the category-3 label can be separated from the other real objects, and only those objects determined as objects to be presented, without presenting the other real objects to the user. Furthermore, since the relative position of the hand and the food and/or drink is important for grasping correctly, the hand images in the captured image can also be detected by various algorithms; if a hand is detected, the hand too can be presented to the user as needed.
As another example, the objects that need or do not need to be presented to the user can be determined by detecting the user's predetermined gesture in a certain region. The predetermined gesture may be at least one of the following: waving, drawing a circle with the hand, drawing a square, drawing a triangle, a framing gesture, and so on. For example, if the gesture of drawing a circle in a certain region is detected, it may be determined that only the objects within the circled range need to be presented to the user; if the user's framing gesture is detected, it may be determined that only the objects framed by the gesture need to be presented; if the gesture of the user's hand drawing a triangle is detected, it may be determined that all the objects on the dining table need to be presented; if the gesture of the user's hand drawing a square is detected, it may be determined that only the food needs to be presented; and if the user's waving gesture over a real object is detected, it may be determined that that real object does not need to be presented.
As another example, the real objects that need or do not need to be presented to the user can be determined according to a detected voice instruction. For example, if the voice instruction "show the dining table and food" is detected, it may be determined that the dining table and all the food on it need to be presented to the user; if the voice instruction "only show food" is detected, it may be determined that only the food needs to be presented; and if the voice instruction "do not show the dining table" is detected, it may be determined that the dining table does not need to be presented.
As another example, the objects that need or do not need to be presented to the user can be determined by combining voice instructions with label recognition. For example, a category-1 label may be attached to a desk to identify the desk, a category-2 label to a chair to identify the chair, and a category-3 label to tableware to identify the tableware, with the head-mounted display able to recognize these labels. When the voice instruction "only show objects bearing the category-3 label" is detected, it may be determined that only food needs to be presented to the user; when the voice instruction "show objects bearing the category-3 and category-1 labels" is detected, it may be determined that the food and the dining table need to be presented; and when the voice instruction "do not show objects bearing the category-1 label" is detected, it may be determined that the dining table does not need to be presented.
As another example, the objects that need or do not need to be presented to the user can be determined by detecting a remote command from another device. The other device may include at least one of the following: a wearable device, a mobile device, and the like, where the wearable device may be at least one of a smart bracelet, a smartwatch, and the like. The remote command may include the names of the objects that need to be presented and/or the names of the objects that do not need to be presented.
As another example, a virtual cursor can be displayed in the virtual field-of-view image, and the objects that need or do not need to be presented to the user determined by detecting the user's operation of the virtual cursor. Figure 16 shows an example of determining the objects that need to be presented to the user through the virtual cursor, according to an exemplary embodiment of the present invention. As shown in Figure 16, the user can operate the virtual cursor to select certain articles in the virtual field-of-view image; after the selection operation of the virtual cursor is detected, the selected articles may be determined as objects that need to be presented to the user.
In step S203, the image around the user is captured by the photographing device, the image of the real object that needs to be presented to the user is detected from the captured image, and the binocular view of the real object is obtained based on the detected image of the real object.
Here, the photographing device may be a single monocular camera, a binocular camera, a depth camera, or the like. When the photographing device is a binocular camera or a depth camera, in detecting from the captured image the image of the real object that needs to be presented to the user, not only can the image features of that real object be used for detection, but its depth information can also be used to determine the image region where it is located. In addition, hand images can also be detected from the captured image, so as to provide the user with more complete visual feedback.
In step S204, the virtual field-of-view image on which the binocular view of the real object is superimposed is presented to the user.
Here, a virtual scene image reflecting the virtual scene can be obtained, and the virtual field-of-view image superimposed with the binocular view of the real object can be generated by superimposing the binocular view of the real object onto the virtual scene image.
Since the virtual objects in the virtual scene image and the real object may occlude each other in three-dimensional space, in order to reduce such mutual occlusion interference, the real object can be displayed in the following manners:
As an example, the binocular image of the real object can be displayed in a translucent manner. Whether to display the real object translucently can be decided according to the content type of the application interface displayed in the virtual field-of-view image and/or the state of interaction with the user. For example, when the user plays a virtual game through the head-mounted display and it is detected that the game character in the game interface moves frequently and a large amount of user interaction input is required, the real object can be displayed translucently. When it is detected that the application interface displayed in the virtual field-of-view image is a virtual theater, or that the frequency of the user's input operations has decreased, the translucent display of the real object can be ended. Similarly, the real object can be displayed as a contour line or a 3D mesh.
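A translucent overlay of this kind can be sketched as simple alpha blending of the real-object pixels onto the virtual scene image (a minimal illustration only, not the patented implementation; the array shapes and the alpha value are assumptions):

```python
import numpy as np

def blend_translucent(virtual_img, real_obj_img, mask, alpha=0.5):
    """Alpha-blend the real-object pixels (where mask is True) onto the
    virtual scene image; pixels outside the mask stay purely virtual."""
    out = virtual_img.astype(np.float32).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * real_obj_img.astype(np.float32)[mask]
    return out.astype(virtual_img.dtype)
```

Ending the translucent display then simply means compositing with `alpha=1.0` (fully opaque real object) or skipping the blend entirely.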
As another example, the virtual objects in the virtual scene image can be scaled and/or moved, so as to effectively avoid occlusion with the real object in three-dimensional space. For example, when the head-mounted display runs a virtual theater, the virtual screen displayed in the virtual field-of-view image can be shrunk and moved so as to avoid overlapping with the real object.
Figure 17 shows examples of displaying a real object according to an exemplary embodiment of the present invention. Part (a) of Figure 17 shows the real object displayed as a contour line, and part (b) of Figure 17 shows the virtual object in the virtual scene image being scaled and moved.
In addition, when the virtual objects in the virtual scene image and the real object occlude each other in three-dimensional space, the display priority can be judged. A display priority list can be preset, in which the display priority order of the virtual objects in the virtual image and of the real objects is ranked according to their importance and urgency. The display priority list can be set automatically by the head-mounted display, or configured by the user according to personal usage habits. Which display mode to use, and when to switch between different display modes, can be determined by looking up the priorities in the list.
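Such a lookup can be sketched as follows (a minimal illustration; the patent does not specify the list format, so the entry names, numeric priorities, and mode names are assumptions):

```python
# Hypothetical display-priority list: higher number = more important/urgent.
PRIORITY = {"incoming_call": 90, "tableware": 70, "virtual_screen": 50, "game_hud": 30}

def pick_display_mode(virtual_obj, real_obj, priority=PRIORITY):
    """Resolve an occlusion between a virtual and a real object by
    comparing their preset display priorities."""
    if priority.get(real_obj, 0) > priority.get(virtual_obj, 0):
        # Real object wins: show it opaquely and shrink/move the virtual object.
        return "show_real_opaque_scale_virtual"
    # Virtual object wins: show the real object translucently.
    return "show_real_translucent"
```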
In step S205, the real objects superimposed on the virtual field-of-view image are added and/or deleted according to the user's operation. When there are too many real objects around the user, superimposing all of them on the virtual field-of-view image may impair the user's viewing of the virtual scene image; therefore, the user may select which real objects are presented in the virtual field-of-view image, or which real objects are removed.
Figure 18 shows deleting a real object that has been superimposed on the virtual field-of-view image, according to an exemplary embodiment of the present invention. As shown in Figure 18, a real object displayed in the virtual field-of-view image can be removed by a waving gesture operation.
In step S206, it is determined whether the real object needs to continue to be presented to the user.
As an example, upon detecting that the user has finished eating, it can be determined that the real object need not continue to be presented to the user. Whether the user has finished eating can be detected in the following ways:
As an example, when a predetermined key operation is detected, it can be determined that the user has finished eating. The operated key can be a hardware key on the head-mounted display, or a key on the display screen of the head-mounted display; when it is detected that the user presses the predetermined key in a predetermined manner, it can be determined that the user has finished eating. The operated key can also be a virtual key, that is, a virtual interface superimposed on the virtual field-of-view image, and the judgment is made by detecting the user's interaction with this virtual interface. The predetermined manner can be at least one of the following: a short press, a long press, a short press a predetermined number of times, alternating short and long presses, and the like.
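Classifying key events into such press patterns can be sketched from press/release timestamps (a minimal illustration; the 0.5-second long-press cutoff is an assumption, not a value from the patent):

```python
LONG_PRESS_SEC = 0.5  # assumed threshold separating short and long presses

def classify_presses(events):
    """events: list of (press_time, release_time) pairs in seconds.
    Returns a pattern string such as 'short', 'long', or 'short,short',
    which can then be matched against the configured predetermined manner."""
    kinds = ["long" if (up - down) >= LONG_PRESS_SEC else "short"
             for down, up in events]
    return ",".join(kinds)
```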
As another example, when a predetermined gesture of the user is detected, it can be determined that the user has finished eating. The predetermined gesture can be a one-handed gesture or a two-handed gesture, and its specific content can be at least one of the following: waving, drawing a circle with the hand, drawing a square with the hand, drawing a triangle with the hand, a framing gesture, and the like. Existing gesture-detection devices can be used to detect and identify the specific content indicated by the gesture.
As another example, whether the user has finished eating can be determined by detecting and identifying tags on objects. For example, tags indicating the respective categories can be attached to objects of different categories so that those objects can be identified, and the head-mounted display can recognize the tags. For example, a class-1 tag can be attached to a table to identify the table, a class-2 tag can be attached to a chair to identify the chair, and a class-3 tag can be attached to tableware to identify the tableware. When the class-3 tag can no longer be detected, it can be determined that the user has finished eating.
As another example, whether the user has finished eating can be determined by detecting a remote-control command input by the user on another device. Here, the other device can be at least one of the following: a mobile communication terminal, a tablet computer, a personal computer, an external keyboard, a wearable device, a handle, and the like.
As another example, whether the user has finished eating can be determined by recognizing a voice command or a specific audio signal of the user.
When it is determined in step S206 that the real object need not continue to be presented to the user, step S207 is executed. In step S207, the virtual field-of-view image without the superimposed binocular view of the real object is presented to the user; that is, presenting the binocular view of the real object to the user is ended, and only the virtual scene image is presented to the user.
It should be understood that the specific embodiments described above for the eating scenario are equally applicable to the drinking scenario. When the user drinks water, the drinking action consists of several small, continuously linked movements: grasping the cup, moving the cup to the mouth, lowering the head to drink, and putting the cup back. Compared with the general scenario, drinking requires a wider field of view: the position of the cup relative to other surrounding objects, for example the table, needs to be displayed. Therefore, the binocular view of the real object can be added to the virtual field-of-view image of the head-mounted display according to the following display modes: displaying the binocular view of the real object on the virtual scene image in picture-in-picture form (that is, displaying a reduced binocular view of the real object at some position of the virtual scene image); displaying only the binocular view of the real object in the virtual field-of-view image without displaying the virtual scene image (that is, only the binocular view of the real object is displayed, as if the user were looking around at the real scene through glasses); displaying the virtual scene image in picture-in-picture form on the binocular view of the real object (that is, displaying a reduced virtual scene image at some position of the binocular view of the real object); or spatially merging the binocular view of the real object with the virtual scene image for display (for example, displaying the binocular view of the real object translucently on the virtual scene image). Figure 19 shows examples of displaying the binocular view of the real object according to an exemplary embodiment of the present invention. Part (a) of Figure 19 shows the binocular view of the real object displayed on the virtual scene image in picture-in-picture form; part (b) shows only the binocular view of the real object displayed, without the virtual scene image; part (c) shows the virtual scene image displayed in picture-in-picture form on the binocular view of the real object; and part (d) shows the binocular view of the real object displayed translucently on the virtual scene image.
Hereinafter, the application scenario of avoiding collisions with real objects while wearing the head-mounted display will be described with reference to Figure 20. To avoid a collision caused by a body part of the user approaching, or moving toward, a real object while the head-mounted display is in use (for example, when the user is playing a virtual motion-sensing game such as boxing or golf on the head-mounted display, the user may collide with surrounding objects or with objects approaching the user), a virtual field-of-view image superimposed with the binocular view of the object near the user's body can be presented to the user as a prompt.
Figure 20 shows a flowchart of a method for displaying, in the head-mounted display, an object with which a collision may occur, according to an exemplary embodiment of the present invention. Here, all of the steps of the method can be executed by the head-mounted display, or some of the steps can be executed by the head-mounted display while the remaining steps are executed by a processor outside the head-mounted display.
As shown in Figure 20, in step S301, it is determined whether surrounding real objects need to be presented to the user, that is, whether there are objects around the user with which a collision may occur. Whether such objects exist can be detected in the following ways.
As an example, the 3D scene information around the user, and the position and movement of the user, can be obtained by a photographing device on the head-mounted display (for example, a wide-angle camera, a depth camera, or the like) and/or by other photographing devices and/or sensors independent of the head-mounted display. When it is detected that an object around the user is too close (for example, its distance is less than a danger-distance threshold), it can be determined that the surrounding object needs to be presented to the user.
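This proximity test can be sketched as a nearest-point check against the danger-distance threshold (a minimal illustration; the object coordinates and the 0.5 m threshold are assumptions):

```python
import math

def nearest_object(user_pos, objects, danger_dist=0.5):
    """Return the (distance, name) pair of the object closest to the user
    if it falls inside the danger-distance threshold, else None."""
    best = min(((math.dist(user_pos, pos), name) for name, pos in objects.items()),
               default=None)
    if best is not None and best[0] < danger_dist:
        return best
    return None
```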
As another example, the position of the user and the movement trends of each part of the user's body can be determined by the photographing devices and sensors, and whether the user may touch a surrounding real object can be determined from the user's position and movement trends. When it is determined that the user may touch a surrounding real object, it can be determined that the object that may be touched needs to be presented to the user; when it is determined that the user is unlikely to touch the surrounding real objects, it can be determined that they need not be presented to the user.
As another example, the position of the user, the movement trends of each part of the user's body, and the positions and movement trends of the surrounding objects can be determined by the photographing devices and sensors, and whether the user may touch a surrounding real object can be determined from all of these together. When it is determined that the user may touch a surrounding real object, it can be determined that the object that may be touched needs to be presented to the user; when it is determined that the user is unlikely to touch the surrounding real objects, it can be determined that they need not be presented to the user.
In step S302, the image around the user is captured by the photographing device, and the image of the object with which a collision may occur is detected from the captured image. The shape of that object can be specifically identified; for example, information such as its contour line, 3D mesh, and 3D model can be recognized.
In step S303, viewpoint correction is performed on the detected image of the object with which a collision may occur, based on the positional relationship between the photographing device and the user's eyes, so as to obtain the binocular view of that object.
In step S304, the virtual field-of-view image superimposed with the binocular view of the object with which a collision may occur is presented to the user.
As an example, the object with which a collision may occur can be displayed as at least one of the following: translucent, a contour line, or a 3D mesh. Displaying the object as a contour line means displaying only a contour at the edge of the object, so as to reduce the impact on the virtual scene image.
In addition, the presence of an object around the user with which a collision may occur can also be prompted by text, a marker image, audio, video, and the like; for example, prompt information can be displayed in the virtual field-of-view image (for example, in the form of text and/or graphics) to prompt the user of the distance to the object with which a collision may occur.
In step S305, it is determined whether the surrounding real objects need to continue to be presented to the user.
As an example, when it is detected that the user has moved away from the displayed object with which a collision may occur, it can be determined that the object need not be shown to the user any longer; when it is detected that the object with which a collision may occur has moved away from the user, it can likewise be determined that the object need not be shown any longer; and when an instruction of the user to cancel the display of the real object is received, it can be determined that the real object need not be presented to the user, where the instruction can be at least one of the following: a voice command, a key command, a virtual-cursor command, a command from a wearable device, and the like.
When it is determined in step S305 that the surrounding real objects need not continue to be presented to the user, step S306 is executed. In step S306, the virtual field-of-view image without the superimposed binocular view of the real object is presented to the user; that is, presenting the binocular view of the real object to the user is ended, and only the virtual scene image is presented to the user.
Multiple danger-distance thresholds can also be set, and different display modes can be used for different danger distances and danger levels, so that the user can be prompted in a manner suited to the detected dangerous situation.
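Such a tiered mapping from measured distance to display mode can be sketched as follows (a minimal illustration; the threshold values and mode names are assumptions, not values from the patent):

```python
# Hypothetical danger tiers: (max_distance_in_meters, display_mode),
# ordered from most to least urgent.
DANGER_TIERS = [(0.3, "opaque_with_audio_alert"),
                (0.8, "contour_line"),
                (1.5, "translucent")]

def display_mode_for(distance):
    """Pick the display mode of the first tier whose threshold the measured
    distance falls under; beyond all tiers, display nothing."""
    for max_dist, mode in DANGER_TIERS:
        if distance < max_dist:
            return mode
    return None
```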
Hereinafter, the application scenario of displaying display items about Internet-of-Things devices in the head-mounted display will be described with reference to Figures 21 to 25, whereby the user can still learn the working states and other relevant information of the surrounding Internet-of-Things devices while using the head-mounted display.
Figure 21 shows a flowchart of a method for displaying a display item about an Internet-of-Things device in the head-mounted display according to an exemplary embodiment of the present invention. Here, all of the steps of the method can be executed by the head-mounted display, or some of the steps can be executed by the head-mounted display while the remaining steps are executed by a processor outside the head-mounted display.
As shown in Figure 21, in step S401, the display item about the Internet-of-Things device is obtained.
Here, the display item expresses at least one of the following about the Internet-of-Things device: an operation interface, a working state, notification information, and indication information.
As an example, the user's true field of view can be monitored in real time, and when an Internet-of-Things device is detected appearing in the user's true field of view, its corresponding display item can be obtained according to the type of the device. For example, the user's true field of view can be monitored in real time from information measured by the Inertial Measurement Unit (IMU) of the head-mounted display together with the facility map of the room in which the user is located. It can also be obtained by analyzing the field of view of the photographing device mounted on the head-mounted display.
For example, when it is detected that a monitoring camera is located in the user's true field of view, the picture monitored by the camera can be obtained. When it is detected that an air conditioner is located in the user's true field of view, parameters such as its temperature and humidity can be obtained by communication. When it is detected that a clock is located in the user's true field of view, the time displayed by the clock can be obtained from the captured image. When it is detected that a cooking appliance (for example, an oven) is located in the user's true field of view, working states such as its temperature can be obtained. When it is detected that a mobile communication terminal (for example, a smartphone) is located in the user's true field of view, the operation interface of the mobile communication terminal can be obtained.
In addition, the display item about an Internet-of-Things device can also be received from a device located outside the user's true field of view. For example, when a guest arrives at the gate, the smart doorbell at the gate can send a notification to the head-mounted display, and the head-mounted display can receive the image of the doorway. For another example, the head-mounted display can communicate with the user's mobile communication terminal, and when the mobile communication terminal receives a message requiring a response from the user, the operation interface of the mobile communication terminal can be sent to the head-mounted display.
In step S402, the obtained display item is added to the virtual field-of-view image. Here, the display item of the Internet-of-Things device can be further superimposed on the virtual field-of-view image that is already superimposed with the binocular view of the real object. It should be understood, however, that the present invention is not limited to this: the virtual field-of-view image may contain no binocular view of any real object, or even no virtual scene image at all, and display only the display item of the Internet-of-Things device.
In step S403, the virtual field-of-view image superimposed with the display item is presented to the user. Here, the display item of the Internet-of-Things device can be presented in any appropriate layout, so that the user can interact well with the Internet-of-Things device; preferably, the interaction among the user, the virtual scene, and the real object can also be combined.
Figure 22 shows an example of the virtual field-of-view image, superimposed with display items, that is presented to the user according to an exemplary embodiment of the present invention. As shown in Figure 22, the temperature and humidity of the air conditioner, the time displayed by the clock, the working state of the oven, the operation interface of the mobile communication terminal, and the image captured by the access-control device can be displayed in the virtual field-of-view image of the head-mounted display; in addition, an arrow-indication image may also be displayed to prompt the user of the direction, relative to the user, of the oven that has finished cooking.
In step S404, corresponding processing is executed to remotely control the Internet-of-Things device according to the user's operation on the display item.
For example, if the Internet-of-Things device is a mobile communication terminal, the head-mounted display can communicate with the mobile communication terminal; when the mobile communication terminal receives a message requiring a response from the user, the operation interface of the mobile communication terminal can be displayed in the virtual field-of-view image, and the user can perform corresponding operations through the head-mounted display, for example, remotely controlling the mobile communication terminal to make or answer a call. In addition, when the user has completed the remote control and/or viewing of the mobile communication terminal, the display of the display item about the mobile communication terminal in the virtual field-of-view image can be ended by detecting the user's input; the user's input modes can refer to the input modes described above and are not repeated here. Figure 23 shows an example of the virtual field-of-view image, superimposed with the operation interface of the mobile communication terminal, that is presented to the user according to an exemplary embodiment of the present invention. As shown in Figure 23, the operation interface of the mobile communication terminal can be displayed in the virtual field-of-view image of the head-mounted display, so that the user can promptly learn of the messages received by the mobile communication terminal and/or remotely control it.
For example, while the user is playing a virtual game with the head-mounted display, the mobile communication terminal receives an incoming call while outside the user's true field of view. At this point, the head-mounted display can receive the incoming-call information sent by the mobile communication terminal and display it in the virtual field-of-view image, so that the user does not need to stop the game and will not miss an important call. If the user decides to answer the call, it can be answered directly through the head-mounted display (for example, with the head-mounted display acting as a Bluetooth headset); in addition, indication information can also be shown to the user in the virtual field-of-view image (for example, by an arrow indicator, text, or the like) to indicate the direction of the mobile communication terminal relative to the user. If the user decides not to answer, the call can be hung up directly through the head-mounted display, or the mobile communication terminal can be remotely controlled to hang up; alternatively, the user may simply do nothing. If the user wishes to call back later, a call-back task can be set on the head-mounted display, or the mobile communication terminal can be remotely controlled to set a call-back reminder.
Figure 24 shows an example of the virtual field-of-view image, superimposed with the incoming-call information of the mobile communication terminal, that is presented to the user according to an exemplary embodiment of the present invention. As shown in Figure 24, when the mobile communication terminal receives an incoming call, the incoming-call information can be displayed in the virtual field-of-view image of the head-mounted display so that the user promptly learns of it; moreover, an arrow-indication image may also be displayed to prompt the user of the direction of the mobile communication terminal relative to the user. In addition, the user may also set a call-back task on the head-mounted display, or remotely control the mobile communication terminal to set a call-back reminder.
For example, when the mobile communication terminal receives a short message, the head-mounted display can receive the short message sent by the mobile communication terminal and display it in the virtual field-of-view image. If the user wishes to reply, the message can be edited on the head-mounted display and the mobile communication terminal can be remotely controlled to send the reply; in addition, indication information can also be shown to the user (for example, by an arrow indicator, text, or the like) to indicate the direction of the mobile communication terminal relative to the user. If the user wishes to reply later, a reply task can be set on the head-mounted display, or the mobile communication terminal can be remotely controlled to set a reply reminder. If the user wishes to talk to the sender by telephone, a call can be dialed through the head-mounted display according to the user's operation (for example, with the head-mounted display acting as a Bluetooth headset); again, indication information can also be shown to the user (for example, by an arrow indicator, text, or the like) to indicate the direction of the mobile communication terminal relative to the user.
Figure 25 shows an example of the virtual field-of-view image, superimposed with a short message received by the mobile communication terminal, that is presented to the user according to an exemplary embodiment of the present invention. As shown in Figure 25, when the mobile communication terminal receives a short message, the head-mounted display can display it in the virtual field-of-view image so that the user promptly learns of it; moreover, an arrow-indication image may also be displayed to prompt the user of the direction of the mobile communication terminal relative to the user. In addition, the user may also call the sender directly through the head-mounted display.
In addition, the user can set which Internet-of-Things devices need to have their display items shown. A list-selection mode can be used, listing the IDs of all Internet-of-Things devices, and the user can select or deselect each device. Detailed settings can also be made for each Internet-of-Things device: the message types each device can emit can be listed, and for each message type it can be set whether it needs to be displayed, and in what manner.
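A minimal sketch of such per-device, per-message-type settings (the device IDs, message types, and display modes here are illustrative assumptions):

```python
# Hypothetical settings: device ID -> message type -> display mode (None = hidden).
IOT_SETTINGS = {
    "doorbell_01": {"visitor": "full_message", "low_battery": None},
    "phone_01": {"incoming_call": "flashing_dot", "short_message": "full_message"},
}

def should_display(device_id, msg_type, settings=IOT_SETTINGS):
    """Return the configured display mode for this device/message pair,
    or None if the device is deselected or the message type is hidden."""
    return settings.get(device_id, {}).get(msg_type)
```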
In addition, multiple levels can be set for whether the application running on the head-mounted display may be disturbed, ranging from one in which any message can be received (for example, a virtual-theater application) to one in which no disturbance is desired (for example, a fiercely contested real-time online virtual game). In a high-level application, a prompting mode with as little impact as possible (for example, a flashing bright dot) can be adopted; in a low-level application, the full content of the message can be displayed.
Hereinafter, a head-mounted display that displays real objects according to an exemplary embodiment of the present invention will be described with reference to Figures 26 to 30. The devices included in the head-mounted display can be implemented in combination with special-purpose components (for example, sensor components). As an example, the devices can be implemented by general-purpose hardware such as a digital signal processor or a field-programmable gate array, or by dedicated hardware such as a special-purpose chip, or entirely in software by a computer program, for example, implemented as a module in an application installed on the head-mounted display for displaying real objects, or as a function program implemented in the operating system of the head-mounted display. In addition, alternatively, some or all of the devices can be integrated in the head-mounted display that displays real objects; the present invention imposes no restriction on this.
Figure 26 shows a block diagram of a head-mounted display that displays real objects according to an exemplary embodiment of the present invention. As shown in Figure 26, the head-mounted display includes a real-object view acquisition device 10 and a display device 20.
Specifically, the real-object view acquisition device 10 is used to obtain the binocular view of a real object located around the user.
Here, the binocular view is a binocular view for the eyes of the user wearing the head-mounted display. Through the binocular view of the real object, the user's brain can obtain the depth information of the real object and thereby perceive its actual three-dimensional spatial position and stereoscopic posture; that is, the three-dimensional spatial position and stereoscopic posture of the real object that the user perceives through its binocular view are consistent with those the user would perceive by observing the real object directly with the eyes.
As an example, the real object can be a presentation object preset according to object properties or usage scenarios, and can include at least one of the following: an object approaching the user's body, a tagged object, an object specified by the user, an object required by the application running on the head-mounted display, an object currently needing to be used, and an operational control.
The display device 20 is used to present to the user the virtual field-of-view image superimposed with the binocular view of the real object. Through the above head-mounted display, the user can watch the binocular view of the real object in the virtual field-of-view image (that is, augmented virtuality), perceive the actual three-dimensional spatial position and stereoscopic posture of the real object, accurately judge the positional relationship between the real object and himself or herself as well as the stereoscopic posture of the real object, and complete the necessary actions that require visual feedback.
Here, it should be understood that the virtual field-of-view image superimposed with the binocular view of the real object can be presented to the user through a display device integrated on the head-mounted display, or through another display device external to the head-mounted display; the present invention is not limited in this regard.
Figure 27 shows a block diagram of a head-mounted display for displaying a real object according to another exemplary embodiment of the present invention. As shown in Figure 27, in addition to the real-object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include an image capture device 30 and a binocular view generation device 40.
Specifically, the image capture device 30 is configured to capture, through a filming device, an image that includes the real objects located around the user. The binocular view generation device 40 is configured to obtain, from the captured image, the binocular view of a real object located around the user.
As an example, the image capture device 30 may capture the image including the surrounding real objects through a single filming device, and the binocular view generation device 40 may obtain the binocular view of the surrounding real object from that captured image.
Here, the single filming device may be an ordinary filming device with only one viewpoint. Since the image captured by the image capture device 30 then carries no depth information, the binocular view generation device 40 may detect a real-object image in the captured image, determine a real-object image of another viewpoint based on the detected real-object image, and obtain the binocular view of the real object based on the detected real-object image and the real-object image of the other viewpoint. Here, the real-object image is the image of the real-object region within the captured image. For example, the binocular view generation device 40 may use existing image recognition methods to detect the real-object image in the captured image.
As an example, the binocular view generation device 40 may perform viewpoint conversion on the detected real-object image and on the real-object image of the other viewpoint, based on the positional relationship between the single filming device and the user's two eyes, to obtain the binocular view of the real object.
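As a rough illustration of the monocular case above, the second-viewpoint image can be approximated by shifting the detected real-object region horizontally by the binocular disparity. The sketch below assumes a pinhole model with a hypothetical focal length and interpupillary baseline; a real implementation would instead use the measured positional relationship between the filming device and the user's two eyes.

```python
def synthesize_second_viewpoint(bbox, depth_m, baseline_m=0.063, focal_px=1000.0):
    """Approximate the other eye's view of a detected real-object region by a
    horizontal disparity shift.

    bbox       -- (x, y, w, h) of the real-object region in the captured image
    depth_m    -- assumed distance of the object from the viewer, in metres
    baseline_m -- interpupillary distance (the 0.063 m default is an assumption)
    focal_px   -- camera focal length in pixels (an assumption)
    """
    x, y, w, h = bbox
    # Standard pinhole relation: disparity = focal length * baseline / depth.
    disparity_px = focal_px * baseline_m / depth_m
    return (x - disparity_px, y, w, h)

# A detected object 0.5 m away shifts by 126 px between the two eye views.
left_view = (400, 300, 120, 80)
right_view = synthesize_second_viewpoint(left_view, depth_m=0.5)
```

This is only a fronto-parallel approximation; closer objects shift more, which is what gives the fused binocular view its depth cue.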
As another example, the binocular view generation device 40 may obtain the binocular view of the real object based on a captured stereo image. Specifically, the image capture device 30 may capture the image including the surrounding real objects through a filming device, and the binocular view generation device 40 may detect the real-object image in the captured image and obtain the binocular view of the real object based on the detected real-object image, where the filming device includes a depth camera, or includes at least two single-viewpoint cameras. Here, the at least two single-viewpoint cameras may have overlapping viewing angles, so that a stereo image carrying depth information can be captured by the depth camera or by the at least two single-viewpoint cameras.
As an example, the binocular view generation device 40 may perform viewpoint conversion on the detected real-object image, based on the positional relationship between the filming device and the user's two eyes, to obtain the binocular view of the real object.
It should be understood that, as examples, the above single filming device, depth camera or single-viewpoint camera may be a built-in camera of the head-mounted display, or an attached filming device mounted on the head-mounted display, for example a camera of another device (e.g., a smartphone); the invention is not limited in this regard.
Preferably, when the binocular view generation device 40 cannot detect the real-object image of a desired display object among the real objects, the image capture device 30 may enlarge the shooting angle and recapture an image that includes the desired display object. Alternatively, when the binocular view generation device 40 cannot detect the real-object image of the desired display object, the image capture device 30 may prompt the user to turn toward the direction of the desired display object, so that an image including the desired display object can be recaptured. For example, the user may be prompted by image, text, audio, video, and so on. As an example, the image capture device 30 may prompt the user to turn toward the direction of the desired display object based on a pre-stored three-dimensional spatial position of the real object, or on a three-dimensional spatial position obtained via a positioning device, and then recapture an image including the desired display object.
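The fallback behaviour just described (enlarge the shooting angle, or prompt the user toward a stored object position) can be sketched as a small decision routine. The position map and the user-centred coordinate frame below are illustrative assumptions, not part of the described embodiments.

```python
import math

def recapture_action(detected_ids, desired_id, stored_positions, user_yaw_deg):
    """Decide how to recover when the desired display object is missing from
    the captured image.

    detected_ids     -- set of object ids found in the current capture
    desired_id       -- id of the object that should be displayed
    stored_positions -- hypothetical map of id -> (x, z) in a user-centred
                        horizontal plane (x to the right, z forward)
    user_yaw_deg     -- user's current heading, degrees, 0 = straight ahead

    Returns ("show", None), ("widen_fov", None), or ("prompt_turn", degrees),
    where a positive turn means "turn right".
    """
    if desired_id in detected_ids:
        return ("show", None)
    if desired_id not in stored_positions:
        # No stored position to point at: fall back to enlarging the angle.
        return ("widen_fov", None)
    x, z = stored_positions[desired_id]
    bearing = math.degrees(math.atan2(x, z))        # bearing of the object
    turn = (bearing - user_yaw_deg + 180) % 360 - 180  # shortest signed turn
    return ("prompt_turn", round(turn))
```

The returned turn angle could then drive any of the prompt modalities the text names (image, text, audio, video).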
Figure 28 shows a block diagram of a head-mounted display for displaying a real object according to another exemplary embodiment of the present invention. As shown in Figure 28, in addition to the real-object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a virtual field-of-view image generation device 50.
Specifically, the virtual field-of-view image generation device 50 is configured to generate the virtual field-of-view image on which the binocular view of the real object is superimposed.
As an example, the head-mounted display for displaying a real object according to another exemplary embodiment of the present invention may further include a virtual scene image acquisition device (not shown).
The virtual scene image acquisition device is configured to obtain a virtual scene image reflecting a virtual scene, and the virtual field-of-view image generation device 50 generates the virtual field-of-view image superimposed with the binocular view of the real object by superimposing the binocular view of the real object onto the virtual scene image. That is, the virtual field-of-view image presented to the user spatially fuses the binocular view of the real object with the virtual scene image, so that the user can complete necessary interactions with real objects that require visual feedback while retaining the normal virtual scene experience of the head-mounted display.
Here, the virtual scene image is an image, corresponding to the application currently running on the head-mounted display, that reflects the virtual scene to be presented in the user's virtual field of view. For example, if the application currently running on the head-mounted display is a virtual motion-sensing game such as boxing or golf, the virtual scene image is an image reflecting the virtual game scene to be presented in the user's virtual field of view; if the application currently running is a movie-watching application, the virtual scene image is an image reflecting the virtual theater screen scene to be presented in the user's virtual field of view.
As an example, the virtual field-of-view image generation device 50 may superimpose the binocular view of the real object onto the virtual field-of-view image of the head-mounted display according to one of the following display modes: displaying only the binocular view of the real object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real object; spatially fusing the binocular view of the real object with the virtual scene image; displaying the binocular view of the real object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real object in picture-in-picture form.
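The five display modes listed above can be represented as a simple enumeration, with a helper computing an inset rectangle for the two picture-in-picture modes. The bottom-right placement and 25% scale are assumptions; the text does not fix where or how large the inset is.

```python
from enum import Enum, auto

class DisplayMode(Enum):
    REAL_ONLY = auto()      # only the real-object binocular view
    VIRTUAL_ONLY = auto()   # only the virtual scene image
    SPATIAL_FUSE = auto()   # fuse both in three-dimensional space
    REAL_AS_PIP = auto()    # real-object view inset over the virtual scene
    VIRTUAL_AS_PIP = auto() # virtual scene inset over the real-object view

def pip_rect(frame_w, frame_h, scale=0.25, margin=16):
    """Compute a bottom-right inset rectangle (x, y, w, h) for the
    picture-in-picture modes; placement and size are assumptions."""
    w, h = int(frame_w * scale), int(frame_h * scale)
    return (frame_w - w - margin, frame_h - h - margin, w, h)

# For a 1920x1080 virtual field of view, the inset occupies a 480x270 patch.
inset = pip_rect(1920, 1080)
```

A compositor would then blit the secondary view into `inset` only when the selected mode is one of the two picture-in-picture modes.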
As an example, the virtual field-of-view image generation device 50 may display the real object as at least one of semi-transparent, a contour line, or a 3D grid line. For example, when a virtual object in the virtual scene image and the real object occlude each other in three-dimensional space, the virtual field-of-view image generation device 50 may display the real object as at least one of semi-transparent, a contour line, or a 3D grid line, so as to reduce its occlusion of the virtual object and its impact on viewing the virtual scene image.
In addition, as an example, when a virtual object in the virtual scene image and the real object occlude each other in three-dimensional space, the virtual field-of-view image generation device 50 may also scale and/or move the virtual object. For example, it may scale and/or move only the virtual object that is occluded by (or occludes) the real object, or it may scale and/or move all virtual objects in the virtual scene image. It should be understood that the virtual field-of-view image generation device 50 may automatically judge the occlusion between virtual objects and the real object in three-dimensional space and scale and/or move the virtual objects accordingly; it may also scale and/or move virtual objects according to the user's operation.
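A minimal sketch of the occlusion handling described above, using axis-aligned bounding boxes: if a virtual object and the real object overlap in three-dimensional space, the virtual object is shrunk about its centre. Shrinking is only one of the responses the text names; moving the virtual object, or rendering the real object as a semi-transparent contour or 3D grid, are equally valid alternatives.

```python
def boxes_overlap(a, b):
    """Axis-aligned 3D box overlap test; each box is (min_xyz, max_xyz)."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def resolve_occlusion(virtual_box, real_box, shrink=0.8):
    """Shrink the virtual object about its centre when it occludes, or is
    occluded by, the real object; otherwise leave it unchanged.
    The 0.8 shrink factor is an illustrative assumption."""
    if not boxes_overlap(virtual_box, real_box):
        return virtual_box
    vmin, vmax = virtual_box
    centre = [(vmin[i] + vmax[i]) / 2 for i in range(3)]
    half = [(vmax[i] - vmin[i]) / 2 * shrink for i in range(3)]
    return ([centre[i] - half[i] for i in range(3)],
            [centre[i] + half[i] for i in range(3)])
```

An automatic implementation would run this test each frame against the tracked pose of the real object; a user-driven one would apply the same transform on demand.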
Moreover, preferably, the virtual field-of-view image generation device 50 may add and/or delete real objects superimposed onto the virtual field-of-view image according to the user's operation. That is, according to his or her own needs, the user may add to the virtual field-of-view image the binocular view of a real object not yet shown in it, and/or delete from it the binocular view of a redundant real object that does not need to be presented, so as to reduce the impact on viewing the virtual scene image.
Preferably, the virtual field-of-view image superimposed with the binocular view of the real object may be presented to the user only under appropriate circumstances, and its presentation may likewise be terminated under appropriate circumstances. Figure 29 shows a block diagram of a head-mounted display for displaying a real object according to another exemplary embodiment of the present invention. As shown in Figure 29, in addition to the real-object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a display control device 60.
Specifically, the display control device 60 is configured to determine whether the real objects around the user need to be presented to the user. When the display control device 60 determines that they do, the real-object view acquisition device 10 obtains the binocular view of a real object located around the user and/or the display device 20 presents to the user the virtual field-of-view image superimposed with the binocular view of the real object.
As an example, when the display control device 60 detects a scene in which interaction with a surrounding real object is needed, it may determine that the surrounding real objects need to be presented to the user. Scenes requiring interaction with surrounding real objects may include at least one of the following: a scene in which an input operation must be performed via a real object (for example, using a keyboard, mouse, or handle); a scene in which a collision with a real object must be avoided (for example, dodging an approaching person); and a scene in which a real object must be grasped (for example, eating or drinking).
According to an exemplary embodiment of the present invention, the timing for presenting real objects may be determined according to various situations. As an example, the display control device 60 may determine that the surrounding real objects need to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of real objects is received; a real object around the user is determined to match a preset presentation target; a control requiring operation via a real object is detected in the application interface shown in the virtual field-of-view image; a body part of the user is detected to be close to a real object; a body part of the user is detected to be moving toward a real object; the application running on the head-mounted display is determined to currently need a real object; or a preset time for interacting with a surrounding real object is determined to have arrived.
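The trigger conditions above amount to an any-of check over sensed state and application state. The flag names below are hypothetical; an implementation would derive them from the actual sensors and the running application.

```python
def should_present_real_object(state):
    """Return True when at least one of the presentation triggers holds.

    `state` is a dict of boolean flags mirroring the conditions in the text
    (flag names are illustrative assumptions); missing flags default to False.
    """
    triggers = (
        "user_requested_presentation",        # explicit user input
        "object_matches_preset_target",       # preset presentation target seen
        "ui_control_needs_real_object",       # e.g. a text field needing a keyboard
        "body_part_near_object",              # hand close to the object
        "body_part_moving_toward_object",     # hand approaching the object
        "running_app_needs_object",           # application currently needs it
        "scheduled_interaction_time_reached", # preset interaction time arrived
    )
    return any(state.get(flag, False) for flag in triggers)
```

The symmetrical termination conditions described later could use the same pattern with the opposite set of flags.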
The display control device 60 can thus determine at which moment the user needs to see the surrounding real objects, that is, the timing for presenting the binocular view of the surrounding real objects to the user, so that the user can learn the actual three-dimensional spatial positions and three-dimensional postures of the surrounding real objects in time.
As an example, the display control device 60 may further determine whether the surrounding real objects need to continue to be presented to the user; when it determines that they do not, the display device 20 may present to the user a virtual field-of-view image on which no binocular view of a real object is superimposed.
As an example, when the display control device 60 detects that a scene requiring interaction with a surrounding real object has ended, it may determine that the surrounding real objects need not continue to be presented to the user.
As an example, the display control device 60 may determine that real objects need not be presented to the user when at least one of the following conditions is satisfied: a user input terminating the presentation of real objects is received; the real objects around the user are determined not to match the preset presentation target; no control requiring operation via a real object is detected in the application interface shown in the virtual field-of-view image; a body part of the user is determined to be far from the real object; the application running on the head-mounted display is determined not to currently need a real object; the user is determined not to have performed any operation via the real object during a preset time period; or the user is determined to be able to perform the operation without watching the real object.
As an example, the user input requesting the presentation of real objects and/or terminating it may be realized by at least one of the following: a touch operation, a physical button operation, a remote-command input operation, a voice-control operation, a gesture, a head movement, a body movement, an eye movement, a touching action, or a grasping action.
As for the preset presentation target, it may be an object that the head-mounted display is configured by default to present, or an object that the user has set to be presented according to his or her own needs. For example, the preset presentation target may be food, tableware, the user's hands, an object bearing a specific label, a person, and so on.
When the display control device 60 determines that the surrounding real objects do not need to be presented to the user, a virtual field-of-view image without any superimposed binocular view of a real object (that is, only the virtual scene image) may be presented, so as not to affect the user's viewing of the virtual scene image.
Figure 30 shows a block diagram of a head-mounted display for displaying a real object according to another exemplary embodiment of the present invention. As shown in Figure 30, in addition to the real-object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a display item acquisition device 70.
The display item acquisition device 70 is configured to obtain display items about Internet-of-Things (IoT) devices and to superimpose at least one of the obtained display items onto the virtual field-of-view image, where a display item expresses the following about an IoT device: an operation interface, an operation state, notification information, or indication information.
Here, the notification information may be text, audio, video, image, or similar information. For example, if the IoT device is a communication device, the notification message may be text information about a missed call; if the IoT device is an access-control device, the notification message may be a captured monitoring image.
The indication information is text, audio, video, image, or similar information used to direct the user to find the IoT device. For example, the indication information may be an arrow indicator whose direction tells the user the bearing of the IoT device relative to the user; it may also be text indicating the relative position of the user and the IoT device (for example, "the communication device is two meters to your front-left").
As an example, the display item acquisition device 70 may obtain the display items about IoT devices through at least one of the following processes: capturing an image of an IoT device located within the user's real field of view and extracting the display item from the captured image of the IoT device; receiving display items from IoT devices located within and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view as the indication information.
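The textual indication information in the example above ("two meters to your front-left") can be derived from a sensed device offset relative to the head-mounted display. The eight sector names and the user-centred (right, forward) coordinate convention below are illustrative assumptions.

```python
import math

def indication_text(device_name, dx, dz):
    """Turn a sensed device offset into a textual indication.

    dx -- metres to the user's right (negative = left), an assumed convention
    dz -- metres in front of the user (negative = behind)
    """
    dist = math.hypot(dx, dz)
    angle = math.degrees(math.atan2(dx, dz)) % 360  # 0 deg = straight ahead
    sectors = ["ahead", "front-right", "right", "back-right",
               "behind", "back-left", "left", "front-left"]
    sector = sectors[int((angle + 22.5) // 45) % 8]
    return f"{device_name} is {dist:.1f} m {sector} of you"
```

The same offset could equally drive an arrow indicator instead of text, as the passage above allows.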
In addition, the head-mounted display for displaying a real object according to another exemplary embodiment of the present invention may further include a control device (not shown). The control device is configured to remotely control the IoT device to execute corresponding processing according to the user's operation on the display item. With the above head-mounted display, the user can learn relevant information about surrounding IoT devices while using the head-mounted display, and can also remotely control the IoT devices to execute corresponding processing.
It should be understood that the head-mounted display for displaying a real object according to an exemplary embodiment of the present invention can execute the corresponding processing for the concrete application scenes described with reference to Figures 5 to 25, which is not repeated here.
With the method of displaying a real object in a head-mounted display and the head-mounted display according to the exemplary embodiments of the present invention, a virtual field-of-view image superimposed with the binocular view of surrounding real objects can be presented to the user, so that while wearing the head-mounted display the user can still perceive the actual three-dimensional spatial positions, three-dimensional postures, and related attributes of the surrounding real objects, conveniently interact with them, and complete necessary actions that require visual feedback. In addition, embodiments of the present invention can judge the timing at which the binocular view of the surrounding real objects needs to be shown to the user, and can show it within the virtual field-of-view image in a suitable manner, so that the user obtains an enhanced virtual field-of-view experience.
Although some exemplary embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that modifications may be made to these embodiments without departing from the principle and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (70)
1. A method of displaying a real object in a head-mounted display, comprising:
(A) obtaining a binocular view of a real object located around a user; and
(B) presenting to the user a virtual field-of-view image superimposed with the binocular view of the real object,
wherein step (A) and/or step (B) is executed automatically upon determining that the real objects around the user need to be presented to the user,
and wherein it is determined that the real object needs to be presented to the user when at least one of the following conditions is satisfied: a real object around the user is determined to match a preset presentation target; a control requiring operation via a real object is detected in an application interface shown in the virtual field-of-view image; a body part of the user is detected to be close to the real object; a body part of the user is detected to be moving toward the real object; an application running on the head-mounted display is determined to currently need the real object; a preset time for interacting with a real object around the user is determined to have arrived.
2. The method according to claim 1, wherein the real object comprises at least one of: an object close to the user's body, a labeled object, an object designated by the user, an object currently needed by the application running on the head-mounted display, and an object required for operating a control.
3. The method according to claim 1, further comprising: (C) presenting to the user a virtual field-of-view image without the superimposed binocular view of the real object when at least one of the following conditions is satisfied: a user input terminating presentation of the real object is received; the real object around the user is determined not to match the preset presentation target; no control requiring operation via a real object is detected in the application interface shown in the virtual field-of-view image; a body part of the user is determined to be far from the real object; the application running on the head-mounted display is determined not to currently need the real object; the user is determined not to have performed an operation via the real object during a preset time period; the user is determined to be able to perform an operation without watching the real object.
4. The method according to any one of claims 1 to 3, wherein step (A) comprises: capturing, by a single filming device, an image including the real object located around the user, and obtaining the binocular view of the real object located around the user from the captured image.
5. The method according to claim 4, wherein, in step (A), a real-object image is detected in the captured image, a real-object image of another viewpoint is determined based on the detected real-object image, and the binocular view of the real object is obtained based on the detected real-object image and the real-object image of the other viewpoint.
6. The method according to claim 5, wherein the step of obtaining the binocular view of the real object based on the detected real-object image and the real-object image of the other viewpoint comprises: performing viewpoint conversion on the detected real-object image and the real-object image of the other viewpoint, based on a positional relationship between the single filming device and the user's two eyes, to obtain the binocular view of the real object.
7. The method according to any one of claims 1 to 3, wherein step (A) comprises: capturing, by a filming device, an image including the real object located around the user, detecting a real-object image in the captured image, and obtaining the binocular view of the real object based on the detected real-object image, wherein the filming device comprises a depth camera, or the filming device comprises at least two single-viewpoint cameras.
8. The method according to claim 7, wherein the step of obtaining the binocular view of the real object based on the detected real-object image comprises: performing viewpoint conversion on the detected real-object image, based on a positional relationship between the filming device and the user's two eyes, to obtain the binocular view of the real object.
9. The method according to claim 5, wherein, when the real-object image of a desired display object among the real objects cannot be detected, the shooting angle is enlarged to recapture an image including the desired display object, or the user is prompted to turn toward the direction of the desired display object so that an image including the desired display object can be recaptured.
10. The method according to claim 4, wherein, when the real-object image of a desired display object among the real objects cannot be detected, the shooting angle is enlarged to recapture an image including the desired display object, or the user is prompted to turn toward the direction of the desired display object so that an image including the desired display object can be recaptured.
11. The method according to claim 7, wherein, when the real-object image of a desired display object among the real objects cannot be detected, the shooting angle is enlarged to recapture an image including the desired display object, or the user is prompted to turn toward the direction of the desired display object so that an image including the desired display object can be recaptured.
12. The method according to any one of claims 1 to 3, wherein, in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real object onto the virtual scene image.
13. The method according to claim 4, wherein, in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real object onto the virtual scene image.
14. The method according to claim 7, wherein, in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real object onto the virtual scene image.
15. The method according to claim 9, wherein, in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real object onto the virtual scene image.
16. The method according to claim 12, wherein, in step (B), when a virtual object in the virtual scene image and the real object occlude each other in three-dimensional space, the virtual object is scaled and/or moved.
17. The method according to claim 12, wherein, in step (B), the real object is displayed as at least one of semi-transparent, a contour line, or a 3D grid line.
18. The method according to claim 12, wherein, in step (B), the binocular view of the real object is superimposed onto the virtual field-of-view image of the head-mounted display according to one of the following display modes: displaying only the binocular view of the real object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real object; spatially fusing the binocular view of the real object with the virtual scene image; displaying the binocular view of the real object on the virtual scene image in picture-in-picture form; displaying the virtual scene image on the binocular view of the real object in picture-in-picture form.
19. The method according to claim 13, wherein, in step (B), when a virtual object in the virtual scene image and the real object occlude each other in three-dimensional space, the virtual object is scaled and/or moved.
20. The method according to claim 13, wherein, in step (B), the real object is displayed as at least one of semi-transparent, a contour line, or a 3D grid line.
21. The method according to claim 13, wherein, in step (B), the binocular view of the real object is superimposed onto the virtual field-of-view image of the head-mounted display according to one of the following display modes: displaying only the binocular view of the real object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real object; spatially fusing the binocular view of the real object with the virtual scene image; displaying the binocular view of the real object on the virtual scene image in picture-in-picture form; displaying the virtual scene image on the binocular view of the real object in picture-in-picture form.
22. The method according to any one of claims 1 to 3, wherein, in step (B), real objects superimposed onto the virtual field-of-view image are added and/or deleted according to the user's operation.
23. The method according to claim 4, wherein, in step (B), real objects superimposed onto the virtual field-of-view image are added and/or deleted according to the user's operation.
24. The method according to claim 7, wherein, in step (B), real objects superimposed onto the virtual field-of-view image are added and/or deleted according to the user's operation.
25. The method according to claim 9, wherein, in step (B), real objects superimposed onto the virtual field-of-view image are added and/or deleted according to the user's operation.
26. The method according to claim 12, wherein, in step (B), real objects superimposed onto the virtual field-of-view image are added and/or deleted according to the user's operation.
27. The method according to any one of claims 1 to 3, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
28. The method according to claim 4, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
29. The method according to claim 7, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
30. The method according to claim 9, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
31. The method according to claim 12, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
32. The method according to claim 22, further comprising:
(D) obtaining display items about Internet-of-Things devices, and superimposing at least one of the obtained display items onto the virtual field-of-view image, wherein a display item expresses the following about an Internet-of-Things device: an operation interface, an operation state, notification information, indication information.
33. The method according to claim 27, wherein the display item about the Internet of Things device is acquired by at least one of the following processes: capturing an image of an Internet of Things device located within the user's real field of view, and extracting the display item about the Internet of Things device from the captured image; receiving the display item about the Internet of Things device from an Internet of Things device located within and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an Internet of Things device located outside the user's real field of view as the indication information.
34. The method according to claim 28, wherein the display item about the Internet of Things device is acquired by at least one of the following processes: capturing an image of an Internet of Things device located within the user's real field of view, and extracting the display item about the Internet of Things device from the captured image; receiving the display item about the Internet of Things device from an Internet of Things device located within and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an Internet of Things device located outside the user's real field of view as the indication information.
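The three acquisition routes recited in claims 33 and 34 can be illustrated with a small sketch. This is not part of the claims: all names (`IoTDevice`, `acquire_display_item`, the flag fields) are hypothetical, and the image-extraction and sensing steps are elided.

```python
from dataclasses import dataclass

@dataclass
class IoTDevice:
    name: str
    in_field_of_view: bool   # device lies within the user's real field of view
    supports_push: bool      # device can send its display item directly
    position: tuple          # position relative to the head-mounted display

def acquire_display_item(device: IoTDevice) -> dict:
    """Select one of the three claimed acquisition routes for a display item."""
    if device.in_field_of_view:
        # Route 1: capture an image of the device and extract the display
        # item from the captured image (extraction elided in this sketch).
        return {"source": "image-extraction", "device": device.name}
    if device.supports_push:
        # Route 2: receive the display item from the device itself.
        return {"source": "device-push", "device": device.name}
    # Route 3: sense the out-of-view device's position relative to the
    # head-mounted display and use it as indication information.
    return {"source": "position-indication", "device": device.name,
            "indication": device.position}
```

A real display item acquisition device would combine these routes rather than pick exactly one; the sketch only shows the selection logic.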
35. The method according to claim 27, further comprising: (E) remotely controlling the Internet of Things device to execute corresponding processing according to the user's operation on the display item.
36. The method according to claim 28, further comprising: (E) remotely controlling the Internet of Things device to execute corresponding processing according to the user's operation on the display item.
37. A head-mounted display for displaying a real-world object, comprising:
a real-world object view acquisition device that acquires a binocular view of a real-world object located around a user;
a display device that presents to the user a virtual field-of-view image superimposed with the binocular view of the real-world object; and
a display control device that determines whether the real-world object around the user needs to be presented to the user, wherein, when the display control device determines that the real-world object around the user needs to be presented to the user, the real-world object view acquisition device acquires the binocular view of the real-world object located around the user and/or the display device presents to the user the virtual field-of-view image superimposed with the binocular view of the real-world object,
wherein the display control device determines that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: it is determined that a real-world object around the user matches a preset presentation object; a control requiring an operation using a real-world object is detected in an application interface displayed in the virtual field-of-view image; a body part of the user is detected approaching a real-world object; a body part of the user is detected moving toward a real-world object; it is determined that an application running on the head-mounted display currently needs to use a real-world object; it is determined that a preset time for interacting with a real-world object around the user has been reached.
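The presentation conditions of claim 37 reduce to a disjunction that a display control device could evaluate each frame. The sketch below is illustrative only; the flag names are hypothetical stand-ins for signals a real device would derive from its sensors and the running application.

```python
def should_present_real_object(state: dict) -> bool:
    """Return True if any of the presentation conditions of claim 37 holds."""
    conditions = (
        state.get("matches_preset_object", False),          # preset presentation object matched
        state.get("control_needs_real_object", False),      # UI control requires a real object
        state.get("body_part_near_object", False),          # body part approaching the object
        state.get("body_part_moving_toward_object", False), # body part moving toward the object
        state.get("app_needs_real_object", False),          # running app needs the object now
        state.get("interaction_time_reached", False),       # preset interaction time reached
    )
    return any(conditions)
```

The symmetric "stop presenting" conditions of claim 39 would be a second disjunction checked the same way.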
38. The head-mounted display according to claim 37, wherein the real-world object includes at least one of the following items: an object close to the user's body, a marked object, an object specified by the user, an object currently needed by an application running on the head-mounted display, and an object needed for operating a control.
39. The head-mounted display according to claim 37, wherein, when at least one of the following conditions is satisfied, the display control device determines that the real-world object does not need to be presented to the user, and the display device presents to the user a virtual field-of-view image not superimposed with the binocular view of the real-world object: a user input to terminate presentation of the real-world object is received; it is determined that the real-world objects around the user do not match the preset presentation object; no control requiring an operation using a real-world object is detected in the application interface displayed in the virtual field-of-view image; it is determined that the user's body part is far from the real-world object; it is determined that the application running on the head-mounted display does not currently need to use the real-world object; it is determined that the user has not performed an operation using the real-world object during a preset time period; it is determined that the user can perform the operation without looking at the real-world object.
40. The head-mounted display according to any one of claims 37 to 39, further comprising:
an image capture device that captures, by a single shooting device, an image including the real-world object located around the user; and
a binocular view generation device that obtains the binocular view of the real-world object located around the user according to the captured image.
41. The head-mounted display according to claim 40, wherein the binocular view generation device detects a real-world object image from the captured image, determines a real-world object image of another viewpoint based on the detected real-world object image, and obtains the binocular view of the real-world object based on the detected real-world object image and the real-world object image of the other viewpoint.
42. The head-mounted display according to claim 41, wherein the binocular view generation device performs viewpoint correction on the detected real-world object image and the real-world object image of the other viewpoint, based on the positional relationship between the single shooting device and the user's two eyes, to obtain the binocular view of the real-world object.
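Claims 41 and 42 describe synthesizing a second viewpoint from a single camera image. A minimal stereo sketch, not part of the claims: under a pinhole model, the second view of an object at depth Z is offset horizontally by the disparity d = f · B / Z (focal length f in pixels, baseline B between the two eyes). Full viewpoint correction would also account for the offset between the camera and each eye, which this sketch omits.

```python
def synthesize_other_viewpoint(x_px: float, focal_px: float,
                               baseline_m: float, depth_m: float) -> float:
    """Shift a detected object's horizontal image position to a second viewpoint.

    Illustrative only: disparity = f * B / Z under a pinhole camera model.
    """
    disparity = focal_px * baseline_m / depth_m
    return x_px - disparity

# e.g. f = 800 px, interocular baseline 0.064 m, object at 2 m depth:
# disparity = 800 * 0.064 / 2 = 25.6 px, so x shifts from 400.0 to 374.4
```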
43. The head-mounted display according to any one of claims 37 to 39, further comprising:
an image capture device that captures, by a shooting device, an image including the real-world object located around the user, wherein the shooting device includes a depth camera, or the shooting device includes at least two single-viewpoint cameras; and
a binocular view generation device that detects a real-world object image from the captured image and obtains the binocular view of the real-world object based on the detected real-world object image.
44. The head-mounted display according to claim 43, wherein the binocular view generation device performs viewpoint correction on the detected real-world object image, based on the positional relationship between the shooting device and the user's two eyes, to obtain the binocular view of the real-world object.
45. The head-mounted display according to claim 41, wherein, when the binocular view generation device cannot detect the real-world object image of a desired display object among the real-world objects, the image capture device enlarges its shooting angle of view and recaptures an image including the desired display object, or the image capture device prompts the user to turn toward the orientation of the desired display object and recaptures an image including the desired display object.
46. The head-mounted display according to claim 40, wherein, when the binocular view generation device cannot detect the real-world object image of a desired display object among the real-world objects, the image capture device enlarges its shooting angle of view and recaptures an image including the desired display object, or the image capture device prompts the user to turn toward the orientation of the desired display object and recaptures an image including the desired display object.
47. The head-mounted display according to claim 43, wherein, when the binocular view generation device cannot detect the real-world object image of a desired display object among the real-world objects, the image capture device enlarges its shooting angle of view and recaptures an image including the desired display object, or the image capture device prompts the user to turn toward the orientation of the desired display object and recaptures an image including the desired display object.
48. The head-mounted display according to any one of claims 37 to 39, further comprising:
a virtual field-of-view image generation device that generates the virtual field-of-view image superimposed with the binocular view of the real-world object.
49. The head-mounted display according to claim 40, further comprising:
a virtual field-of-view image generation device that generates the virtual field-of-view image superimposed with the binocular view of the real-world object.
50. The head-mounted display according to claim 43, further comprising:
a virtual field-of-view image generation device that generates the virtual field-of-view image superimposed with the binocular view of the real-world object.
51. The head-mounted display according to claim 45, further comprising:
a virtual field-of-view image generation device that generates the virtual field-of-view image superimposed with the binocular view of the real-world object.
52. The head-mounted display according to claim 48, further comprising:
a virtual scene image acquisition device that acquires a virtual scene image reflecting a virtual scene,
wherein the virtual field-of-view image generation device generates the virtual field-of-view image superimposed with the binocular view of the real-world object by superimposing the binocular view of the real-world object on the virtual scene image.
53. The head-mounted display according to claim 49, further comprising:
a virtual scene image acquisition device that acquires a virtual scene image reflecting a virtual scene,
wherein the virtual field-of-view image generation device generates the virtual field-of-view image superimposed with the binocular view of the real-world object by superimposing the binocular view of the real-world object on the virtual scene image.
54. The head-mounted display according to claim 52, wherein, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device scales and/or moves the virtual object.
55. The head-mounted display according to claim 52, wherein the virtual field-of-view image generation device displays the real-world object as at least one of: translucent, a contour line, or a 3D grid.
56. The head-mounted display according to claim 52, wherein the virtual field-of-view image generation device superimposes the binocular view of the real-world object on the virtual scene image according to one of the following display modes: displaying only the binocular view of the real-world object without displaying the virtual scene image; displaying only the virtual scene image without displaying the binocular view of the real-world object; spatially fusing the binocular view of the real-world object with the virtual scene image for display; displaying the binocular view of the real-world object on the virtual scene image in a picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in a picture-in-picture form.
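The five display modes enumerated in claim 56 can be made concrete with a small compositing sketch. This is illustrative only; the enum values and the string-based `compose` function are hypothetical, standing in for a real renderer.

```python
from enum import Enum, auto

class DisplayMode(Enum):
    REAL_ONLY = auto()       # only the binocular view of the real-world object
    VIRTUAL_ONLY = auto()    # only the virtual scene image
    SPATIAL_FUSION = auto()  # real view spatially fused into the virtual scene
    REAL_PIP = auto()        # real view as picture-in-picture over the scene
    VIRTUAL_PIP = auto()     # virtual scene as picture-in-picture over real view

def compose(mode: DisplayMode, real_view: str, virtual_scene: str) -> str:
    """Describe the composited frame produced under each claimed display mode."""
    if mode is DisplayMode.REAL_ONLY:
        return real_view
    if mode is DisplayMode.VIRTUAL_ONLY:
        return virtual_scene
    if mode is DisplayMode.SPATIAL_FUSION:
        return f"fuse({real_view}, {virtual_scene})"
    if mode is DisplayMode.REAL_PIP:
        return f"{virtual_scene}+pip({real_view})"
    return f"{real_view}+pip({virtual_scene})"  # VIRTUAL_PIP
```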
57. The head-mounted display according to claim 53, wherein, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device scales and/or moves the virtual object.
58. The head-mounted display according to claim 53, wherein the virtual field-of-view image generation device displays the real-world object as at least one of: translucent, a contour line, or a 3D grid.
59. The head-mounted display according to claim 53, wherein the virtual field-of-view image generation device superimposes the binocular view of the real-world object on the virtual scene image according to one of the following display modes: displaying only the binocular view of the real-world object without displaying the virtual scene image; displaying only the virtual scene image without displaying the binocular view of the real-world object; spatially fusing the binocular view of the real-world object with the virtual scene image for display; displaying the binocular view of the real-world object on the virtual scene image in a picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in a picture-in-picture form.
60. The head-mounted display according to claim 48, wherein the virtual field-of-view image generation device adds and/or deletes real-world objects added to the virtual field-of-view image according to the user's operation.
61. The head-mounted display according to claim 49, wherein the virtual field-of-view image generation device adds and/or deletes real-world objects added to the virtual field-of-view image according to the user's operation.
62. The head-mounted display according to any one of claims 37 to 39, further comprising:
a display item acquisition device that acquires a display item about an Internet of Things device and adds the acquired display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet of Things device: an operation interface, an operation state, notification information, and indication information.
63. The head-mounted display according to claim 40, further comprising:
a display item acquisition device that acquires a display item about an Internet of Things device and adds the acquired display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet of Things device: an operation interface, an operation state, notification information, and indication information.
64. The head-mounted display according to claim 43, further comprising:
a display item acquisition device that acquires a display item about an Internet of Things device and adds the acquired display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet of Things device: an operation interface, an operation state, notification information, and indication information.
65. The head-mounted display according to claim 45, further comprising:
a display item acquisition device that acquires a display item about an Internet of Things device and adds the acquired display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet of Things device: an operation interface, an operation state, notification information, and indication information.
66. The head-mounted display according to claim 48, further comprising:
a display item acquisition device that acquires a display item about an Internet of Things device and adds the acquired display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the Internet of Things device: an operation interface, an operation state, notification information, and indication information.
67. The head-mounted display according to claim 62, wherein the display item acquisition device acquires the display item about the Internet of Things device by at least one of the following processes: capturing an image of an Internet of Things device located within the user's real field of view, and extracting the display item about the Internet of Things device from the captured image; receiving the display item about the Internet of Things device from an Internet of Things device located within and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an Internet of Things device located outside the user's real field of view as the indication information.
68. The head-mounted display according to claim 62, further comprising:
a control device that remotely controls the Internet of Things device to execute corresponding processing according to the user's operation on the display item.
69. The head-mounted display according to claim 63, wherein the display item acquisition device acquires the display item about the Internet of Things device by at least one of the following processes: capturing an image of an Internet of Things device located within the user's real field of view, and extracting the display item about the Internet of Things device from the captured image; receiving the display item about the Internet of Things device from an Internet of Things device located within and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an Internet of Things device located outside the user's real field of view as the indication information.
70. The head-mounted display according to claim 63, further comprising:
a control device that remotely controls the Internet of Things device to execute corresponding processing according to the user's operation on the display item.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510549225.7A CN106484085B (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
CN201910549634.5A CN110275619A (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
KR1020160106177A KR20170026164A (en) | 2015-08-31 | 2016-08-22 | Virtual reality display apparatus and display method thereof |
US15/252,853 US20170061696A1 (en) | 2015-08-31 | 2016-08-31 | Virtual reality display apparatus and display method thereof |
EP16842274.9A EP3281058A4 (en) | 2015-08-31 | 2016-08-31 | Virtual reality display apparatus and display method thereof |
PCT/KR2016/009711 WO2017039308A1 (en) | 2015-08-31 | 2016-08-31 | Virtual reality display apparatus and display method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510549225.7A CN106484085B (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910549634.5A Division CN110275619A (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106484085A CN106484085A (en) | 2017-03-08 |
CN106484085B true CN106484085B (en) | 2019-07-23 |
Family
ID=58236359
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510549225.7A Active CN106484085B (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
CN201910549634.5A Pending CN110275619A (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910549634.5A Pending CN110275619A (en) | 2015-08-31 | 2015-08-31 | The method and its head-mounted display of real-world object are shown in head-mounted display |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3281058A4 (en) |
KR (1) | KR20170026164A (en) |
CN (2) | CN106484085B (en) |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3606049A4 (en) * | 2017-03-22 | 2020-04-22 | Sony Corporation | Image processing device, method, and program |
CN107168515A (en) * | 2017-03-31 | 2017-09-15 | 北京奇艺世纪科技有限公司 | The localization method and device of handle in a kind of VR all-in-ones |
CN106896925A (en) * | 2017-04-14 | 2017-06-27 | 陈柳华 | The device that a kind of virtual reality is merged with real scene |
CN107222689B (en) * | 2017-05-18 | 2020-07-03 | 歌尔科技有限公司 | Real scene switching method and device based on VR (virtual reality) lens |
CN108960008B (en) * | 2017-05-22 | 2021-12-14 | 华为技术有限公司 | VR display method and device and VR equipment |
CN107229342A (en) * | 2017-06-30 | 2017-10-03 | 宇龙计算机通信科技(深圳)有限公司 | Document handling method and user equipment |
CN107577337A (en) * | 2017-07-25 | 2018-01-12 | 北京小鸟看看科技有限公司 | A kind of keyboard display method for wearing display device, device and wear display device |
US10627635B2 (en) * | 2017-08-02 | 2020-04-21 | Microsoft Technology Licensing, Llc | Transitioning into a VR environment and warning HMD users of real-world physical obstacles |
CN107422942A (en) * | 2017-08-15 | 2017-12-01 | 吴金河 | A kind of control system and method for immersion experience |
CN111448568B (en) * | 2017-09-29 | 2023-11-14 | 苹果公司 | Environment-based application presentation |
DE102017218215A1 (en) * | 2017-10-12 | 2019-04-18 | Audi Ag | A method of operating a head-mounted electronic display device and display system for displaying a virtual content |
KR102389185B1 (en) | 2017-10-17 | 2022-04-21 | 삼성전자주식회사 | Electronic device and method for executing function using input interface displayed via at least portion of content |
CN108169901A (en) * | 2017-12-27 | 2018-06-15 | 北京传嘉科技有限公司 | VR glasses |
CN108040247A (en) * | 2017-12-29 | 2018-05-15 | 湖南航天捷诚电子装备有限责任公司 | A kind of wear-type augmented reality display device and method |
CN108572723B (en) * | 2018-02-02 | 2021-01-29 | 陈尚语 | Carsickness prevention method and equipment |
KR102076647B1 (en) * | 2018-03-30 | 2020-02-12 | 데이터얼라이언스 주식회사 | IoT Device Control System And Method Using Virtual reality And Augmented Reality |
CN108519676B (en) * | 2018-04-09 | 2020-04-28 | 杭州瑞杰珑科技有限公司 | Head-wearing type vision-aiding device |
CN108764152B (en) * | 2018-05-29 | 2020-12-04 | 北京物灵智能科技有限公司 | Method and device for realizing interactive prompt based on picture matching and storage equipment |
CN108922115B (en) * | 2018-06-26 | 2020-12-18 | 联想(北京)有限公司 | Information processing method and electronic equipment |
WO2020051490A1 (en) * | 2018-09-07 | 2020-03-12 | Ocelot Laboratories Llc | Inserting imagery from a real environment into a virtual environment |
JP6739847B2 (en) * | 2018-09-12 | 2020-08-12 | 株式会社アルファコード | Image display control device and image display control program |
EP3671410B1 (en) * | 2018-12-19 | 2022-08-24 | Siemens Healthcare GmbH | Method and device to control a virtual reality display unit |
US11137908B2 (en) * | 2019-04-15 | 2021-10-05 | Apple Inc. | Keyboard operation with head-mounted device |
US20200327867A1 (en) * | 2019-04-15 | 2020-10-15 | XRSpace CO., LTD. | Head mounted display system capable of displaying a virtual scene and a map of a real environment in a picture-in-picture mode, related method and related non-transitory computer readable storage medium |
US10992926B2 (en) * | 2019-04-15 | 2021-04-27 | XRSpace CO., LTD. | Head mounted display system capable of displaying a virtual scene and a real scene in a picture-in-picture mode, related method and related non-transitory computer readable storage medium |
US11265487B2 (en) * | 2019-06-05 | 2022-03-01 | Mediatek Inc. | Camera view synthesis on head-mounted display for virtual reality and augmented reality |
CN110475103A (en) * | 2019-09-05 | 2019-11-19 | 上海临奇智能科技有限公司 | A kind of wear-type visual device |
CN111124112A (en) * | 2019-12-10 | 2020-05-08 | 北京一数科技有限公司 | Interactive display method and device for virtual interface and entity object |
CN111427447B (en) * | 2020-03-04 | 2023-08-29 | 青岛小鸟看看科技有限公司 | Virtual keyboard display method, head-mounted display device and system |
CN112462937B (en) * | 2020-11-23 | 2022-11-08 | 青岛小鸟看看科技有限公司 | Local perspective method and device of virtual reality equipment and virtual reality equipment |
CN112445341B (en) * | 2020-11-23 | 2022-11-08 | 青岛小鸟看看科技有限公司 | Keyboard perspective method and device of virtual reality equipment and virtual reality equipment |
CN112581054B (en) * | 2020-12-09 | 2023-08-29 | 珠海格力电器股份有限公司 | Material management method and material management device |
CN114827338A (en) * | 2021-01-29 | 2022-07-29 | 北京外号信息技术有限公司 | Method and electronic device for presenting virtual objects on a display medium of a device |
CN114035732A (en) * | 2021-11-04 | 2022-02-11 | 海南诺亦腾海洋科技研究院有限公司 | Method and device for controlling virtual experience content of VR head display equipment by one key |
WO2023130435A1 (en) * | 2022-01-10 | 2023-07-13 | 深圳市闪至科技有限公司 | Interaction method, head-mounted display device, and system and storage medium |
CN114972692B (en) * | 2022-05-12 | 2023-04-18 | 北京领为军融科技有限公司 | Target positioning method based on AI identification and mixed reality |
CN116744195B (en) * | 2023-08-10 | 2023-10-31 | 苏州清听声学科技有限公司 | Parametric array loudspeaker and directional deflection method thereof |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103975268A (en) * | 2011-10-07 | 2014-08-06 | 谷歌公司 | Wearable computer with nearby object response |
WO2015092968A1 (en) * | 2013-12-19 | 2015-06-25 | Sony Corporation | Head-mounted display device and image display method |
WO2015111283A1 (en) * | 2014-01-23 | 2015-07-30 | ソニー株式会社 | Image display device and image display method |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6037882A (en) * | 1997-09-30 | 2000-03-14 | Levy; David H. | Method and apparatus for inputting data to an electronic system |
GB2376397A (en) * | 2001-06-04 | 2002-12-11 | Hewlett Packard Co | Virtual or augmented reality |
JP2005044102A (en) * | 2003-07-28 | 2005-02-17 | Canon Inc | Image reproduction method and device |
JP2009025918A (en) * | 2007-07-17 | 2009-02-05 | Canon Inc | Image processor and image processing method |
CN101893935B (en) * | 2010-07-14 | 2012-01-11 | 北京航空航天大学 | Cooperative construction method for enhancing realistic table-tennis system based on real rackets |
US8884984B2 (en) * | 2010-10-15 | 2014-11-11 | Microsoft Corporation | Fusing virtual content into real content |
JP2012173772A (en) * | 2011-02-17 | 2012-09-10 | Panasonic Corp | User interaction apparatus, user interaction method, user interaction program and integrated circuit |
EP3654147A1 (en) * | 2011-03-29 | 2020-05-20 | QUALCOMM Incorporated | System for the rendering of shared digital interfaces relative to each user's point of view |
JP2014515147A (en) * | 2011-06-21 | 2014-06-26 | エンパイア テクノロジー ディベロップメント エルエルシー | Gesture-based user interface for augmented reality |
JP5765133B2 (en) * | 2011-08-16 | 2015-08-19 | 富士通株式会社 | Input device, input control method, and input control program |
US8941560B2 (en) * | 2011-09-21 | 2015-01-27 | Google Inc. | Wearable computer with superimposed controls and instructions for external device |
CN103018905A (en) * | 2011-09-23 | 2013-04-03 | 奇想创造事业股份有限公司 | Head-mounted somatosensory manipulation display system and method thereof |
2015
- 2015-08-31: CN CN201510549225.7A patent/CN106484085B/en active Active
- 2015-08-31: CN CN201910549634.5A patent/CN110275619A/en active Pending

2016
- 2016-08-22: KR KR1020160106177A patent/KR20170026164A/en not_active Application Discontinuation
- 2016-08-31: EP EP16842274.9A patent/EP3281058A4/en not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
KR20170026164A (en) | 2017-03-08 |
EP3281058A4 (en) | 2018-04-11 |
EP3281058A1 (en) | 2018-02-14 |
CN106484085A (en) | 2017-03-08 |
CN110275619A (en) | 2019-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106484085B (en) | The method and its head-mounted display of real-world object are shown in head-mounted display | |
US20170061696A1 (en) | Virtual reality display apparatus and display method thereof | |
US11810226B2 (en) | Systems and methods for utilizing a living entity as a marker for augmented reality content | |
CN111052043B (en) | Controlling external devices using a real-world interface | |
US11170580B2 (en) | Information processing device, information processing method, and recording medium | |
US10474336B2 (en) | Providing a user experience with virtual reality content and user-selected, real world objects | |
US10356398B2 (en) | Method for capturing virtual space and electronic device using the same | |
US10776618B2 (en) | Mobile terminal and control method therefor | |
CN106104650A (en) | Remote Device Control is carried out via gaze detection | |
US11481025B2 (en) | Display control apparatus, display apparatus, and display control method | |
US11151796B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
WO2015116388A2 (en) | Self-initiated change of appearance for subjects in video and images | |
KR20160128119A (en) | Mobile terminal and controlling metohd thereof | |
US11423627B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
CN107390863A (en) | Control method and device, electronic equipment, the storage medium of equipment | |
KR20220018561A (en) | Artificial Reality Systems with Personal Assistant Element for Gating User Interface Elements | |
US11195341B1 (en) | Augmented reality eyewear with 3D costumes | |
CN108027655A (en) | Information processing system, information processing equipment, control method and program | |
WO2015095507A1 (en) | Location-based system for sharing augmented reality content | |
WO2023064719A1 (en) | User interactions with remote devices | |
CN109144598A (en) | Electronics mask man-machine interaction method and system based on gesture | |
CN109448132B (en) | Display control method and device, electronic equipment and computer readable storage medium | |
CN111913560A (en) | Virtual content display method, device, system, terminal equipment and storage medium | |
US20230071828A1 (en) | Information processing apparatus, information processing system, and information processing method | |
CN118103799A (en) | User interaction with remote devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||