CN106484085A - Method of displaying a real-world object in a head-mounted display, and head-mounted display therefor - Google Patents


Info

Publication number
CN106484085A
CN106484085A (application number CN201510549225.7A)
Authority
CN
China
Prior art keywords
real-world object, user, image, view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510549225.7A
Other languages
Chinese (zh)
Other versions
CN106484085B (en)
Inventor
马赓宇
李炜明
金容圭
金度完
郑载润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecom R&D Center
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd and Samsung Electronics Co Ltd
Priority to CN201910549634.5A (published as CN110275619A)
Priority to CN201510549225.7A (granted as CN106484085B)
Priority to KR1020160106177A (published as KR20170026164A)
Priority to PCT/KR2016/009711 (published as WO2017039308A1)
Priority to US15/252,853 (published as US20170061696A1)
Priority to EP16842274.9A (published as EP3281058A4)
Publication of CN106484085A
Application granted
Publication of CN106484085B
Legal status: Active
Anticipated expiration


Classifications

    • G06T19/006 Mixed reality (G06T19/00 Manipulating 3D models or images for computer graphics)
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/017 Head-up displays, head mounted
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T7/70 Determining position or orientation of objects or cameras
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H04N13/398 Synchronisation or control of stereoscopic image reproducers
    • G02B2027/0132 Head-up displays comprising binocular systems
    • G02B2027/0134 Head-up displays comprising binocular systems of stereoscopic type
    • G02B2027/0138 Head-up displays comprising image capture systems, e.g. camera

Abstract

A method of displaying a real-world object in a head-mounted display, and a head-mounted display therefor, are provided. The method includes: (A) obtaining a binocular view of a real-world object located around a user; and (B) presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed. With this method and head-mounted display, the user can still perceive, while wearing the head-mounted display, three-dimensional information about surrounding real-world objects, such as their actual three-dimensional position, pose, and related attributes, thereby obtaining an enhanced virtual-field-of-view experience.

Description

Method of displaying a real-world object in a head-mounted display, and head-mounted display therefor
Technical field
The present disclosure relates generally to the technical field of head-mounted displays and, more particularly, to a method of displaying a real-world object in a head-mounted display and to a head-mounted display that displays a real-world object.
Background art
With the development of electronic technology, the head-mounted display (HMD) is becoming an important next-generation display device, with wide practical application in fields such as entertainment, education, and office work. When a user wears a head-mounted display, the eyes observe a virtual field of view constructed by the display and optical system of the head-mounted display, so the user cannot observe the real-world objects in the surrounding real environment, which often causes inconvenience in practical applications.
Summary of the invention
Exemplary embodiments of the present invention provide a method of displaying a real-world object in a head-mounted display and a corresponding head-mounted display, to solve the problem that a user wearing a head-mounted display cannot observe the real-world objects in the surrounding real environment and is thereby inconvenienced.
According to an exemplary embodiment of the present invention, a method of displaying a real-world object in a head-mounted display is provided, including: (A) obtaining a binocular view of a real-world object located around a user; and (B) presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed.
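The two-step flow of steps (A) and (B) can be sketched as follows. This is a minimal illustration with invented function names, not the patent's implementation; in particular the left and right views are trivially duplicated here, whereas a real system would capture or synthesize genuinely different per-eye viewpoints.

```python
# Hypothetical sketch of steps (A) and (B); names are placeholders.

def obtain_binocular_view(camera_frame):
    """Step (A): produce a (left, right) pair of real-object views.
    Duplicated here for simplicity; see the viewpoint-correction
    discussion later in the text for how real views would differ."""
    return camera_frame, camera_frame

def present(virtual_scene, binocular_view):
    """Step (B): superimpose the object's binocular view on the virtual
    field-of-view image, producing one composite per eye."""
    left, right = binocular_view
    return {"left": (virtual_scene, left), "right": (virtual_scene, right)}

out = present("scene_pixels", obtain_binocular_view("object_pixels"))
```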
Optionally, step (A) and/or step (B) are executed when it is determined that the real-world object around the user needs to be presented to the user.
Optionally, the real-world object includes at least one of the following: an object close to the user's body, a marked object, an object designated by the user, an object currently needed by an application running on the head-mounted display, or an object required to operate a control.
Optionally, it is determined that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that a real-world object around the user matches a preset object to be presented; a control requiring an operation to be performed with the real-world object is detected in an application interface displayed in the virtual field-of-view image; it is detected that a body part of the user is close to the real-world object; it is detected that a body part of the user is moving toward the real-world object; it is determined that an application running on the head-mounted display currently needs the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
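The trigger conditions above amount to a logical OR over independent signals. A hedged sketch, with flag names invented for illustration:

```python
# Invented condition flags mirroring the presentation triggers in the text.
TRIGGERS = (
    "user_requested",          # user input requesting presentation
    "matches_preset_object",   # nearby object matches a preset object
    "ui_needs_real_control",   # displayed app interface needs a real control
    "body_part_near_object",   # a body part is close to the object
    "body_part_approaching",   # a body part moves toward the object
    "app_needs_object",        # running application currently needs it
    "scheduled_interaction",   # preset interaction time reached
)

def should_present(flags: dict) -> bool:
    """True when at least one trigger condition is satisfied."""
    return any(flags.get(t, False) for t in TRIGGERS)
```

A symmetric `should_stop_presenting` over the termination conditions would follow the same pattern.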
Optionally, the method further includes: (C) presenting to the user a virtual field-of-view image on which the binocular view of the real-world object is not superimposed when at least one of the following conditions is satisfied: a user input terminating the presentation of the real-world object is received; it is determined that the real-world object around the user does not match a preset object to be presented; no control requiring an operation to be performed with the real-world object is detected in an application interface displayed in the virtual field-of-view image; it is determined that the user's body part has moved away from the real-world object; it is determined that the application running on the head-mounted display no longer needs the real-world object; it is determined that the user has not performed an operation with the real-world object during a preset time period; or it is determined that the user can perform the operation without watching the real-world object.
Optionally, step (A) includes: capturing, by a single capture device, an image including the real-world object around the user, and obtaining the binocular view of the real-world object located around the user from the captured image.
Optionally, in step (A), a real-world-object image is detected from the captured image, a real-world-object image of another viewpoint is determined based on the detected real-world-object image, and the binocular view of the real-world object is obtained based on the detected real-world-object image and the real-world-object image of the other viewpoint.
Optionally, obtaining the binocular view of the real-world object based on the detected real-world-object image and the real-world-object image of the other viewpoint includes: performing viewpoint correction on the detected real-world-object image and the real-world-object image of the other viewpoint based on the positional relationship between the single capture device and the user's two eyes, to obtain the binocular view of the real-world object.
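As an illustration of the single-capture-device case, if one assumes a simple pinhole camera model and a known object depth, the other eye's view of a detected object can be approximated by shifting it horizontally by the stereo disparity. The function names and the planar-shift approximation below are assumptions for illustration, not the patent's method:

```python
# Hypothetical disparity-based second-view synthesis (pinhole model).

def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal disparity between two viewpoints, in pixels."""
    return focal_px * baseline_m / depth_m

def second_view_bbox(bbox, focal_px, ipd_m, depth_m):
    """Shift a detected object's bounding box (x, y, w, h) to approximate
    its position as seen from the other eye's viewpoint."""
    x, y, w, h = bbox
    d = disparity_px(focal_px, ipd_m, depth_m)
    return (x - d, y, w, h)  # the right-eye image appears shifted left

left_bbox = (320, 200, 120, 80)  # detected object in the camera image
right_bbox = second_view_bbox(left_bbox, focal_px=800.0, ipd_m=0.063,
                              depth_m=0.5)
# disparity = 800 * 0.063 / 0.5 = 100.8 px
```

A full implementation would additionally warp the pixels of the object region, not just its bounding box, and apply the camera-to-eye viewpoint correction described in the text.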
Optionally, step (A) includes: capturing, by a capture device, an image including the real-world object around the user; detecting a real-world-object image from the captured image; and obtaining the binocular view of the real-world object based on the detected real-world-object image, wherein the capture device includes a depth camera, or at least two single-viewpoint cameras.
Optionally, obtaining the binocular view of the real-world object based on the detected real-world-object image includes: performing viewpoint correction on the detected real-world-object image based on the positional relationship between the capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, when a real-world-object image of a desired display object among the real-world objects cannot be detected, the capture angle of view is expanded to recapture an image including the desired display object, or the user is prompted to turn toward the direction in which the desired display object is located so that an image including it can be recaptured.
Optionally, in step (B), a virtual-scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is produced by superimposing the binocular view of the real-world object on the virtual-scene image.
Optionally, in step (B), when a virtual object in the virtual-scene image and the real-world object occlude each other in three-dimensional space, the virtual object is scaled and/or moved.
Optionally, in step (B), the real-world object is displayed as at least one of translucent, a contour line, or a 3D mesh.
Optionally, in step (B), the binocular view of the real-world object is superimposed on the virtual field-of-view image of the head-mounted display in one of the following ways: displaying only the binocular view of the real-world object without the virtual-scene image; displaying only the virtual-scene image without the binocular view of the real-world object; merging the binocular view of the real-world object and the virtual-scene image spatially for display; displaying the binocular view of the real-world object on the virtual-scene image in picture-in-picture form; or displaying the virtual-scene image on the binocular view of the real-world object in picture-in-picture form.
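The five superposition modes can be sketched as a per-pixel composition rule. The mode names, and the alpha blend standing in for spatial merging, are invented for illustration only:

```python
# Hedged sketch of the superposition modes of step (B).
from enum import Enum, auto

class OverlayMode(Enum):
    REAL_ONLY = auto()      # only the object's binocular view
    VIRTUAL_ONLY = auto()   # only the virtual-scene image
    SPATIAL_MERGE = auto()  # both merged in space (alpha blend stand-in)
    REAL_PIP = auto()       # object view as picture-in-picture over scene
    VIRTUAL_PIP = auto()    # scene as picture-in-picture over object view

def compose(real_px: float, virt_px: float, mode: OverlayMode,
            alpha: float = 0.5) -> float:
    """Composition of one eye's pixel (grayscale floats in [0, 1])."""
    if mode is OverlayMode.REAL_ONLY:
        return real_px
    if mode is OverlayMode.VIRTUAL_ONLY:
        return virt_px
    if mode is OverlayMode.SPATIAL_MERGE:
        return alpha * real_px + (1 - alpha) * virt_px
    # Picture-in-picture variants place one image in a small window;
    # this sketch shows only the pixel outside that window.
    return virt_px if mode is OverlayMode.REAL_PIP else real_px
```

A real spatial merge would of course resolve per-pixel depth rather than blend globally, consistent with the occlusion handling described above.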
Optionally, in step (B), real-world objects superimposed on the virtual field-of-view image are added and/or deleted according to the user's operation.
Optionally, the method further includes: (D) obtaining a display item regarding an Internet-of-Things (IoT) device, and superimposing the obtained display item on the virtual field-of-view image, wherein the display item represents at least one of the following for the IoT device: an operation interface, an operating state, notification information, or indication information.
Optionally, the display item regarding the IoT device is obtained by at least one of the following processes: capturing an image of an IoT device located within the user's real field of view and extracting the display item from the captured image of the IoT device; receiving the display item from an IoT device located within and/or outside the user's real field of view; or sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view and using that position as indication information.
Optionally, the method further includes: (E) remotely controlling the IoT device to perform corresponding processing according to the user's operation on the display item.
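Steps (D) and (E) together describe a mirror-and-relay loop: the head-mounted display superimposes a device's display item, and user operations on that item are relayed back to the device. A hedged sketch, with an invented message format and device API:

```python
# Hypothetical IoT device proxy; field and action names are assumptions.

class IoTDeviceProxy:
    def __init__(self, name: str):
        self.name = name
        self.state = "off"

    def display_item(self) -> dict:
        """Step (D): what the HMD would superimpose, combining an
        operating state with a small operation interface."""
        return {"device": self.name, "state": self.state,
                "actions": ["turn_on", "turn_off"]}

    def operate(self, action: str) -> str:
        """Step (E): a user operation on the display item is relayed
        to the device, which executes the corresponding processing."""
        if action == "turn_on":
            self.state = "on"
        elif action == "turn_off":
            self.state = "off"
        return self.state

lamp = IoTDeviceProxy("lamp")
lamp.operate("turn_on")
```

In the patent's terms the transport could equally be a received display item from a device outside the real field of view; only the relay pattern is illustrated here.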
According to another exemplary embodiment of the present invention, a head-mounted display that displays a real-world object is provided, including: a real-world-object view acquisition device, which obtains a binocular view of a real-world object located around a user; and a display device, which presents to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the head-mounted display further includes: a display control device, which determines whether the real-world object around the user needs to be presented to the user, wherein, when the display control device determines that the real-world object needs to be presented, the real-world-object view acquisition device obtains the binocular view of the real-world object around the user and/or the display device presents to the user the virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the real-world object includes at least one of the following: an object close to the user's body, a marked object, an object designated by the user, an object currently needed by an application running on the head-mounted display, or an object required to operate a control.
Optionally, the display control device determines that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that a real-world object around the user matches a preset object to be presented; a control requiring an operation to be performed with the real-world object is detected in an application interface displayed in the virtual field-of-view image; it is detected that a body part of the user is close to the real-world object; it is detected that a body part of the user is moving toward the real-world object; it is determined that an application running on the head-mounted display currently needs the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
Optionally, when at least one of the following conditions is satisfied, the display control device determines that the real-world object does not need to be presented to the user, and the display device presents to the user a virtual field-of-view image on which the binocular view of the real-world object is not superimposed: a user input terminating the presentation of the real-world object is received; it is determined that the real-world object around the user does not match a preset object to be presented; no control requiring an operation to be performed with the real-world object is detected in an application interface displayed in the virtual field-of-view image; it is determined that the user's body part has moved away from the real-world object; it is determined that the application running on the head-mounted display no longer needs the real-world object; it is determined that the user has not performed an operation with the real-world object during a preset time period; or it is determined that the user can perform the operation without watching the real-world object.
Optionally, the head-mounted display further includes: an image capture device, which captures, through a single capture device, an image including the real-world object around the user; and a binocular view generation device, which obtains the binocular view of the real-world object located around the user from the captured image.
Optionally, the binocular view generation device detects a real-world-object image from the captured image, determines a real-world-object image of another viewpoint based on the detected real-world-object image, and obtains the binocular view of the real-world object based on the detected real-world-object image and the real-world-object image of the other viewpoint.
Optionally, the binocular view generation device performs viewpoint correction on the detected real-world-object image and the real-world-object image of the other viewpoint based on the positional relationship between the single capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, the head-mounted display further includes: an image capture device, which captures, through a capture device, an image including the real-world object around the user, wherein the capture device includes a depth camera, or at least two single-viewpoint cameras; and a binocular view generation device, which detects a real-world-object image from the captured image and obtains the binocular view of the real-world object based on the detected real-world-object image.
Optionally, the binocular view generation device performs viewpoint correction on the detected real-world-object image based on the positional relationship between the capture device and the user's two eyes, to obtain the binocular view of the real-world object.
Optionally, when the binocular view generation device cannot detect a real-world-object image of a desired display object among the real-world objects, the image capture device expands the capture angle of view to recapture an image including the desired display object, or prompts the user to turn toward the direction in which the desired display object is located so that an image including it can be recaptured.
Optionally, the head-mounted display further includes: a virtual field-of-view image generation device, which produces the virtual field-of-view image on which the binocular view of the real-world object is superimposed.
Optionally, the head-mounted display further includes: a virtual scene image acquisition device, which obtains a virtual-scene image reflecting a virtual scene, wherein the virtual field-of-view image generation device produces the virtual field-of-view image on which the binocular view of the real-world object is superimposed by superimposing the binocular view of the real-world object on the virtual-scene image.
Optionally, when a virtual object in the virtual-scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device scales and/or moves the virtual object.
Optionally, the virtual field-of-view image generation device displays the real-world object as at least one of translucent, a contour line, or a 3D mesh.
Optionally, the virtual field-of-view image generation device superimposes the binocular view of the real-world object and the virtual-scene image in one of the following ways: displaying only the binocular view of the real-world object without the virtual-scene image; displaying only the virtual-scene image without the binocular view of the real-world object; merging the binocular view of the real-world object and the virtual-scene image spatially for display; displaying the binocular view of the real-world object on the virtual-scene image in picture-in-picture form; or displaying the virtual-scene image on the binocular view of the real-world object in picture-in-picture form.
Optionally, the virtual field-of-view image generation device adds and/or deletes, according to the user's operation, real-world objects superimposed on the virtual field-of-view image.
Optionally, the head-mounted display further includes: a display item acquisition device, which obtains a display item regarding an Internet-of-Things (IoT) device and superimposes the obtained display item on the virtual field-of-view image, wherein the display item represents at least one of the following for the IoT device: an operation interface, an operating state, notification information, or indication information.
Optionally, the display item acquisition device obtains the display item regarding the IoT device by at least one of the following processes: capturing an image of an IoT device located within the user's real field of view and extracting the display item from the captured image of the IoT device; receiving the display item from an IoT device located within and/or outside the user's real field of view; or sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view and using that position as indication information.
Optionally, the head-mounted display further includes: a manipulation device, which remotely controls the IoT device to perform corresponding processing according to the user's operation on the display item.
With the method of displaying a real-world object in a head-mounted display according to exemplary embodiments of the present invention, and with the corresponding head-mounted display, a virtual field-of-view image on which the binocular view of a real-world object around the user is superimposed can be presented to the user. The user, while wearing the head-mounted display, can thus still perceive three-dimensional information about surrounding real-world objects, such as their actual three-dimensional position, pose, and related attributes, making it easy to interact with surrounding real-world objects and to complete necessary actions that require visual feedback.
In addition, embodiments of the present invention can determine the appropriate moment at which the binocular view of a surrounding real-world object should be displayed to the user, and can display that binocular view within the virtual field-of-view image in a suitable manner.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows, will be apparent in part from the description, or may be learned through practice of the present general inventive concept.
Brief description of the drawings
The above and other objects and features of exemplary embodiments of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings, which exemplarily illustrate the embodiments, in which:
Fig. 1 is a flowchart of a method of displaying a real-world object in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 2 is a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 3 is a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 4 is a flowchart of a method of displaying a real-world object in a head-mounted display according to another exemplary embodiment of the present invention;
Fig. 5 is a flowchart of a method of displaying a physical keyboard in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 6 illustrates an example of a connection between a head-mounted display and a physical keyboard according to an exemplary embodiment of the present invention;
Fig. 7 illustrates an example in which a physical keyboard needs to be presented to the user according to an exemplary embodiment of the present invention;
Fig. 8 illustrates an example of prompting the user to turn toward the direction in which a keyboard is located according to an exemplary embodiment of the present invention;
Fig. 9 illustrates an example of obtaining a binocular view of a keyboard based on a captured image according to an exemplary embodiment of the present invention;
Fig. 10 illustrates an example of generating a virtual field-of-view image on which the binocular view of a keyboard is superimposed according to an exemplary embodiment of the present invention;
Fig. 11 illustrates an example of presenting to the user a virtual field-of-view image on which a binocular view of food is superimposed according to an exemplary embodiment of the present invention;
Fig. 12 is a flowchart of a method of displaying food in a head-mounted display according to an exemplary embodiment of the present invention;
Fig. 13 illustrates an example of an operated button according to an exemplary embodiment of the present invention;
Fig. 14 illustrates an example of a framing gesture according to an exemplary embodiment of the present invention;
Fig. 15 illustrates an example of determining that the user needs to eat by detecting a remote-control input operation according to an exemplary embodiment of the present invention;
Fig. 16 illustrates an example of determining, by means of a virtual pointer, the object that needs to be presented to the user according to an exemplary embodiment of the present invention;
Fig. 17 illustrates an example of displaying a real-world object according to an exemplary embodiment of the present invention;
Fig. 18 illustrates deleting a real-world object superimposed on the virtual field-of-view image according to an exemplary embodiment of the present invention;
Fig. 19 illustrates an example of displaying the binocular view of a real-world object according to an exemplary embodiment of the present invention;
Fig. 20 is a flowchart of a method of displaying, in a head-mounted display, an object with which a collision may occur according to an exemplary embodiment of the present invention;
Fig. 21 is a flowchart of a method of displaying, in a head-mounted display, a display item regarding an IoT device according to an exemplary embodiment of the present invention;
Fig. 22 illustrates an example of presenting to the user a virtual field-of-view image on which a display item is superimposed according to an exemplary embodiment of the present invention;
Fig. 23 illustrates an example of presenting to the user a virtual field-of-view image on which the operation interface of a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 24 illustrates an example of presenting to the user a virtual field-of-view image on which incoming-call information of a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 25 illustrates an example of presenting to the user a virtual field-of-view image on which a short message received by a mobile communication terminal is superimposed according to an exemplary embodiment of the present invention;
Fig. 26 is a block diagram of a head-mounted display that displays a real-world object according to an exemplary embodiment of the present invention;
Fig. 27 is a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 28 is a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 29 is a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention;
Fig. 30 is a block diagram of a head-mounted display that displays a real-world object according to another exemplary embodiment of the present invention.
Detailed description of embodiments
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like parts throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Hereinafter, a method of displaying a real-world object in a head-mounted display according to exemplary embodiments of the present invention will be described with reference to Figs. 1 to 4. The method may be performed by a head-mounted display, and may also be implemented by a computer program. For example, the method may be executed by an application for displaying a real-world object installed in the head-mounted display, or by a functional program implemented in the operating system of the head-mounted display. Alternatively, some steps of the method may be completed by the head-mounted display while the remaining steps are completed in cooperation with other equipment or devices outside the head-mounted display; the invention is not limited in this regard.
Fig. 1 is a flowchart of a method of displaying a real-world object in a head mounted display according to an exemplary embodiment of the present invention.
As shown in Fig. 1, in step S10, a binocular view of a real-world object located around the user is obtained.
Here, the binocular view is a binocular view for the eyes of the user wearing the head mounted display. From the binocular view of the real-world object, the user's brain can obtain depth information of the real-world object and thereby perceive its actual three-dimensional position and three-dimensional pose. That is, the three-dimensional position and pose of the real-world object perceived by the user through its binocular view are consistent with the three-dimensional position and pose the user would perceive by observing the real-world object directly with his or her eyes.
As an example, the real-world object may be an object to be presented that is set in advance according to object properties or the usage scene, and may include at least one of the following: an object close to the user's body, a labeled object, an object specified by the user, an object currently needed by the application running on the head mounted display, and an object required for operating a control.
As an example, an image including the real-world object around the user may be captured by a single photographing device, and the binocular view of the real-world object around the user may be acquired from the captured image.
Here, the single photographing device may be an ordinary photographing device having only one viewpoint. Since the captured image has no depth information, a real-world object image may accordingly be detected from the captured image, a real-world object image of another viewpoint may be determined based on the detected real-world object image, and the binocular view of the real-world object may be obtained based on the detected real-world object image and the real-world object image of the other viewpoint.
Here, the real-world object image is the image of the region in which the real-world object is located in the captured image. For example, the real-world object image may be detected from the captured image using various existing image recognition methods.
As an example, viewpoint correction may be performed on the detected real-world object image and the real-world object image of the other viewpoint based on the positional relationship between the single photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
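For a roughly planar object whose pose is known, the viewpoint correction described above can be sketched as a plane-induced homography that rewarps the camera's image into an eye's viewpoint. The following is a minimal sketch, not the patent's implementation; the intrinsics, eye offset, and plane parameters are illustrative assumptions.

```python
import numpy as np

def plane_homography(K_cam, K_eye, R, t, n, d):
    """Homography mapping camera pixels to eye-view pixels for points on
    the plane n . X = d (n, d expressed in the camera frame); (R, t) is
    the rigid transform from the camera frame to the eye frame."""
    return K_eye @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_cam)

def project(K, X):
    """Pinhole projection of a 3D point X to pixel coordinates."""
    p = K @ X
    return p[:2] / p[2]

# Illustrative parameters: 800 px focal length, ~3 cm camera-to-eye offset,
# object plane 0.5 m in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
th = np.radians(2.0)
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.03, 0.0, 0.0])
n, d = np.array([0.0, 0, 1]), 0.5

X1 = np.array([0.1, -0.05, 0.5])          # a point lying on the plane
H = plane_homography(K, K, R, t, n, d)
x1 = project(K, X1)                        # pixel in the camera image
q = H @ np.array([x1[0], x1[1], 1.0])      # rewarped to the eye view
corrected = q[:2] / q[2]
```

Applying one such homography per eye yields the two corrected views that make up the binocular view of a planar object.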
As another example, the binocular view of the real-world object may be obtained based on a captured stereo image. Specifically, an image including the real-world object around the user may be captured by a photographing device, a real-world object image may be detected from the captured image, and the binocular view of the real-world object may be obtained based on the detected real-world object image, wherein the photographing device includes a depth camera, or includes at least two single-viewpoint cameras. Here, the at least two single-viewpoint cameras may have overlapping fields of view, so that a stereo image with depth information can be captured by the depth camera or by the at least two single-viewpoint cameras.
As an example, viewpoint correction may be performed on the detected real-world object image based on the positional relationship between the photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
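For the two-camera variant, the depth information mentioned above comes from disparity between the rectified views; the standard relation is depth = focal_length x baseline / disparity. A minimal sketch under assumed camera parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (in meters) of a point seen by a rectified stereo pair.

    focal_px:     focal length in pixels (assumed identical for both cameras)
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal pixel offset of the point between the two views
    """
    return focal_px * baseline_m / disparity_px

# Illustrative: 800 px focal length, 6 cm baseline, 48 px disparity -> ~1 m.
d = depth_from_disparity(800, 0.06, 48)
```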
It should be understood that, as an example, the above-described single photographing device, depth camera, or single-viewpoint camera may be a camera built into the head mounted display, or may be a photographing device attached to the head mounted display; for example, it may be a camera of another device (for example, a smartphone). The present invention is not limited in this regard.
Preferably, when the real-world object image of a desired display object among the real-world objects cannot be detected, the shooting angle of view may be enlarged to recapture an image including the desired display object.
Alternatively, when the real-world object image of the desired display object among the real-world objects cannot be detected, the user may be prompted to turn toward the direction in which the desired display object is located, so that an image including the desired display object can be recaptured. For example, the user may be prompted by an image, text, audio, video, or the like. As an example, the user may be prompted to turn toward the direction of the desired display object based on a three-dimensional position of the real-world object stored in advance, or a three-dimensional position of the real-world object obtained via a positioning device, so as to recapture an image including the desired display object.
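The turn prompt can be derived from the stored object position and the current head pose, for example by comparing the bearing of the object with the head's yaw. This is an illustrative ground-plane sketch; the coordinate convention (yaw measured counter-clockwise from the +x axis) and the 15-degree "ahead" threshold are assumptions, not taken from the patent.

```python
import numpy as np

def turn_prompt(head_pos, head_yaw_rad, obj_pos):
    """Return ('left' | 'right' | 'ahead', signed angle in radians) telling
    the user which way to turn toward a stored object position."""
    d = np.asarray(obj_pos[:2]) - np.asarray(head_pos[:2])
    bearing = np.arctan2(d[1], d[0])
    # Wrap the angular difference into (-pi, pi]
    delta = (bearing - head_yaw_rad + np.pi) % (2 * np.pi) - np.pi
    if abs(delta) < np.radians(15):
        return "ahead", delta
    return ("left" if delta > 0 else "right"), delta
```

The returned direction would then drive the image, text, or audio prompt described above.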
In step S20, a virtual field-of-view image on which the binocular view of the real-world object is superimposed is presented to the user. In this way, the user can see the binocular view of the real-world object in the virtual field-of-view image (i.e., augmented virtual reality), perceive the actual three-dimensional position and pose of the real-world object, accurately judge the positional relationship between the real-world object and himself or herself as well as the three-dimensional pose of the real-world object, and complete actions that require visual image feedback.
Here, it should be understood that the virtual field-of-view image on which the binocular view of the real-world object is superimposed may be presented to the user by a display device integrated in the head mounted display, or by another display device externally connected to the head mounted display; the present invention is not limited in this regard.
As an example, in step S20, a virtual scene image reflecting a virtual scene may be obtained, and the virtual field-of-view image on which the binocular view of the real-world object is superimposed may be generated by superimposing the binocular view of the real-world object and the virtual scene image. That is, the virtual field-of-view image presented to the user is a virtual field-of-view image that spatially fuses the binocular view of the real-world object with the virtual scene image, so that the user can complete actions requiring visual image feedback and interact with the real-world object while enjoying the normal virtual scene experience of the head mounted display.
Here, the virtual scene image is the image reflecting the virtual scene that the application currently running on the head mounted display needs to present to the user in the user's virtual field of view. For example, if the application currently running on the head mounted display is a virtual motion-sensing game such as boxing or golf, the virtual scene image is the image of the virtual game scene that needs to be presented to the user in the virtual field of view; if the currently running application is an application for watching movies, the virtual scene image is the image of the virtual theater screen scene that needs to be presented to the user in the virtual field of view.
As an example, the binocular view of the real-world object may be added to the virtual field-of-view image of the head mounted display in one of the following manners: displaying only the binocular view of the real-world object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real-world object; spatially fusing the binocular view of the real-world object with the virtual scene image; displaying the binocular view of the real-world object on the virtual scene image in a picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in a picture-in-picture form.
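The picture-in-picture and semi-transparent display manners listed above both reduce to compositing one image over a region of the other. A minimal sketch with numpy image arrays (the function name and parameters are illustrative):

```python
import numpy as np

def composite_pip(scene, inset, top_left, alpha=1.0):
    """Overlay `inset` (H2 x W2 x 3) on `scene` (H x W x 3) at `top_left`
    = (row, col). alpha=1.0 gives an opaque picture-in-picture; alpha < 1.0
    gives the semi-transparent display manner."""
    out = scene.copy()
    y, x = top_left
    h, w = inset.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha * inset + (1 - alpha) * region).astype(out.dtype)
    return out

scene = np.zeros((4, 4, 3), np.uint8)          # stand-in virtual scene image
inset = np.full((2, 2, 3), 200, np.uint8)      # stand-in real-object view
pip = composite_pip(scene, inset, (1, 1))            # opaque overlay
half = composite_pip(scene, inset, (0, 0), alpha=0.5)  # semi-transparent
```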
As an example, the real-world object may be displayed as at least one of semi-transparent, a contour line, or a 3D grid line. For example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the real-world object may be displayed as at least one of semi-transparent, a contour line, or a 3D grid line, so as to reduce its occlusion of the virtual object in the virtual scene image and reduce the impact on viewing the virtual scene image.
In addition, as an example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual object may also be scaled and/or moved. For example, only the virtual object that is occluded by the real-world object in three-dimensional space may be scaled and/or moved, or all virtual objects in the virtual scene image may be scaled and/or moved when occlusion exists. It should be understood that the head mounted display may automatically determine the occlusion between the virtual object and the real-world object in three-dimensional space and scale and/or move the virtual object accordingly. In addition, the virtual object may also be scaled and/or moved according to a user operation.
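The automatic occlusion determination above can be approximated with axis-aligned 3D bounding boxes: if the boxes intersect, the virtual object is displaced. This is only an illustrative policy (box representation, sideways shift, and the shift distance are all assumptions):

```python
def boxes_overlap(a, b):
    """Axis-aligned 3D boxes as (min_xyz, max_xyz) pairs."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def resolve_occlusion(virtual_box, real_box, shift=2.0):
    """If the virtual object's box intersects the real object's box,
    move the virtual object sideways by `shift` meters."""
    if not boxes_overlap(virtual_box, real_box):
        return virtual_box
    vmin, vmax = virtual_box
    return ([vmin[0] + shift, vmin[1], vmin[2]],
            [vmax[0] + shift, vmax[1], vmax[2]])
```

Scaling the virtual object down instead of moving it would follow the same pattern, replacing the translation with a shrink about the box center.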
Moreover, preferably, the real-world object added to the virtual field-of-view image may be added and/or deleted according to a user operation. That is, according to his or her own needs, the user may add the binocular view of a real-world object not yet shown in the virtual field-of-view image, and/or delete from the virtual field-of-view image the binocular view of a real-world object that is unnecessary or need not be presented, so as to reduce the impact on viewing the virtual scene image.
Preferably, the virtual field-of-view image on which the binocular view of the real-world object is superimposed may be presented to the user only under appropriate circumstances. Fig. 2 is a flowchart of a method of displaying a real-world object in a head mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head mounted display shown in Fig. 2 may further include step S30. Step S10 and step S20 may be implemented with reference to the foregoing embodiments and will not be described here again.
In step S30, it is determined whether the real-world object around the user needs to be presented to the user, wherein step S10 and step S20 are executed when it is determined that the real-world object around the user needs to be presented to the user.
Here, it should be understood that the above steps are not limited to the sequence shown in Fig. 2 and may be adjusted appropriately as needed or according to product design. For example, step S10 may be executed continuously to obtain the binocular view of the real-world object around the user in real time, while step S20 is executed only when it is determined in step S30 that the real-world object around the user needs to be presented to the user.
As an example, when a scene requiring interaction with a real-world object around the user is detected, it may be determined that the real-world object around the user needs to be presented to the user. For example, the scene requiring interaction with a real-world object around the user may include at least one of the following: a scene in which an input operation needs to be performed via a real-world object (for example, a scene in which an input operation needs to be performed using a keyboard, a mouse, a handle, or the like), a scene in which a collision with a real-world object needs to be avoided (for example, a scene in which an approaching person needs to be avoided), and a scene in which a real-world object needs to be grasped by hand (for example, a scene of eating or drinking).
According to an exemplary embodiment of the present invention, the timing of presenting the real-world object may be determined according to various situations. As an example, it may be determined that the real-world object around the user needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; it is determined that a real-world object around the user matches a preset object to be presented; a control requiring an operation to be performed using a real-world object is detected in an application interface displayed in the virtual field-of-view image; it is detected that a body part of the user approaches the real-world object; it is detected that a body part of the user moves toward the real-world object; it is determined that the application running on the head mounted display currently needs to use the real-world object; or it is determined that a preset time for interacting with the real-world object around the user has arrived.
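The "at least one condition is satisfied" rule above is a simple disjunction over independently detected signals. A minimal sketch (the flag names are illustrative stand-ins for the detectors the patent describes):

```python
from dataclasses import dataclass

@dataclass
class PresentationContext:
    user_requested: bool = False          # user input requesting presentation
    matches_preset_object: bool = False   # object matches a preset present-list
    ui_needs_real_input: bool = False     # UI control needs a real input device
    body_part_near_object: bool = False   # body part approaches the object
    body_part_moving_toward: bool = False # body part moves toward the object
    app_needs_object: bool = False        # running app currently needs the object
    scheduled_time_reached: bool = False  # preset interaction time arrived

def should_present(ctx: PresentationContext) -> bool:
    """The real-world object is presented when at least one trigger holds."""
    return any([ctx.user_requested, ctx.matches_preset_object,
                ctx.ui_needs_real_input, ctx.body_part_near_object,
                ctx.body_part_moving_toward, ctx.app_needs_object,
                ctx.scheduled_time_reached])
```

The symmetric termination conditions of step S40 (described later) could be modeled the same way with the conditions negated or replaced.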
By the above method, it can be determined at which moments the user needs to see the surrounding real-world object, i.e., the timing for presenting the binocular view of the surrounding real-world object to the user, which makes it easy for the user to learn the actual three-dimensional position and pose of the surrounding real-world object in time.
Preferably, presenting the virtual field-of-view image on which the binocular view of the real-world object is superimposed to the user may also be terminated under appropriate circumstances. Fig. 3 is a flowchart of a method of displaying a real-world object in a head mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head mounted display shown in Fig. 3 may further include step S40 and step S50. Step S10 and step S20 may be implemented with reference to the foregoing embodiments and will not be described here again.
In step S40, it is determined whether the real-world object around the user needs to continue to be presented to the user, wherein step S50 is executed when it is determined that the real-world object around the user does not need to continue to be presented to the user.
As an example, when it is detected that the scene requiring interaction with the real-world object around the user has ended, it may be determined that the real-world object around the user does not need to continue to be presented to the user.
As an example, it may be determined that the real-world object around the user does not need to continue to be presented to the user when at least one of the following conditions is satisfied: a user input for terminating presentation of the real-world object is received; it is determined that the real-world object around the user does not match the preset object to be presented; no control requiring an operation to be performed using a real-world object is detected in the application interface displayed in the virtual field-of-view image; it is determined that the body part of the user moves away from the real-world object; it is determined that the application running on the head mounted display does not currently need to use the real-world object; it is determined that the user has not performed an operation using the real-world object during a preset time period; or it is determined that the user can perform the operation without viewing the real-world object.
As an example, the user input for requesting presentation of the real-world object and/or terminating presentation of the real-world object may be realized by at least one of the following: a touch operation, a physical button operation, a remote control command input operation, a voice control operation, a gesture action, a head action, a body action, a gaze action, a touch action, or a grip action.
With regard to the preset object to be presented, it may be an object set by default in the head mounted display as needing to be presented, or an object set by the user according to his or her own needs. For example, the preset object to be presented may be food, tableware, the user's hands, an object bearing a specific label, a person, or the like.
In step S50, a virtual field-of-view image on which the binocular view of the real-world object is not superimposed is presented to the user.
By the above method, when the real-world object around the user does not need to be presented to the user, the virtual field-of-view image without the binocular view of the real-world object superimposed can be presented to the user (i.e., only the virtual scene image is presented to the user), so as not to affect the user's viewing of the virtual scene image.
Fig. 4 is a flowchart of a method of displaying a real-world object in a head mounted display according to another exemplary embodiment of the present invention. In addition to step S10 and step S20 shown in Fig. 1, the method of displaying a real-world object in a head mounted display shown in Fig. 4 may further include step S60. Step S10 and step S20 may be implemented with reference to the foregoing embodiments and will not be described here again.
In step S60, a display item concerning an Internet-of-Things (IoT) device is obtained, and the obtained display item is superimposed on the virtual field-of-view image, wherein the display item represents at least one of the following of the IoT device: an operation interface, an operating state, notification information, or indication information.
Here, the notification information may be information such as text, audio, video, or an image. For example, if the IoT device is a communication device, the notification message may be a text message about a missed call; if the IoT device is an access control device, the notification message may be a captured monitoring image.
The indication information is information such as text, audio, video, or an image used to guide the user in finding the IoT device. For example, the indication information may be an indicator arrow, from whose direction the user can obtain the bearing of the IoT device relative to the user; the indication information may also be text showing the relative position of the user and the IoT device (for example, "the communication device is two meters to your front left").
As an example, the display item concerning the IoT device may be obtained by at least one of the following processes: capturing an image of an IoT device located in the user's true field of view and extracting the display item concerning the IoT device from the captured image; receiving the display item concerning the IoT device from an IoT device located inside and/or outside the user's true field of view; or sensing the position, relative to the head mounted display, of an IoT device located outside the user's true field of view as the indication information.
In addition, the method of displaying a real-world object in a head mounted display according to another exemplary embodiment of the present invention may further include: remotely controlling the IoT device to execute corresponding processing according to a user operation directed at the display item.
By the above method, the user can learn relevant information about the surrounding IoT devices while using the head mounted display, and can also remotely control the IoT devices to execute corresponding processing.
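The display-item and remote-control flow above can be sketched as a small data structure plus a dispatcher that routes a user operation back to the originating device. All names and the command format here are illustrative assumptions, not the patent's protocol.

```python
from dataclasses import dataclass

@dataclass
class DisplayItem:
    device_id: str
    kind: str      # "operation_interface", "state", "notification", or "indication"
    payload: str

class IoTRemote:
    """Collect display items for superimposition and route user operations
    back to the device each item came from."""
    def __init__(self):
        self.items = []   # items to superimpose on the virtual field-of-view image
        self.log = []     # commands dispatched to devices

    def add_item(self, item: DisplayItem):
        self.items.append(item)

    def operate(self, device_id: str, command: str) -> str:
        # In a real system this would be a network call to the IoT device.
        self.log.append((device_id, command))
        return f"{device_id}<-{command}"

remote = IoTRemote()
remote.add_item(DisplayItem("lamp", "state", "on"))
ack = remote.operate("lamp", "toggle")
```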
Hereinafter, specific implementations of the above method will be described with reference to specific application scenarios. It should be understood that the following specific implementations are not limited to their respective application scenarios and may well be applied in different scenarios; the specific implementations described for different scenarios may also be combined with one another, which is not restricted here.
Hereinafter, an application scenario of displaying a physical keyboard in a head mounted display will be described with reference to Fig. 5 to Fig. 10. It should be understood that although the following application scenario takes a physical keyboard as an example, it is equally applicable to similar interactive devices, for example, a mouse, a remote control, a game pad, or the like.
Fig. 5 is a flowchart of a method of displaying a physical keyboard in a head mounted display according to an exemplary embodiment of the present invention. Here, all steps of the method may be executed by the head mounted display, in which case the head mounted display may be connected to the physical keyboard in a wired or wireless manner.
In addition, in the method shown in Fig. 5, some steps may be executed by the head mounted display and the other steps by a processor outside the head mounted display. Fig. 6 shows an example of the connection between the head mounted display and the physical keyboard according to an exemplary embodiment of the present invention. As shown in Fig. 6, the head mounted display and the physical keyboard may each be connected to the processor in a wired or wireless manner; that is, some steps of the method are executed by the head mounted display while the other steps are executed by the processor outside the head mounted display.
Specifically, as shown in Fig. 5, in step S101, it is determined whether the physical keyboard around the user needs to be presented to the user.
Whether the physical keyboard around the user needs to be presented to the user may be determined according to the following implementations. As an example, when a control requiring an operation to be performed using a real-world object is detected in the application interface displayed in the virtual field-of-view image, it may be determined that the physical keyboard around the user needs to be presented to the user. For example, the attribute information of all controls in the application interface displayed in the virtual field-of-view image may be read, and it may be determined from the read attribute information whether there is a control requiring an operation to be performed via an interactive device; when it is determined that such a control exists, it may be determined that the interactive device around the user needs to be presented to the user, and the physical keyboard may be set as the interactive device to be presented. Fig. 7 shows examples in which the physical keyboard needs to be presented to the user according to an exemplary embodiment of the present invention. As shown in Fig. 7, if a control requiring an input operation via an interactive device is detected in the application interface displayed in the virtual field-of-view image, such as the dialog box prompting input of text information shown in (a) of Fig. 7, or the dialog box prompting a click to start shown in (b) of Fig. 7, it may be determined that the physical keyboard around the user needs to be presented to the user so that the user can perform the corresponding input operation.
As another example, when it is determined that the application running on the head mounted display currently needs to use the physical keyboard, it may be determined that the physical keyboard around the user needs to be presented to the user. Here, the interactive device currently needed may be determined according to the application. For example, if the application running on the head mounted display is a virtual game application that currently needs the keyboard to control the game character, it may be determined that the physical keyboard around the user needs to be presented to the user. Moreover, the physical keyboard determined as needing to be presented to the user may be added to a preset list of objects to be presented.
As another example, when a user input requesting presentation of the physical keyboard is received, it may be determined that the physical keyboard needs to be presented to the user. Here, the user input may be a touch operation, a physical button operation, a remote control command input operation, a voice control operation, a gesture action, a head action, a body action, a gaze action, a touch action, a grip action, or the like.
With regard to the physical button operation, the touch operation, and the remote control command input operation, they may be operations on a physical button on the head mounted display or on a button on a touch screen, or input operations on other devices capable of remotely controlling the head mounted display (for example, a handle). For example, when a message event of physical button a is detected, it may be determined that the physical keyboard needs to be presented to the user; when a message event of physical button b is detected, it may be determined that the physical keyboard does not need to be presented to the user. In addition, whether to present the physical keyboard may be toggled by operating the same physical button.
With regard to the gesture operation, a user gesture indicating that the physical keyboard needs to be presented may be detected by the photographing device to determine that the physical keyboard needs to be presented to the user. For example, when gesture a indicating that the physical keyboard needs to be presented is detected, it may be determined that the physical keyboard needs to be presented to the user; when gesture b indicating that the physical keyboard does not need to be presented is detected, it may be determined that the physical keyboard does not need to be presented to the user. In addition, whether to present the physical keyboard may also be toggled by the same gesture operation.
With regard to the head action, the body action, and the gaze action, a user posture indicating that the physical keyboard needs to be presented, for example a head rotation or a gaze direction, may be detected by the photographing device to determine that the physical keyboard needs to be presented to the user. For example, when it is detected that the user's gaze satisfies condition a (for example, the user's gaze is directed at the dialog box prompting input of text information in the virtual field-of-view image), it may be determined that the physical keyboard needs to be presented to the user; when it is detected that the user's gaze satisfies condition b (for example, the user's gaze is directed at a virtual object in the virtual field-of-view image, such as the virtual movie screen), it may be determined that the physical keyboard does not need to be presented to the user. Condition a and condition b may or may not be complementary conditions.
With regard to the voice control operation, the user's voice may be collected by a microphone, and the user's voice command may be recognized by a speech recognition technique to determine whether the physical keyboard needs to be presented to the user.
As another example, when it is detected that a hand is placed above the physical keyboard, it may be determined that the physical keyboard needs to be presented to the user. For example, the photographing device may detect whether there is a hand around the user (for example, by a skin color detection method), whether there is a keyboard, and whether the hand is on the keyboard; when all three conditions are satisfied, it may be determined that the physical keyboard needs to be presented to the user, and when any one of the three conditions is not satisfied, it may be determined that the physical keyboard does not need to be presented to the user. Whether there is a hand and whether there is a keyboard around the user may be detected simultaneously or sequentially, in either order; when both a hand and a keyboard are detected around the user, whether the hand is on the keyboard is further detected.
When it is determined in step S101 that the keyboard needs to be presented to the user, step S102 is executed. In step S102, an image around the user is captured by the photographing device, and a keyboard image is detected from the captured image.
As an example, feature points may be detected in the captured image, and the keyboard image may be detected by matching the detected feature points against pre-stored keyboard image feature points. For example, according to the image coordinates of the feature points in the captured image that match the pre-stored keyboard image feature points, and the coordinates of the pre-stored keyboard image feature points, the image coordinates of the four corner points of the keyboard in the captured image may be determined; the contour of the keyboard in the captured image may then be determined based on the determined image coordinates of the four corner points, and the keyboard image in the captured image may thereby be determined. Here, the feature points may be Scale-Invariant Feature Transform (SIFT) feature points or other feature points well known to those skilled in the art. Correspondingly, the image coordinates of the contour points (i.e., the points on the contour of an object) of an arbitrary object in the captured image may be calculated by the same or a similar method. It should also be understood that the keyboard image may be detected from the captured image by other means.
Below, the calculation of the contour of the keyboard in the captured image is illustrated taking the four corner points of the keyboard as an example. Suppose the coordinates of the pre-stored keyboard image feature points are P_world (in the local coordinate system of the keyboard), the coordinates of the top-left corner vertex on the contour of the pre-stored keyboard image are P_corner (in the local coordinate system of the keyboard), the coordinates of the feature points in the captured image that match the pre-stored keyboard image feature points are P_image, and the transformation from the local coordinate system of the keyboard to the coordinate system of the photographing device is represented by R and t, where R represents a rotation and t represents a translation, and the projection matrix of the photographing device is K. Then:
P_image = K * (R * P_world + t)    (1)
By substituting the coordinates of the pre-stored keyboard image feature points and the coordinates of the matching feature points in the captured image into formula (1), R and t can be solved. The coordinates of the top-left corner vertex of the keyboard in the captured image can then be obtained as K * (R * P_corner + t); correspondingly, the coordinates of the other three corner points of the keyboard in the captured image can be obtained, and connecting them yields the contour of the keyboard in the captured image. Correspondingly, the coordinates of the contour points of any object in the captured image can be calculated, thereby obtaining the projected contour of the object on the captured image.
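Formula (1) is the standard pinhole projection. The sketch below projects a keyboard corner with it; the intrinsics and pose values are illustrative assumptions. Note that recovering R and t from 2D-3D correspondences, as described above, is the Perspective-n-Point problem, for which standard solvers exist; only the forward projection is shown here.

```python
import numpy as np

def project_point(K, R, t, P_world):
    """Formula (1): project a keyboard-local 3D point into the captured image,
    returning pixel coordinates after homogeneous division."""
    P_cam = R @ P_world + t
    p = K @ P_cam
    return p[:2] / p[2]

# Illustrative values: 800 px focal length, principal point (320, 240),
# keyboard axis-aligned 0.5 m in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.0, 0, 0.5])
corner = np.array([0.1, 0.05, 0.0])   # a keyboard corner in keyboard-local coords
pixel = project_point(K, R, t, corner)
```

Projecting all four corners with the same (K, R, t) and connecting the results gives the keyboard contour in the captured image.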
In addition, if no keyboard image is detected from the captured image, the shooting angle of view may be enlarged (for example, by using a wide-angle photographing device) to recapture the image around the user and detect the keyboard image therefrom; or, the user may be prompted to turn toward the direction in which the keyboard is located so that an image including the keyboard can be recaptured. For example, the direction in which the keyboard is located may be determined using the position information of the keyboard detected in a previously captured image and stored in memory, or by a wireless positioning method (for example, Bluetooth transmission, Radio Frequency Identification (RFID) tags, infrared, ultrasound, magnetic field, or the like). Fig. 8 shows an example of prompting the user to turn toward the direction in which the keyboard is located according to an exemplary embodiment of the present invention. As shown in Fig. 8, a direction indication image (for example, an arrow) may be superimposed in the virtual field-of-view image to indicate the direction in which the user should gaze.
In step S103, viewpoint correction is performed on the detected keyboard image based on the positional relationship between the photographing device and the user's two eyes, so as to obtain a binocular view of the keyboard. For example, a homographic transformation is applied to the detected keyboard image according to the rotation and translation relationship between the coordinate system of the photographing device and the coordinate system of the user's two eyes, thereby obtaining the binocular view of the keyboard. The rotation and translation relationship between the coordinate system of the photographing device and the coordinate system of the user's two eyes may be calibrated offline, or calibration data provided by the manufacturer may be read.
As an example, if the photographing device used in step S102 is a single monocular camera, a keyboard image of another viewpoint may be determined based on the detected keyboard image; then, based on the positional relationship between the single photographing device and the user's two eyes, viewpoint conversion is performed on the detected keyboard image and the keyboard image of the other viewpoint to obtain the binocular view of the keyboard.
Here, the filming apparatus by being used are single single view filming apparatus, so the key detecting Disk image only has a viewpoint, needs to be converted to keyboard image the stereo-picture with depth information. For this reason, it may be necessary to synthesize the keyboard image of another viewpoint by calculating from the keyboard image of a viewpoint, Thus obtaining Three-dimensional keyboard image.For example, for keyboard, an available planar rectangular is modeled to it, Calculate its position in three dimensions and attitude.Particularly, key can be asked for according to homograph relation Position in the three-dimensional system of coordinate of single view filming apparatus for the disk and attitude, as single view filming apparatus and people The rotation of two viewpoints of eye and translation parameterss are it is known that the left-eye view of human eye can be projected to respectively by keyboard In right-eye view, thus the binocular image that order is added in virtual field-of-view image has third dimension, formed The visual cues of the actual physical location of keyboard can correctly be passed on.For the object that shape is more complicated, Available segment areal model carries out approximately to object surface shape, then using similar method to its position Estimated with attitude, and regenerated the binocular view of object by projection.
The computation of the binocular view of the keyboard is illustrated below, taking a keyboard image of a single viewpoint as an example. The three-dimensional coordinates of the feature points on the keyboard (in the local coordinate system of the keyboard) are known; they can be measured in advance, or obtained by capturing multiple images from different angles and performing three-dimensional reconstruction with a stereoscopic vision method. Assume that a feature point on the keyboard has coordinate P_obj in the keyboard's local coordinate system and coordinate P_cam in the coordinate system of the photographing device; the rotation and translation from the keyboard's local coordinate system to the coordinate system of the photographing device are R and t respectively; the rotations and translations of the user's left-eye and right-eye coordinate systems relative to the coordinate system of the photographing device are R_l, t_l and R_r, t_r respectively; the projection point corresponding to the feature point in the captured image is P_img; and the intrinsic matrix K of the photographing device has been obtained by prior calibration.
Using the constraint imposed by the observed projection points, R and t can be solved from:

P_img = K * P_cam = K * (R * P_obj + t)    (2),

and the projection equation for the left-eye image is then:

P_left = K * (R_l * P_cam + t_l)    (3).

Since the points P_obj lie in one plane, P_img and P_left satisfy a homography; therefore a transformation matrix H satisfying P_left = H * P_img can be obtained. Using H, the detected keyboard image I_cam can be converted into the image I_left seen by the left eye. The right-eye image can be obtained by the same method.
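The plane-induced homography described here can be written in closed form. The sketch below assumes the keyboard plane satisfies n·X = d in camera coordinates and uses the standard form H = K (R + t nᵀ / d) K⁻¹; all numeric values (intrinsics, the 3 cm eye-camera baseline, the plane at 0.5 m) are illustrative assumptions, not values from the patent. The example checks that warping a pixel through H agrees with projecting the same plane point directly into the left-eye view.

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography mapping camera-image pixels of points on the plane
    n . X = d (camera coordinates) into the left-eye image:
    H = K (R + t n^T / d) K^-1."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(K, R, t, X):
    p = K @ (R @ X + t)
    return p[:2] / p[2]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R_l = np.eye(3)                     # assumed left-eye rotation w.r.t. camera
t_l = np.array([-0.03, 0.0, 0.0])   # assumed 3 cm baseline
n = np.array([0.0, 0.0, 1.0])       # keyboard plane z = 0.5 m in camera frame
d = 0.5

H = plane_homography(K, R_l, t_l, n, d)

X = np.array([0.1, 0.05, 0.5])                 # a point on the keyboard plane
p_cam = project(K, np.eye(3), np.zeros(3), X)  # pixel in the captured image
p_warp = H @ np.append(p_cam, 1.0)
p_warp = p_warp[:2] / p_warp[2]                # P_left = H * P_img
p_left = project(K, R_l, t_l, X)               # direct left-eye projection
print(np.allclose(p_warp, p_left))             # True: H reproduces the left view
```

Warping the whole detected keyboard region I_cam with this H (e.g. per pixel, or with an image-warping routine) yields the left-eye image I_left; the right-eye image uses R_r, t_r in the same way.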
Fig. 9 illustrates an example of obtaining a binocular view of the keyboard based on a captured image, according to an exemplary embodiment of the present invention. As shown in Fig. 9, a keyboard image is detected in the captured image; the position and attitude of the keyboard in three-dimensional space are determined based on the detected single-viewpoint keyboard image; a keyboard image of another viewpoint is then determined from that position and attitude; and finally, based on the positional relationship between the photographing device and the user's two eyes, viewpoint conversion is performed on the detected single-viewpoint keyboard image and the keyboard image of the other viewpoint to obtain the binocular view of the keyboard, thereby producing a virtual field-of-view image on which the binocular view of the keyboard is superimposed.
As another example, if the photographing device used in step S102 is a depth camera, or comprises at least two monocular cameras, viewpoint conversion may be performed on the detected keyboard image, based on the positional relationship between the photographing device and the user's two eyes, to obtain the binocular view of the keyboard.

When the photographing device comprises at least two monocular cameras, a 3D image of the keyboard and the position of the keyboard relative to the photographing device can be obtained through the at least two cameras; the 3D image of the keyboard can then be projected into the binocular views according to the position of the keyboard relative to the photographing device and the positional relationship between the photographing device and the user's two eyes.

When the photographing device is a depth camera, the 3D image of the keyboard and the position of the keyboard relative to the depth camera can be obtained through the depth camera; the 3D image of the keyboard can then be projected into the binocular views according to the position of the keyboard relative to the depth camera and the positional relationship between the depth camera and the user's two eyes.
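For the depth-camera case, the projection into each eye reduces to back-projecting each depth pixel into 3D and reprojecting it with that eye's extrinsics. The sketch below is deliberately naive (per-pixel loop, toy 4x4 depth map); the function name, intrinsics, and baseline are assumed for illustration.

```python
import numpy as np

def reproject_depth_to_eye(depth, K, R_eye, t_eye):
    """Back-project a depth map into 3D camera coordinates, then project
    each point into one eye's view (one pixel at a time, for clarity)."""
    h, w = depth.shape
    Kinv = np.linalg.inv(K)
    out = []
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            if z <= 0:
                continue  # no depth measurement at this pixel
            X = Kinv @ np.array([u, v, 1.0]) * z   # 3D point, camera frame
            p = K @ (R_eye @ X + t_eye)            # into the eye's camera
            out.append(p[:2] / p[2])               # perspective divide
    return np.array(out)

K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0,   0.0, 1.0]])
depth = np.full((4, 4), 1.0)    # toy 4x4 depth map: a plane 1 m away
left = reproject_depth_to_eye(depth, K, np.eye(3), np.array([-0.03, 0.0, 0.0]))
print(left.shape)  # (16, 2): every depth pixel lands in the left view
```

A real implementation would vectorize this and handle occlusion when splatting the reprojected points into the eye image.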
In step S104, a virtual field-of-view image on which the binocular view of the keyboard is superimposed is presented to the user. As an example, a virtual scene image reflecting the virtual scene may first be obtained, and the virtual field-of-view image may then be produced by superimposing the binocular view of the keyboard on the virtual scene image. Fig. 10 illustrates an example of generating a virtual field-of-view image on which the binocular view of the keyboard is superimposed, according to an exemplary embodiment of the present invention. As shown in (a) of Fig. 10, the image around the user is captured by the photographing device; as shown in (b) of Fig. 10, viewpoint conversion is performed on the keyboard image detected in the captured image to obtain the binocular view of the keyboard; as shown in (c) of Fig. 10, a virtual scene image reflecting the virtual scene is obtained; and as shown in (d) of Fig. 10, the binocular view of the keyboard and the virtual scene image are superimposed to produce the virtual field-of-view image, which is output to the user.
In step S105, it is determined whether the keyboard needs to continue being presented to the user.

As an example, when it is detected that use of the keyboard has ended, it may be determined that the keyboard no longer needs to be presented to the user. Whether the user has finished using the keyboard can be judged in the following ways.

As an example, when it is detected that the user has not typed on the keyboard for a predetermined period of time, it may be determined that the user has finished using the keyboard. It should be understood that the user's keyboard input can be monitored continuously: a brief pause is not taken to mean that keyboard use has ended, and only when the interruption of keyboard input exceeds the predetermined period is the user judged to have finished. The predetermined period may be set automatically by the head-mounted display or customized by the user; for example, it may be set to 5 minutes.
As another example, since the user's hands do not move far from the keyboard while performing input operations, it may be determined that the user has finished using the keyboard when the detected distance between a hand and the keyboard exceeds a predetermined use-distance threshold. For example, when the distances between both of the user's hands and the keyboard exceed a first use-distance threshold, it may be determined that keyboard use has ended. In some cases one hand moves far from the keyboard while the other remains on it even though the keyboard is no longer needed; therefore, when the distance between one of the user's hands and the keyboard exceeds a second use-distance threshold, it may also be determined that keyboard use has ended. The first and second use-distance thresholds may be the same or different, and may be set automatically by the head-mounted display or customized by the user; likewise, whether the distance is measured between both hands and the keyboard or between a single hand and the keyboard may be set automatically by the head-mounted display or customized by the user.
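The timeout and hand-distance rules above combine into a simple decision function. The threshold values and the function name are assumptions for illustration; a real head-mounted display would expose them as settings.

```python
import time

# Assumed thresholds; the patent leaves the concrete values to the device or user.
IDLE_TIMEOUT_S = 300.0          # 5-minute no-typing window
BOTH_HANDS_THRESHOLD_M = 0.40   # "first use-distance threshold"
ONE_HAND_THRESHOLD_M = 0.60     # "second use-distance threshold"

def keyboard_use_ended(last_keypress_time, hand_dists, now=None):
    """Return True when keyboard use should be considered finished:
    the idle timeout elapsed, both hands moved past the first threshold,
    or the single visible hand moved past the second threshold."""
    now = time.time() if now is None else now
    if now - last_keypress_time > IDLE_TIMEOUT_S:
        return True
    if len(hand_dists) == 2 and min(hand_dists) > BOTH_HANDS_THRESHOLD_M:
        return True
    if len(hand_dists) == 1 and hand_dists[0] > ONE_HAND_THRESHOLD_M:
        return True
    return False

print(keyboard_use_ended(0.0, [0.05, 0.07], now=10.0))   # False: brief pause only
print(keyboard_use_ended(0.0, [0.55, 0.62], now=10.0))   # True: both hands away
```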
As another example, when a user input for ending presentation of the keyboard is detected, it may be determined that the user has finished using the keyboard. For example, the user may press a specific button, or use a similar input, to notify the head-mounted display to stop displaying the keyboard.

As another example, when the application currently running on the head-mounted display no longer needs the keyboard, it may be determined that the user has finished using the keyboard. For example, it may be detected that the application interface displayed in the virtual field-of-view image no longer contains a control operated via the keyboard (for example, a text input dialog box displayed in the virtual field-of-view image has disappeared), or that an application that needs the keyboard has been closed.

It should be understood that if the user is detected switching to another application while using the keyboard, whether the user still needs the keyboard can be re-determined according to the specific situation of the application switched to, and whether the keyboard needs to be presented to the user can be decided accordingly.
When it is determined that the keyboard no longer needs to be presented to the user, step S106 is executed. In step S106, a virtual field-of-view image without the superimposed binocular view of the keyboard is presented to the user; for example, only the virtual scene image is presented.

It should be understood that although the above specific embodiments take a keyboard as an example, they apply equally to a handle (for example, a game handle used when playing a virtual game with the head-mounted display). The head-mounted display can monitor the state of the running application: when the running application currently needs to be operated with a handle, it can be detected whether the user is holding one. If the user is detected to be holding a handle, only the virtual scene image is presented to the user; if the user is not currently holding a handle, the image around the user can be captured by the photographing device and the handle detected in the captured image.
Whether the user is holding the handle can be detected in the following ways. As an example, since the ambient temperature is generally below body temperature and the humidity of a human hand is generally above ambient humidity, whether the user is holding the handle can be determined by detecting the temperature and/or humidity around the handle. Here, a temperature sensor and/or humidity sensor may be provided in the handle to measure the surrounding temperature and/or humidity, and the measurements can be compared with a preset temperature threshold and/or humidity threshold to determine whether the handle is in the user's hand.

As another example, whether the user is holding the handle can be determined by detecting the motion of the handle. A motion sensor (for example, a gyroscope or an inertial accelerometer) may be provided in the handle, and the intensity and duration of the motion can be analyzed to determine whether the handle is in the user's hand.

As another example, since the human body contains moisture and is therefore an electrical conductor, whether the user is holding the handle can be determined by detecting current and/or inductance. Conductive material may be applied to the surface of the handle; after electrodes are installed on the surface, the resistance can be estimated by measuring the current between the electrodes, so as to determine whether the handle is in the user's hand; alternatively, the inductance of a single electrode can be measured to judge whether that electrode is in contact with the human body.
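The three sensor cues above can be combined into one grip check. This is a deliberately simple OR-combination sketch: the threshold values and the function name are assumptions, and a production device would likely debounce readings over time rather than decide from a single sample.

```python
# Assumed thresholds for deciding whether the handle is being held.
TEMP_THRESHOLD_C = 30.0      # a skin-warmed grip exceeds room temperature
HUMIDITY_THRESHOLD = 0.55    # palm humidity above typical ambient humidity
MOTION_THRESHOLD = 0.2       # sustained gyro/accelerometer magnitude

def handle_is_held(grip_temp_c=None, grip_humidity=None,
                   motion_magnitude=None, electrode_current_ok=None):
    """Each available cue votes independently; any positive reading is
    treated as evidence that the handle is in the user's hand."""
    checks = []
    if grip_temp_c is not None:
        checks.append(grip_temp_c > TEMP_THRESHOLD_C)
    if grip_humidity is not None:
        checks.append(grip_humidity > HUMIDITY_THRESHOLD)
    if motion_magnitude is not None:
        checks.append(motion_magnitude > MOTION_THRESHOLD)
    if electrode_current_ok is not None:
        checks.append(electrode_current_ok)  # body closed the circuit
    return any(checks)

print(handle_is_held(grip_temp_c=33.5))                        # True
print(handle_is_held(grip_temp_c=22.0, motion_magnitude=0.0))  # False
```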
When no handle is detected in the captured image, the user may be prompted that there is no handle nearby. In this case the user may need to stand up to look for the handle, so the user may further be asked whether to do so. If the user decides to stand up and look for the handle, binocular views of the surrounding real objects can be presented so that the user can directly see the surrounding environment; if the user decides not to, the application can be switched to an operation mode that requires no physical handle. Once the user finds the handle and holds it, only the virtual scene image is presented to the user; likewise, if the user does not find the handle and gives up looking for it, only the virtual scene image is presented.

When a handle is detected in the captured image, it can further be determined whether the handle is within the user's real field of view (that is, the field of view the user would have if not wearing the head-mounted display). If it is, a virtual field-of-view image on which the binocular view of the handle is superimposed can be presented to the user. If it is not, the user can be notified that there is no handle within the current field of view, and can further be prompted to turn toward the direction in which the handle is located so that it enters the user's real field of view. The prompt may be given by image, text, audio, video, or the like. The handle can be located in a manner similar to the keyboard; for example, its position can be determined by wireless signal detection.

As an example, a prompt box may be displayed in the virtual field-of-view image stating that the handle is not within the field of view, so that the user adjusts the viewing angle to search for it; the prompt box may further tell the user how to adjust the viewing angle according to the relative position of the handle and the user, helping the user find the handle quickly. As another example, the user may be informed by sound that the handle is not within the field of view, and may further be told by sound how to adjust the viewing angle according to the relative position of the handle and the user. As yet another example, an arrow indicator may be displayed in the virtual field-of-view image, its direction indicating the direction in which the handle is located; in addition, while the arrow indicator is displayed, the angle the user should rotate and/or the distance between the handle and the user may be shown as text in the virtual field-of-view image or announced by sound.
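The rotation angle and distance driving the arrow indicator and the spoken prompt can be computed from the positions of the user and the handle. A minimal 2D (floor-plane) sketch, assuming positions and the user's yaw are already known from tracking; the function name is illustrative.

```python
import math

def turn_and_distance(user_xy, user_yaw, handle_xy):
    """How far the user must rotate (radians, wrapped to [-pi, pi)) and
    how far away the handle is; positive angle = rotate counter-clockwise."""
    dx = handle_xy[0] - user_xy[0]
    dy = handle_xy[1] - user_xy[1]
    angle = math.atan2(dy, dx) - user_yaw
    angle = (angle + math.pi) % (2 * math.pi) - math.pi  # wrap the angle
    return angle, math.hypot(dx, dy)

# User at the origin facing +x; handle 2 m away along +y.
turn, dist = turn_and_distance((0.0, 0.0), 0.0, (0.0, 2.0))
print(round(math.degrees(turn)), dist)  # 90 2.0: rotate 90 degrees, 2 m away
```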
With the above methods, it is easy for the user to find an interactive device while wearing the head-mounted display and to perform input operations with it.
Hereinafter, an application scenario of eating while wearing the head-mounted display will be described with reference to Fig. 11 to Fig. 19. Fig. 11 illustrates an example of presenting to the user a virtual field-of-view image on which binocular views of food are superimposed, according to an exemplary embodiment of the present invention. As shown in Fig. 11, a virtual field-of-view image on which binocular views of food are superimposed is presented to the user, so that the user can eat while using the head-mounted display. It should be understood that although the following scenario takes food as an example, it applies equally to other similar objects.

Fig. 12 illustrates a flowchart of a method of displaying food in a head-mounted display according to an exemplary embodiment of the present invention. Here, all of the steps of the method may be executed by the head-mounted display, or some of the steps may be executed by the head-mounted display and the remaining steps by a processor outside the head-mounted display.

As shown in Fig. 12, in step S201, it is determined whether the user needs to eat.
Whether the user needs to eat can be determined in the following ways. As an example, when a predetermined button operation is detected, it may be determined that the user needs to eat. The operated button may be a hardware button on the head-mounted display or a button on the display screen of the head-mounted display; when the user is detected pressing the predetermined button in a predetermined manner, it may be determined that food and/or drink needs to be presented to the user. The operated button may also be a virtual button, i.e., a virtual interface displayed in the virtual field-of-view image, and the determination is made by detecting the user's interaction with this virtual interface. The predetermined manner may be at least one of the following: a short press, a long press, a short press a predetermined number of times, alternating short and long presses, and so on. Fig. 13 illustrates examples of the operated button according to an exemplary embodiment of the present invention. As shown in Fig. 13, the operated button may be a hardware button on the head-mounted display, a button on the display screen of the head-mounted display, or a virtual button in a virtual interface.
As another example, when a predetermined gesture of the user is detected, it may be determined that the user needs to eat. The predetermined gesture may be completed with one hand or with both hands, and its specific content may be at least one of the following: waving, drawing a circle, drawing a square, drawing a triangle, a framing gesture, and so on. Fig. 14 illustrates an example of the framing gesture according to an exemplary embodiment of the present invention; as shown in Fig. 14, the objects within the range circled by the framing gesture can be determined as the objects that need to be presented to the user. Existing gesture detection devices can be used to detect and recognize the specific content indicated by a gesture.

As another example, when objects bearing specific labels are detected around the user, it may be determined that the objects bearing the specific labels need to be presented to the user. All objects that need to be presented to the user may bear the same specific label; alternatively, different categories of objects may bear labels of different categories so that the categories can be distinguished. For example, a category-1 label may be attached to a table to identify the table, a category-2 label to a chair to identify the chair, and a category-3 label to tableware to mark the tableware; when objects bearing category-3 labels are detected around the user, it may be determined that the user needs to eat. The specific labels can be detected and recognized by various methods.
As another example, when it is detected that a preset meal time has arrived, it may be determined that the user needs to eat. The head-mounted display may automatically preset meal times, for example breakfast at 7:30, lunch at 12:00, and dinner at 6:00. Since meal times differ between users, the user may also set meal times according to his or her own habits, for example breakfast at 8:00, lunch at 12:30, and dinner at 6:00. When both the automatically preset meal times of the head-mounted display and the user-defined meal times exist, priorities may be applied; for example, if the user-defined meal times have higher priority than the automatically preset ones, it is determined that the user needs to eat only when a user-defined meal time arrives. Alternatively, the head-mounted display may respond to both the automatically preset meal times and the user-defined meal times.
As another example, the real objects around the user can be recognized and their categories determined, and when at least one of food, drink, and tableware is detected, it may be determined that the user needs to eat. For example, food, drink, and tableware can be detected in the captured image around the user by image recognition methods; the food, drink, and tableware around the user can also be recognized by other methods.
As another example, when at least one of food, drink, and tableware is detected around the user during a preset meal time period, it may be determined that the user needs to eat. Here, meal time periods may be preset, for example breakfast 7:00-10:00, lunch 11:00-14:00, and dinner 17:00-20:00; the periods may be set automatically by the head-mounted display (for example, as factory defaults) or by the user. If the automatically preset periods and the user-defined periods exist at the same time, priorities may be applied; for example, if the user-defined periods have higher priority than the automatically preset ones, a response is made only during a user-defined meal time period. Alternatively, both the automatically preset periods and the user-defined periods may be responded to. During a preset meal time period, whether at least one of food, drink, and tableware is present around the user can be detected by methods such as image recognition, so as to determine whether the user needs to eat.
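The meal-time-window check with user-over-default priority can be sketched as follows. The window values mirror the examples in the text; the priority rule shown (user-defined windows fully replace the defaults when present) is one of the two policies the text allows.

```python
from datetime import time

# Device defaults from the text; a user-defined table takes priority.
DEFAULT_WINDOWS = [(time(7, 0), time(10, 0)),    # breakfast
                   (time(11, 0), time(14, 0)),   # lunch
                   (time(17, 0), time(20, 0))]   # dinner

def in_meal_window(now, user_windows=None):
    """True if `now` falls in a meal-time window. When the user defined
    their own windows, only those are honoured (higher priority);
    otherwise the automatically preset defaults apply."""
    windows = user_windows if user_windows else DEFAULT_WINDOWS
    return any(start <= now <= end for start, end in windows)

print(in_meal_window(time(12, 30)))   # True: inside the default lunch window
print(in_meal_window(time(12, 30),
                     user_windows=[(time(13, 0), time(14, 0))]))  # False
```

Under the alternative policy mentioned in the text, both tables would be checked: `in_meal_window(now) or in_meal_window(now, user_windows)`.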
As another example, when a predetermined gesture or predetermined posture of the user is detected during a preset meal time period, it may be determined that the user needs to eat. Here, meal time periods may be preset as above (for example breakfast 7:00-10:00, lunch 11:00-14:00, dinner 17:00-20:00), set automatically by the head-mounted display (for example, as factory defaults) or by the user; when both exist at the same time, priorities may be applied as described above, or both may be responded to. The predetermined gesture may be completed with one hand or with both hands, and its specific content may be at least one of the following: waving, drawing a circle, drawing a square, drawing a triangle, a framing gesture, and so on. The predetermined posture may be at least one of the following: turning the head, leaning the body to the left, leaning the body to the right, and so on. Existing gesture detection devices or posture detection devices can be used to detect and recognize the specific content of a gesture or posture.
As another example, when a preset remote-command input operation is detected, it may be determined that the user needs to eat. Specifically, whether the user needs to eat can be judged by detecting a remote command input by the user on another device. The other device may include at least one of the following: a mobile communication terminal (for example, a smartphone), a personal tablet computer, a personal computer, an external keyboard, a wearable device, a handle, and so on; the wearable device may include at least one of a smart bracelet, a smart watch, and so on. The other device may be connected to the head-mounted display in a wired or wireless manner, where the wireless manner may include Bluetooth, ultra-wideband, ZigBee, wireless fidelity (Wi-Fi), a macro network, and so on. In addition, the remote command may also be an infrared command or the like. Fig. 15 illustrates an example of determining through a remote-command input operation that the user needs to eat, according to an exemplary embodiment of the present invention. As shown in Fig. 15, it can be determined that the user needs to eat by means of a remote command input on a smartphone.
As another example, whether the user needs to eat is determined according to a detected voice-control operation. The user's voice or other audible signal can be collected through a microphone, and the user's voice command or voice-control instruction recognized by speech recognition technology, thereby determining whether the user needs to eat. For example, if the user issues the voice-control instruction "start eating", the head-mounted display receives the instruction and performs speech recognition on it, thereby determining that it is an instruction indicating that the user needs to eat. Correspondences between voice-control instructions and the indication that the user needs to eat can be pre-stored in the head-mounted display, for example in the form of a correspondence table mapping instructions such as "start displaying food" and "start displaying the dining table" (in Chinese, English, or other sound instructions) to the indication that the user needs to eat. It should be understood that the voice-control instructions are not limited to the above examples or to instructions preset by the user: any voice-control instruction will do, as long as both the user and the head-mounted display know that it corresponds to determining that the user needs to eat.
When it is determined in step S201 that the user needs to eat, step S202 is executed. In step S202, the real objects that need to be presented to the user are determined. Here, the real objects that need to be presented to the user may be food, drink, tableware, hands, a dining table, and so on.
The objects that need to be presented to the user can be determined in the following ways. As an example, the head-mounted display may pre-store images of various kinds of articles (for example, food) and match the image of a real object detected in the captured image against the pre-stored food images; if the match succeeds, the real objects detected in the captured image contain food, and it can be determined that the detected real object is an object that needs to be presented to the user. In some cases the user may wish to display as few real objects as possible; in that case, if the real objects detected in the captured image are determined to contain food, the food can be separated from the other real objects and only the food determined as the object to present, without presenting the other real objects to the user. Furthermore, since the relative position of the hand and the food is very important for grasping correctly, hand images in the captured image can also be detected by various algorithms; if a hand is detected, the hand may also be treated as an object that needs to be presented to the user.
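One common way to score such a match between a detected image patch and a pre-stored template is zero-mean normalized cross-correlation. The sketch below is a minimal illustration on toy 2x2 arrays, not the patent's actual matcher; the threshold value is an assumption.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between an image patch and a
    stored template; values near 1 indicate a strong match."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

MATCH_THRESHOLD = 0.9   # assumed cut-off for "the patch shows the stored article"

template = np.array([[0.0, 1.0], [1.0, 0.0]])
detected = np.array([[0.1, 0.9], [0.9, 0.1]])  # same pattern, lower contrast
print(ncc(detected, template) > MATCH_THRESHOLD)  # True
```

Because the score is invariant to brightness offset and contrast scale, the same stored food image can match under different lighting; sliding the template over the captured image gives a match location as well.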
As another example, labels indicating the respective categories may be attached to objects of different categories so that the objects can be distinguished, and the head-mounted display can recognize the labels. For example, a category-1 label may be attached to a table to identify the table, a category-2 label to a chair to identify the chair, and a category-3 label to tableware to mark the tableware. When a category-3 label is detected, it may be determined that the corresponding real objects around the user need to be presented. In some cases the user may wish to display as few real objects as possible; in that case, if a category-3 label has been detected, the objects bearing category-3 labels can be separated from the other real objects, and only the objects bearing category-3 labels are determined as objects to present, without presenting the other real objects to the user. In addition, since the relative position of the hand and the food and/or drink is very important for grasping correctly, hand images in the captured image can also be detected by various algorithms; if a hand is detected, the hand may also serve as an object that needs to be presented to the user.
As another example, can by detect user determine in some region of prearranged gesture needs or not Need the real-world object presenting to user.Described prearranged gesture can be at least one among following gesture: Wave, handss draw circle, handss draw square, handss draw triangle, gesture of finding a view etc..For example, if use is detected The gesture drawn a circle in a certain region in family, then can determine that and only need to present the object in the range of drawing a circle to user; If the gesture of finding a view of user is detected, can determine that and only need to present the thing that gesture of finding a view is irised out to user Body;If the gesture that user's handss draw triangle is detected, can determine that needs present on dining table to user All objects;If the gesture that user's handss draw square is detected, can determine that and only need to present to user Food;If the gesture that user waves on real-world object is detected, can determine that do not need to user be in Existing described real-world object.
As another example, can be determined according to the phonetic order detecting needs or do not need to user be in Existing real-world object.For example, if phonetic order is detected is " display dining table and food ", can determine that Need to assume all foods on dining table and dining table to user;If phonetic order is detected is " only to show Food ", then can determine that and only need to assume food to user;If phonetic order is detected is " not show meal Table ", then can determine that and do not need to assume dining table to user.
As another example, can determine by way of phonetic order and tag recognition combine needs or Do not need the object presenting to user.For example, the 1st class label can be attached to identify desk on desk, 2nd class label can be attached to identify chair on chair, the 3rd class label can be attached to labelling on tableware Tableware, head mounted display can be identified to above-mentioned label, is " only to show when phonetic order is detected Post the object of the 3rd class label " when it may be determined that only needing to assume food to user;When voice is detected Instruct as " object of the 3rd class label and the 1st class label is posted in display " it may be determined that needing to assuming food Thing and dining table;It it is " not showing the object posting the 1st class label " it may be determined that not when phonetic order is detected Need to assume dining table to user.
As another example, which objects need or do not need to be presented to the user can be determined by detecting a remote command from another device. The other device may include at least one of the following: a wearable device, a mobile device, etc., where the wearable device may be at least one of a smart band, a smart watch, etc. The remote command may include the names of objects that need to be presented and/or the names of objects that do not need to be presented.
As another example, a virtual cursor may be shown in the virtual field-of-view image, and the objects that need or do not need to be presented to the user may be determined by detecting the user's operation of the virtual cursor. Figure 16 illustrates an example of determining the objects that need to be presented to the user via a virtual cursor according to an exemplary embodiment of the present invention. As shown in Figure 16, the user can operate the virtual cursor to select certain articles in the virtual field-of-view image; after the selection operation on the virtual cursor is detected, it may be determined that the articles selected by the virtual cursor are the objects that need to be presented to the user.
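All of the alternatives above (gesture, voice instruction, tag recognition, remote command, virtual cursor) reduce to the same decision: producing the set of real-world objects to overlay in the virtual view. A minimal Python sketch of that dispatch, with hypothetical gesture names, event shapes, and label classes not taken from the patent, might look like this:

```python
# Hypothetical sketch: map a detected input event to the set of
# real-world objects that should be presented to the user.
# Gesture names, event fields, and label classes are illustrative.

GESTURE_RULES = {
    "circle": "objects_in_region",   # present only objects inside the circled region
    "framing": "objects_in_frame",   # present only objects framed by the gesture
    "wave": "dismiss",               # stop presenting the object waved over
}

def objects_to_present(event, scene_objects):
    """Return the subset of scene_objects to overlay in the virtual view."""
    kind, payload = event
    if kind == "gesture":
        action = GESTURE_RULES.get(payload["name"])
        if action == "dismiss":
            return [o for o in scene_objects if o["id"] != payload["target"]]
        if action in ("objects_in_region", "objects_in_frame"):
            return [o for o in scene_objects if o["pos"] in payload["region"]]
    elif kind == "voice":
        # e.g. "display the dining table and food" -> names parsed from the command
        return [o for o in scene_objects if o["name"] in payload["names"]]
    elif kind == "label":
        # tag recognition: present objects whose attached label class was named
        return [o for o in scene_objects if o["label"] in payload["classes"]]
    return list(scene_objects)

scene = [
    {"id": 1, "name": "table", "label": 1, "pos": (0, 0)},
    {"id": 2, "name": "food",  "label": 3, "pos": (1, 1)},
]
print([o["name"] for o in objects_to_present(("voice", {"names": {"food"}}), scene)])
# -> ['food']
```

The waving gesture maps to removal rather than selection, matching the convention used throughout this description.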
In step S203, an image of the user's surroundings is captured by a camera, the image of the real-world object that needs to be presented to the user is detected from the captured image, and the binocular view of the real-world object is obtained based on the detected image of the real-world object.
Here, the camera may be a single monocular camera, a binocular camera, a depth camera, or the like. When the camera is a binocular camera or a depth camera, the image of the real-world object that needs to be presented to the user can be detected from the captured image not only by using the image features of that object but also by using its depth information to determine the image region in which it lies. In addition, images of the user's hands can also be detected from the captured image, so as to provide the user with more complete visual feedback.
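When a depth camera is available, the region an object occupies can be isolated before (or alongside) appearance-based detection simply by thresholding the depth map. The sketch below is a deliberately minimal illustration; the depth values and thresholds are assumptions, and a real pipeline would combine this mask with feature-based detection:

```python
def region_from_depth(depth_map, near, far):
    """Return pixel coordinates whose depth falls within [near, far].

    depth_map is a row-major list of lists of distances in metres.
    With a depth camera, the image region occupied by an object at a
    roughly known distance can be isolated this way before refining it
    with appearance features. Thresholds here are illustrative only.
    """
    return [
        (r, c)
        for r, row in enumerate(depth_map)
        for c, d in enumerate(row)
        if near <= d <= far
    ]

depth = [
    [2.0, 0.6, 0.6],
    [2.0, 0.6, 2.0],
]
print(region_from_depth(depth, 0.5, 1.0))  # -> [(0, 1), (0, 2), (1, 1)]
```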
In step S204, a virtual field-of-view image superimposed with the binocular view of the real-world object is presented to the user.
Here, a virtual scene image reflecting the virtual scene can be obtained, and the binocular view of the real-world object can be superimposed on the virtual scene image to produce the virtual field-of-view image superimposed with the binocular view of the real-world object.
Because the virtual objects in the virtual scene image and the real-world object may occlude each other in three-dimensional space, the real-world object can be displayed in the following ways to reduce mutual occlusion interference:
As an example, the binocular image of the real-world object can be displayed in a semi-transparent manner. Whether to display the real-world object semi-transparently can be judged according to the content type of the application interface displayed in the virtual field-of-view image and/or the interaction with the user. For example, when the user is playing a virtual game through the head-mounted display, if frequent movement of the game character is detected in the game interface and a large amount of user interaction input is required, the real-world object can be displayed semi-transparently. When the application interface displayed in the virtual field-of-view image is detected to be a virtual theater, or when the frequency of the user's control input decreases, the semi-transparent display of the real-world object can be ended. Similarly, the real-world object can be displayed as a contour line or a 3D mesh.
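The decision just described (semi-transparent overlay when interaction is heavy, no overlay in a passive virtual theater) can be sketched as a small heuristic. The application types, the input-rate threshold, and the mode names below are assumptions for illustration, not values specified by this description:

```python
def overlay_mode(app_type, inputs_per_minute):
    """Choose how to draw the real-world object over the virtual scene.

    Heuristic sketch: a passive app such as a virtual theater hides the
    overlay; an interaction-heavy session keeps the virtual scene visible
    behind a semi-transparent overlay. The threshold of 30 inputs/minute
    is an arbitrary illustrative value.
    """
    if app_type == "theater":
        return "hidden"
    if inputs_per_minute > 30:       # interaction-heavy: keep scene visible
        return "semi_transparent"
    return "opaque"

print(overlay_mode("game", 60))  # -> semi_transparent
```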
As another example, the virtual objects in the virtual scene image can be scaled and/or moved, so that the problem of occlusion with the real-world object in three-dimensional space can be effectively avoided. For example, when the head-mounted display is running a virtual theater, the virtual screen displayed in the virtual field-of-view image can be shrunk and moved to avoid overlapping with the real-world object.
Figure 17 illustrates examples of displaying a real-world object according to an exemplary embodiment of the present invention. Part (a) of Figure 17 shows the real-world object displayed as a contour line, and part (b) of Figure 17 shows the virtual object in the virtual scene image being scaled and moved.
In addition, when the virtual objects in the virtual scene image and the real-world object occlude each other in three-dimensional space, their display priority can be judged. A display priority list may be preset, in which the virtual objects in the virtual image and the real-world objects are ranked by importance and urgency. This display priority list can be set automatically by the head-mounted display, or configured by the user according to personal usage habits. Which display mode to adopt can be determined by looking up the priorities in the list, and the display can switch between the different modes accordingly.
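A priority list of this kind is just an ordered sequence consulted when two items occlude each other. A minimal sketch, with entirely illustrative entry names:

```python
# Sketch of a display-priority list: entries ranked by importance and
# urgency; when a virtual object and a real-world object occlude each
# other, the higher-ranked entry is drawn in front. Names are made up.

PRIORITY = ["incoming_call", "real_object", "virtual_screen", "decoration"]

def front_item(a, b):
    """Return whichever of a, b has the higher display priority
    (i.e. the smaller index in the preset list)."""
    return a if PRIORITY.index(a) < PRIORITY.index(b) else b

print(front_item("virtual_screen", "real_object"))  # -> real_object
```

In practice the list would be editable by the user, matching the description above of per-user configuration.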
In step S205, real-world objects are added to and/or deleted from the virtual field-of-view image according to the user's operation. When there are too many real-world objects around the user, too many of them may be superimposed in the virtual field-of-view image and affect the user's viewing of the virtual scene image; the user may therefore select which real-world objects to present in the virtual field-of-view image, or which real-world objects to remove.
Figure 18 illustrates deleting a real-world object that has been added to the virtual field-of-view image according to an exemplary embodiment of the present invention. As shown in Figure 18, the real-world object displayed in the virtual field-of-view image can be removed by a waving gesture.
In step S206, it is determined whether the real-world object needs to continue to be presented to the user.
As an example, when it is detected that the user has finished eating, it is determined that the real-world object no longer needs to be presented to the user. Whether the user has finished eating can be detected in the following ways:
As an example, it may be determined that the user has finished eating when a predetermined key operation is detected. The operated key may be a hardware key on the head-mounted display or a key on the display screen of the head-mounted display; when the user is detected pressing the predetermined key in a predetermined manner, it may be determined that the user has finished eating. The operated key may also be a virtual key, i.e. a virtual interface superimposed in the virtual field-of-view image, and the judgment is made by detecting the user's interaction with that virtual interface. The predetermined manner may be at least one of the following: a short press, a long press, a short press a predetermined number of times, alternating short and long presses, etc.
As another example, it may be determined that the user has finished eating when a predetermined gesture of the user is detected. The predetermined gesture may be a gesture completed with one hand or with both hands, and its specific content may be at least one of the following: waving, drawing a circle with the hand, drawing a square with the hand, drawing a triangle with the hand, a framing gesture, etc. Existing gesture detection devices can be used to detect and recognize the specific content indicated by the gesture.
As another example, whether the user has finished eating can be determined by detecting and recognizing tags on objects. For example, tags on objects of different categories can indicate their respective categories so that objects of different categories can be identified, and the head-mounted display can recognize the tags. For example, a class-1 tag can be attached to a desk to identify the desk, a class-2 tag can be attached to a chair to identify the chair, and a class-3 tag can be attached to tableware to mark the tableware. When the class-3 tag can no longer be detected, it may be determined that the user has finished eating.
As another example, whether the user has finished eating can be determined by detecting a remote command input by the user on another device. Here, the other device may be at least one of the following: a mobile communication terminal, a tablet computer, a personal computer, an external keyboard, a wearable device, a game controller, etc.
As another example, whether the user has finished eating can be determined by recognizing the user's voice instruction or a specific audio signal.
When it is determined in step S206 that the real-world object does not need to continue to be presented to the user, step S207 is executed. In step S207, a virtual field-of-view image without the superimposed binocular view of the real-world object is presented to the user; that is, presenting the binocular view of the real-world object to the user ends, and only the virtual scene image is presented.
It should be understood that although the above application scenario takes eating as an example, the specific embodiments are equally applicable to the scenario of drinking water. When the user drinks, the action consists of several consecutively linked small-amplitude actions: grasping the cup, moving the cup to the mouth, lowering the head to drink, and putting the cup back. Compared with the general scene, a wider field of view is required while drinking, and the relative positions of the cup and surrounding objects need to be displayed, for example its position relative to the table. Therefore, the binocular view of the real-world object can be superimposed onto the virtual field-of-view image displayed by the head-mounted display in the following ways: displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form (i.e., displaying a shrunken binocular view of the real-world object at a certain position of the virtual scene image); displaying only the binocular view of the real-world object without the virtual scene image (i.e., only the binocular view of the real-world object is shown in the virtual field-of-view image, as if the user were looking around at the real scene through transparent glasses); displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form (i.e., displaying a shrunken virtual scene image at a certain position of the binocular view of the real-world object); or spatially merging the binocular view of the real-world object with the virtual scene image (for example, displaying the binocular view of the real-world object semi-transparently on the virtual scene image). Figure 19 illustrates examples of displaying the binocular view of a real-world object according to an exemplary embodiment of the present invention. Part (a) of Figure 19 shows the binocular view of the real-world object displayed on the virtual scene image in picture-in-picture form; part (b) shows only the binocular view of the real-world object without the virtual scene image; part (c) shows the virtual scene image displayed on the binocular view of the real-world object in picture-in-picture form; and part (d) shows the binocular view of the real-world object displayed semi-transparently on the virtual scene image.
Hereinafter, the application scenario of avoiding collisions with real-world objects while wearing a head-mounted display will be described with reference to Figure 20. To avoid a collision caused by the user's body being close to a real-world object or moving toward one while the head-mounted display is in use (for example, during virtual motion-sensing games such as boxing or golf played through the head-mounted display, the user may strike surrounding objects or objects close to the user), a virtual field-of-view image superimposed with the binocular view of the object close to the user's body can be presented to prompt the user.
Figure 20 illustrates a flowchart of a method for displaying, in a head-mounted display, an object with which a collision may occur, according to an exemplary embodiment of the present invention. Here, all the steps in the method may be executed by the head-mounted display, or some steps may be executed by the head-mounted display and the remaining steps by a processor outside the head-mounted display.
As shown in Figure 20, in step S301, it is determined whether the surrounding real-world objects need to be presented to the user, i.e., whether there is an object around the user with which a collision may occur. Whether such an object exists can be detected in the following ways:
As an example, the 3D scene information around the user and the user's position and movement can be obtained by a camera on the head-mounted display (for example, a wide-angle camera, a depth camera, etc.) and/or other cameras and/or sensors independent of the head-mounted display. When an object around the user is detected to be too close (for example, at a distance smaller than a risk-distance threshold), it may be determined that the surrounding object needs to be presented to the user.
As another example, the user's position and the movement trend of each part of the user's body can be determined by the cameras and sensors, and whether the user may touch a surrounding real-world object can be determined from the user's position and movement trend. When it is determined that the user may touch a surrounding real-world object, it may be determined that the object needs to be presented to the user; when it is determined that the user is unlikely to touch any surrounding real-world object, it may be determined that the surrounding real-world objects do not need to be presented.
As another example, the user's position, the movement trend of each part of the user's body, and the positions and movement trends of the surrounding objects can be determined by the cameras and sensors, and whether the user may touch a surrounding real-world object can be determined from all of these together. When it is determined that the user may touch a surrounding real-world object, it may be determined that the object that may be touched needs to be presented to the user; when it is determined that the user is unlikely to touch any surrounding real-world object, it may be determined that the surrounding real-world objects do not need to be presented.
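One plausible way to combine positions and movement trends, as described above, is to extrapolate both the user and the object a short way along their current velocities and compare the predicted gap with the risk-distance threshold. The sketch below works in 2D with made-up horizon and threshold values; it is an illustration of the idea, not the patent's method:

```python
def may_collide(user_pos, user_vel, obj_pos, obj_vel, horizon=1.0, risk=0.5):
    """Decide whether an object should be presented as a collision risk.

    Extrapolates both positions along their movement trends over a short
    horizon (seconds) and compares the predicted gap with a risk-distance
    threshold (metres). All numeric values are illustrative assumptions.
    """
    px = user_pos[0] + user_vel[0] * horizon
    py = user_pos[1] + user_vel[1] * horizon
    qx = obj_pos[0] + obj_vel[0] * horizon
    qy = obj_pos[1] + obj_vel[1] * horizon
    return ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5 < risk

# User walking toward a static object 1.2 m away: predicted gap 0.2 m.
print(may_collide((0, 0), (1, 0), (1.2, 0), (0, 0)))  # -> True
```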
In step S302, an image of the user's surroundings is captured by the camera, and the image of the object with which a collision may occur is detected from the captured image. The shape of the object that may collide can be specifically recognized; for example, information such as its contour line, 3D mesh, and 3D model may be recognized.
In step S303, based on the positional relationship between the camera and the user's eyes, viewpoint correction is performed on the detected image of the object with which a collision may occur, so as to obtain the binocular view of that object.
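At its simplest, viewpoint correction of this kind amounts to re-expressing scene geometry in each eye's coordinate frame before projecting it. The sketch below reduces the camera-to-eye relationship to a pure horizontal offset and uses a made-up pinhole model (focal length, principal point, and eye separation are all assumed values); a real head-mounted display calibration would use full extrinsic and intrinsic matrices per eye:

```python
def eye_pixel(point_cam, eye_offset, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D point (camera coordinates, metres) into one eye's view.

    The camera-to-eye transform is simplified to a horizontal translation
    (eye_offset along x, metres), followed by a pinhole projection with
    illustrative intrinsics. This is a didactic sketch, not a calibrated
    stereo pipeline.
    """
    x, y, z = point_cam
    x -= eye_offset          # shift into the eye's coordinate frame
    return (cx + f * x / z, cy + f * y / z)

# Left/right eyes at -/+32 mm from the camera: binocular disparity appears.
left = eye_pixel((0.0, 0.0, 2.0), -0.032)
right = eye_pixel((0.0, 0.0, 2.0), 0.032)
print(left[0] - right[0])  # -> 16.0 (pixels of disparity)
```

The nonzero disparity between the two projections is what lets the user's brain recover the object's depth, as discussed for the binocular view throughout this description.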
In step S304, a virtual field-of-view image superimposed with the binocular view of the object with which a collision may occur is presented to the user.
As an example, the object with which a collision may occur can be displayed as at least one of: semi-transparent, a contour line, or a 3D mesh. Displaying the object as a contour line, i.e. drawing only the outline of its edges, reduces the impact on the virtual scene image.
In addition, text, identifying images, audio, video, and other means can be used to prompt the user that an object with which a collision may occur exists nearby. For example, prompt information can be displayed in the virtual field-of-view image (for example, in text and/or graphic form) to indicate the user's distance from the object.
In step S305, it is determined whether the surrounding real-world object needs to continue to be presented to the user.
As an example, when the user is detected moving away from the displayed object with which a collision may occur, it may be determined that the object no longer needs to be displayed to the user; when the object is detected moving away from the user, it may likewise be determined that the object no longer needs to be displayed; and when an instruction from the user to cancel the display of the real-world object is received, it may be determined that the real-world object does not need to be presented, where the instruction may be at least one of the following: a voice instruction, a key instruction, a virtual cursor instruction, an instruction from a wearable device, etc.
When it is determined in step S305 that the surrounding real-world object does not need to continue to be presented to the user, step S306 is executed. In step S306, a virtual field-of-view image without the superimposed binocular view of the real-world object is presented to the user; that is, presenting the binocular view of the real-world object ends, and only the virtual scene image is presented.
In addition, a risk-distance threshold may be set, along with different display modes for different risk distances and danger levels, so that the user can be prompted in a suitable manner according to the detected dangerous situation.
Hereinafter, the application scenario of showing display items about IoT devices in a head-mounted display will be described with reference to Figures 21 to 25. In this way, the user can still learn the working status and other relevant information of surrounding IoT devices while using the head-mounted display.
Figure 21 illustrates a flowchart of a method for showing display items about IoT devices in a head-mounted display according to an exemplary embodiment of the present invention. Here, all the steps in the method may be executed by the head-mounted display, or some steps may be executed by the head-mounted display and the remaining steps by a processor outside the head-mounted display.
As shown in Figure 21, in step S401, a display item about an IoT device is obtained.
Here, the display item represents at least one of the following about the IoT device: an operation interface, a working status, notification information, indication information.
As an example, the user's real field of view can be monitored in real time, and when an IoT device is detected appearing in the user's real field of view, the corresponding display item can be obtained according to the type of the IoT device. For example, the user's real field of view can be monitored in real time based on information measured by the Inertial Measurement Unit (IMU) of the head-mounted display together with a map of the devices in the user's room. It can also be obtained by analyzing the field of view of a camera mounted on the head-mounted display.
For example, when a surveillance camera is detected in the user's real field of view, the picture captured by the surveillance camera can be obtained. When an air conditioner is detected in the user's real field of view, parameters such as its temperature and humidity can be obtained via communication. When a clock is detected in the user's real field of view, the time it displays can be obtained from the captured image. When a cooking appliance (for example, an oven) is detected in the user's real field of view, its working status, such as temperature, can be obtained. When a mobile communication terminal (for example, a smartphone) is detected in the user's real field of view, its operation interface can be obtained.
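The per-device-type retrieval just listed can be sketched as a simple dispatch from device type to display item. The device records, type names, and item shapes below are illustrative assumptions:

```python
# Sketch: when an IoT device enters the user's real field of view, build
# the display item appropriate to its type. Device fields are made up.

def display_item(device):
    """Return a display item dict for a recognized device, or None."""
    kind = device["type"]
    if kind == "camera":
        return {"kind": "video", "content": device["feed"]}
    if kind == "air_conditioner":
        return {"kind": "status",
                "content": f"{device['temp']}°C, {device['humidity']}%"}
    if kind == "clock":
        return {"kind": "text", "content": device["time"]}
    if kind == "phone":
        return {"kind": "ui", "content": device["screen"]}
    return None    # unrecognized device type: nothing to overlay

print(display_item({"type": "air_conditioner", "temp": 24, "humidity": 40}))
# -> {'kind': 'status', 'content': '24°C, 40%'}
```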
In addition, display items about an IoT device can also be received from an IoT device located in the user's real field of view. For example, when a guest arrives at the gate, the smart doorbell at the gate can send a notification to the head-mounted display, and the head-mounted display can receive the image of the doorway. For example, the head-mounted display can communicate with the user's mobile communication terminal, and when the mobile communication terminal receives a message requiring the user's response, the operation interface of the mobile communication terminal can be sent to the head-mounted display.
In step S402, the obtained display item is added to the virtual field-of-view image. Here, the display item of the IoT device can be further superimposed on a virtual field-of-view image that is already superimposed with the binocular view of a real-world object. However, it should be understood that the present invention is not limited to this: the virtual field-of-view image may not include the binocular view of any real-world object, or even any virtual scene image, and may show only the display item of the IoT device.
In step S403, the virtual field-of-view image superimposed with the display item is presented to the user. Here, the display items of the IoT devices can be presented in any suitable layout, so that the user can interact well with the IoT devices; preferably, the interaction between the user and the virtual scene and real-world objects can also be taken into account at the same time.
Figure 22 illustrates an example of presenting to the user a virtual field-of-view image superimposed with display items according to an exemplary embodiment of the present invention. As shown in Figure 22, the virtual field-of-view image of the head-mounted display can show the temperature and humidity of the air conditioner, the time displayed by the clock, the working status of the oven, the operation interface of the mobile communication terminal, and the image captured by the access control device; in addition, an arrow indication image may also be displayed to prompt the user about the direction, relative to the user, of the oven in which food is being cooked.
In step S404, the IoT device is remotely controlled to execute corresponding processing according to the user's operation on the display item.
For example, if the IoT device is a mobile communication terminal, the head-mounted display can communicate with it; when the mobile communication terminal receives a message requiring the user's response, its operation interface can be shown in the virtual field-of-view image, and the user can perform corresponding operations through the head-mounted display, for example remotely controlling the mobile communication terminal to make and answer calls. In addition, when the user has finished remotely controlling and/or checking the mobile communication terminal, the display of its display item in the virtual field-of-view image can be ended by detecting the user's input; the user's input here refers to the input methods described above and is not repeated. Figure 23 illustrates an example of presenting to the user a virtual field-of-view image superimposed with the operation interface of a mobile communication terminal according to an exemplary embodiment of the present invention. As shown in Figure 23, the operation interface of the mobile communication terminal can be shown in the virtual field-of-view image of the head-mounted display, so that the user learns in time the messages received by the mobile communication terminal and/or remotely controls it.
For example, suppose the user is playing a virtual game with the head-mounted display, the mobile communication terminal has an incoming call, and the terminal is outside the user's real field of view. The head-mounted display can then receive the incoming-call information sent by the mobile communication terminal and show it in the virtual field-of-view image, so that the user neither has to stop the game nor misses an important call. If the user decides to answer, the call can be answered directly through the head-mounted display (for example, using the head-mounted display as a Bluetooth headset); in addition, indication information can be displayed to the user in the virtual field-of-view image (for example, by means of an arrow indicator, text, etc.) to indicate the direction of the mobile communication terminal relative to the user. If the user decides not to answer, the call can be hung up directly through the head-mounted display, or the mobile communication terminal can be remotely controlled to hang up; the user may also simply take no action. If the user wishes to call back later, a call-back task can be arranged in the head-mounted display, or the mobile communication terminal can be remotely controlled to set a call-back reminder.
Figure 24 illustrates an example of presenting to the user a virtual field-of-view image superimposed with the incoming-call information of a mobile communication terminal according to an exemplary embodiment of the present invention. As shown in Figure 24, when the mobile communication terminal receives an incoming call, its incoming-call information can be shown in the virtual field-of-view image of the head-mounted display, so that the user learns of the call in time; an arrow indication image may also be displayed to prompt the user about the direction of the mobile communication terminal relative to the user. In addition, the user can arrange a call-back task in the head-mounted display or remotely control the mobile communication terminal to set a call-back reminder.
For example, when the mobile communication terminal receives a short message, the head-mounted display can receive the short message sent by the terminal and show it in the virtual field-of-view image. If the user wishes to reply, the message can be edited in the head-mounted display and the mobile communication terminal remotely controlled to send the reply; in addition, indication information can be displayed to the user (for example, by means of an arrow indicator, text, etc.) to indicate the direction of the mobile communication terminal relative to the user. If the user wishes to reply later, a reply task can be arranged in the head-mounted display, or the mobile communication terminal can be remotely controlled to set a reply reminder. If the user wishes to talk to the sender by phone, a call can be placed through the head-mounted display according to the user's operation (for example, using the head-mounted display as a Bluetooth headset), and again indication information can be displayed to the user (for example, by means of an arrow indicator, text, etc.) to indicate the direction of the mobile communication terminal relative to the user.
Figure 25 illustrates an example of presenting to the user a virtual field-of-view image superimposed with a short message received by a mobile communication terminal according to an exemplary embodiment of the present invention. As shown in Figure 25, when the mobile communication terminal receives a short message, the head-mounted display can show the short message in the virtual field-of-view image, so that the user learns of it in time; an arrow indication image may also be displayed to prompt the user about the direction of the mobile communication terminal relative to the user. In addition, the user can also call the sender directly through the head-mounted display.
In addition, the user can configure which IoT devices should show display items. A list-selection interface can list the IDs of all IoT devices, and the user can select or deselect each one. Detailed settings can also be made per device: the message types each IoT device can send can be listed, and for each message type the user can set whether it should be displayed and in what manner.
In addition, multiple interruption levels can be set according to whether the application running on the head-mounted display may be disturbed, from accepting any message (for example, a virtual theater application) to not wishing to be disturbed at all (for example, an intensely competitive real-time online virtual game). For a high-level application, a minimally intrusive prompt (for example, a blinking dot) can be used; for a low-level application, the full content of the message can be shown.
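These interruption levels can be sketched as a small filter applied before a notification reaches the virtual field-of-view image. The three levels and the renderings below are illustrative assumptions, not values fixed by this description:

```python
def render_notification(app_level, message):
    """Decide how much of an IoT message to show, given the running app's
    do-not-disturb level.

    Assumed levels: 0 = accepts any message (e.g. virtual theater),
    1 = prefers a minimal cue, 2 = competitive real-time game, no
    interruptions. Renderings are illustrative.
    """
    if app_level >= 2:
        return None                  # suppress the notification entirely
    if app_level == 1:
        return "*"                   # minimal cue, e.g. a blinking dot
    return message                   # show the full message content

print(render_notification(0, "Doorbell: guest at the gate"))
# -> Doorbell: guest at the gate
```

Per-device and per-message-type settings, as described above, would feed into the same filter before rendering.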
Hereinafter, a head-mounted display for displaying real-world objects according to an exemplary embodiment of the present invention will be described with reference to Figures 26 to 30. The devices included in the head-mounted display for displaying real-world objects can be implemented in combination with special components (for example, sensor components). As an example, the devices can be implemented by general-purpose hardware processors such as digital signal processors or field-programmable gate arrays, or by dedicated hardware processors such as special-purpose chips; they can also be implemented entirely in software as a computer program, for example as a module in an application installed on the head-mounted display for displaying real-world objects, or as a function program implemented in the operating system of the head-mounted display. In addition, alternatively, some or all of the devices may be integrated in the head-mounted display that displays real-world objects; the present invention is not limited in this respect.
Figure 26 illustrates a block diagram of a head-mounted display for displaying real-world objects according to an exemplary embodiment of the present invention. As shown in Figure 26, the head-mounted display for displaying real-world objects includes a real-world-object view acquisition device 10 and a display device 20.
Specifically, the real-world-object view acquisition device 10 is used to obtain the binocular view of a real-world object located around the user.
Here, the binocular view is a binocular view for the eyes of the user wearing the head-mounted display. Through the binocular view of the real-world object, the user's brain can obtain the depth information of the real-world object and thereby perceive its actual three-dimensional position and three-dimensional attitude; that is, the three-dimensional position and attitude of the real-world object perceived by the user through its binocular view are consistent with the three-dimensional position and attitude the user would perceive by observing the real-world object directly with the eyes.
As an example, the real-world object may be an object presented according to its attributes or according to a preset usage scene, and may include at least one of the following: an object close to the user's body, a tagged object, an object specified by the user, an object currently needed by the application running on the head-mounted display, and an object required for operating controls.
The display device 20 presents to the user a virtual field-of-view image on which the binocular view of the real-world object is superimposed. With the head-mounted display described above, the user can watch the binocular view of the real-world object within the virtual field-of-view image (that is, augmented virtual reality), perceive the actual three-dimensional spatial position and pose of the real-world object, accurately judge the positional relationship between the real-world object and himself or herself as well as the object's three-dimensional pose, and complete necessary actions that require visual feedback.
It should be appreciated here that the virtual field-of-view image superimposed with the binocular view of the real-world object may be presented to the user by a display device integrated in the head-mounted display, or by another display device externally connected to the head-mounted display; the present invention is not limited in this regard.
Figure 27 is a block diagram of a head-mounted display for displaying a real-world object according to another exemplary embodiment of the present invention. As shown in Figure 27, in addition to the real-world object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include an image capture device 30 and a binocular view generation device 40.
Specifically, the image capture device 30 captures, by a photographing device, an image including the real-world object located around the user.
The binocular view generation device 40 obtains, according to the captured image, the binocular view of the real-world object located around the user.
As an example, the image capture device 30 may capture the image including the real-world object around the user through a single photographing device, and the binocular view generation device 40 may obtain the binocular view of the real-world object around the user according to the captured image.
Here, the single photographing device may be an ordinary photographing device with only one viewpoint. Since the image captured by the image capture device 30 then has no depth information, the binocular view generation device 40 may correspondingly detect a real-world object image from the captured image, determine a real-world object image of another viewpoint based on the detected real-world object image, and obtain the binocular view of the real-world object based on the detected real-world object image and the real-world object image of the other viewpoint.
Here, the real-world object image is the image of the region where the real-world object is located within the captured image. For example, the binocular view generation device 40 may detect the real-world object image from the captured image using various existing image recognition methods.
As an example, the binocular view generation device 40 may perform viewpoint correction on the detected real-world object image and the real-world object image of the other viewpoint, based on the positional relationship between the single photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
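As an illustrative sketch of the single-camera case (an assumption-laden stand-in, not the claimed algorithm), the real-world object image of the other viewpoint can be approximated by shifting the detected object image horizontally by the expected disparity:

```python
def synthesize_other_viewpoint(row_major_image, disparity):
    """Approximate the object image seen from the other eye by shifting
    every row of the detected real-world object image horizontally by the
    expected disparity (whole pixels). A crude stand-in for the 'determine
    a real-world object image of another viewpoint' step; a real
    implementation would warp per pixel using estimated depth."""
    shifted = []
    for row in row_major_image:
        # pad entering pixels with 0 and drop the same number at the far edge
        shifted.append([0] * disparity + row[:len(row) - disparity])
    return shifted

left_view = [[1, 2, 3, 4],
             [5, 6, 7, 8]]
right_view = synthesize_other_viewpoint(left_view, 1)
# right_view == [[0, 1, 2, 3], [0, 5, 6, 7]]
```

The pair (left_view, right_view) then plays the role of the two images fed into the viewpoint correction described above.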
As another example, the binocular view generation device 40 may obtain the binocular view of the real-world object based on a captured stereo image. Specifically, the image capture device 30 may capture, by a photographing device, an image including the real-world object around the user, and the binocular view generation device 40 may detect the real-world object image from the captured image and obtain the binocular view of the real-world object based on the detected real-world object image, wherein the photographing device includes a depth camera, or the photographing device includes at least two single-viewpoint cameras. Here, the at least two single-viewpoint cameras may have overlapping fields of view, so that a stereo image with depth information can be captured by the depth camera or by the at least two single-viewpoint cameras.
As an example, the binocular view generation device 40 may perform viewpoint correction on the detected real-world object image, based on the positional relationship between the photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
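When depth is available, the viewpoint correction amounts to re-projecting each observed 3D point into each eye's image. The following sketch assumes a pure translation between the camera and the eye and an arbitrary focal length (both assumptions of this example; a real implementation would also apply the rotation between the two frames):

```python
def reproject_to_eye(point_cam, cam_to_eye_offset, focal_px=1000.0):
    """Project a 3D point measured in the photographing device's frame into
    one eye's image plane, given an assumed fixed translation between the
    camera and that eye (rotation omitted for brevity). With a depth camera
    every pixel yields such a 3D point, so applying this once per eye
    produces the two corrected views."""
    x, y, z = (p + o for p, o in zip(point_cam, cam_to_eye_offset))
    return (focal_px * x / z, focal_px * y / z)

# illustrative numbers: camera mounted 32 mm to the side of the eye,
# object point 10 cm right of the optical axis, 2 m away
u, v = reproject_to_eye((0.1, 0.0, 2.0), (0.032, 0.0, 0.0))
```

Running this for the left-eye and right-eye offsets yields the two views whose disparity encodes the object's true depth.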
It should be understood that, as an example, the single photographing device, the depth camera, or the single-viewpoint cameras described above may be cameras built into the head-mounted display, or may be photographing devices attached to the head-mounted display, for example the camera of another device (for example, a smartphone); the present invention is not limited in this regard.
Preferably, when the binocular view generation device 40 cannot detect the real-world object image of a desired display object among the real-world objects, the image capture device 30 may enlarge the shooting angle of view to recapture an image including the desired display object.
Alternatively, when the binocular view generation device 40 cannot detect the real-world object image of the desired display object among the real-world objects, the image capture device 30 may prompt the user to turn toward the direction in which the desired display object is located, so as to recapture an image including the desired display object. For example, the user may be prompted by an image, text, audio, or video. As an example, the image capture device 30 may prompt the user to turn toward the direction of the desired display object based on a prestored three-dimensional spatial position of the real-world object, or based on a three-dimensional spatial position of the real-world object obtained via a positioning device, so as to recapture an image including the desired display object.
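A hedged sketch of such a prompt, deriving a turn direction from a prestored object position and the user's current heading (the coordinate convention and the 30-degree "ahead" threshold are illustrative assumptions, not taken from the patent):

```python
import math

def turn_prompt(object_pos, user_heading_deg):
    """object_pos is (x, z) in metres in the user's reference frame
    (x: right, z: forward); user_heading_deg is the current gaze yaw.
    Returns a simple textual prompt telling the user which way to turn
    so the object enters the camera's view."""
    bearing = math.degrees(math.atan2(object_pos[0], object_pos[1]))
    # signed difference wrapped into (-180, 180]
    delta = (bearing - user_heading_deg + 180) % 360 - 180
    if abs(delta) < 30:          # already roughly in view
        return "ahead"
    return "turn right" if delta > 0 else "turn left"
```

The returned string (or an equivalent arrow image or audio cue) is what would be presented to the user before recapturing.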
Figure 28 is a block diagram of a head-mounted display for displaying a real-world object according to another exemplary embodiment of the present invention. As shown in Figure 28, in addition to the real-world object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a virtual field-of-view image generation device 50.
Specifically, the virtual field-of-view image generation device 50 generates the virtual field-of-view image superimposed with the binocular view of the real-world object.
As an example, the head-mounted display for displaying a real-world object according to this exemplary embodiment may further include a virtual scene image acquisition device (not shown).
The virtual scene image acquisition device obtains a virtual scene image reflecting a virtual scene, and the virtual field-of-view image generation device 50 generates the virtual field-of-view image by superimposing the binocular view of the real-world object on the virtual scene image. That is, the virtual field-of-view image presented to the user spatially merges the binocular view of the real-world object with the virtual scene image, so that the user can, while still experiencing the normal virtual scene of the head-mounted display, complete necessary interactions with the real-world object that require visual feedback.
Here, the virtual scene image is the image, corresponding to the application currently running on the head-mounted display, that reflects the virtual scene to be presented to the user in the user's virtual field of view. For example, if the application currently running on the head-mounted display is a virtual motion-sensing game such as boxing or golf, the virtual scene image is the image reflecting the virtual game scene that needs to be presented in the virtual field of view; if the application currently running is an application for watching movies, the virtual scene image is the image reflecting the virtual theater screen scene that needs to be presented in the virtual field of view.
As an example, the virtual field-of-view image generation device 50 may add the binocular view of the real-world object to the virtual field-of-view image of the head-mounted display in one of the following manners: displaying only the binocular view of the real-world object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real-world object; spatially merging the binocular view of the real-world object and the virtual scene image for display; displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form.
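The two picture-in-picture manners listed above reduce to pasting one image into a region of the other. A minimal sketch over row-major pixel arrays (illustrative only; a real compositor would do this per eye at the render stage):

```python
def picture_in_picture(base, inset, top, left):
    """Overlay 'inset' (e.g. the real-world object's view) onto 'base'
    (e.g. the virtual scene image) with its top-left corner at
    (top, left). Images are row-major lists of pixel values; the base
    image is not modified in place."""
    out = [row[:] for row in base]
    for r, row in enumerate(inset):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out

scene = [[0] * 4 for _ in range(3)]
obj = [[9, 9]]
framed = picture_in_picture(scene, obj, 2, 1)
# framed == [[0, 0, 0, 0], [0, 0, 0, 0], [0, 9, 9, 0]]
```

Swapping the roles of `base` and `inset` gives the inverse manner, the virtual scene image shown in picture-in-picture form over the binocular view.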
As an example, the virtual field-of-view image generation device 50 may display the real-world object as at least one of semi-transparent, contour lines, or a 3D mesh. For example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device 50 may display the real-world object as at least one of semi-transparent, contour lines, or a 3D mesh, so as to reduce its occlusion of the virtual object in the virtual scene image and lessen the impact on viewing the virtual scene image.
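The semi-transparent manner is ordinary alpha blending of the real-world object's pixels over the virtual scene's pixels. A per-pixel sketch (the 0.4 opacity is an arbitrary illustrative value):

```python
def blend_semitransparent(virtual_px, real_px, alpha=0.4):
    """Render the real-world object's pixel semi-transparently over the
    virtual scene's pixel so it occludes the virtual object less.
    Pixels are RGB tuples; alpha is the real object's opacity."""
    return tuple(round(alpha * r + (1 - alpha) * v)
                 for r, v in zip(real_px, virtual_px))

# blend a pure-red real pixel over a pure-blue virtual scene pixel
px = blend_semitransparent((0, 0, 255), (255, 0, 0))
```

Contour-line or 3D-mesh display would instead rasterize only the object's detected edges or a wireframe over the virtual scene.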
In addition, as an example, when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual field-of-view image generation device 50 may also scale and/or move the virtual object. For example, it may scale and/or move only the virtual object that is occluded by the real-world object in three-dimensional space, or, when occlusion exists, scale and/or move all virtual objects in the virtual scene image. It should be understood that the virtual field-of-view image generation device 50 may automatically determine the occlusion in three-dimensional space between virtual objects in the virtual scene image and the real-world object, and scale and/or move the virtual objects accordingly. In addition, the virtual field-of-view image generation device 50 may also scale and/or move the virtual objects according to the user's operation.
Moreover, preferably, the virtual field-of-view image generation device 50 may add and/or delete real-world objects in the virtual field-of-view image according to the user's operation. That is, according to his or her own needs, the user may add to the virtual field-of-view image the binocular view of a real-world object not currently displayed in it, and/or delete from the virtual field-of-view image the binocular view of a real-world object that is unnecessary and need not be presented, so as to reduce the impact on viewing the virtual scene image.
Preferably, the virtual field-of-view image superimposed with the binocular view of the real-world object may be presented to the user only under appropriate circumstances, and the presentation may likewise be ended under appropriate circumstances. Figure 29 is a block diagram of a head-mounted display for displaying a real-world object according to another exemplary embodiment of the present invention. As shown in Figure 29, in addition to the real-world object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a display control device 60.
Specifically, the display control device 60 determines whether the real-world object around the user needs to be presented to the user. When the display control device 60 determines that the real-world object around the user needs to be presented, the real-world object view acquisition device 10 obtains the binocular view of the real-world object located around the user and/or the display device 20 presents to the user the virtual field-of-view image superimposed with the binocular view of the real-world object.
As an example, the display control device 60 may determine that the real-world object around the user needs to be presented when it detects a scenario requiring interaction with that real-world object. For example, scenarios requiring interaction with a real-world object around the user may include at least one of the following: a scenario requiring an input operation through a real-world object (for example, performing input using a keyboard, mouse, or game controller), a scenario requiring avoidance of collision with a real-world object (for example, avoiding an approaching person), and a scenario requiring grasping of a real-world object (for example, eating or drinking).
According to exemplary embodiments of the present invention, the timing for presenting the real-world object may be determined according to various situations. As an example, the display control device 60 may determine that the real-world object around the user needs to be presented when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; the real-world object around the user is determined to match a preset presentation object; a control requiring operation via a real-world object is detected in the application interface displayed in the virtual field-of-view image; a body part of the user is detected to be close to the real-world object; a body part of the user is detected to be moving toward the real-world object; the application running on the head-mounted display is determined to currently need the real-world object; or a preset time for interacting with the real-world object around the user is determined to have arrived.
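As a sketch only, the "any condition triggers presentation" logic above is a simple disjunction over event flags (the key names here are illustrative, not from the patent):

```python
def should_present(events):
    """Return True when any presentation-triggering condition holds.
    'events' maps condition names to booleans; absent keys count False."""
    triggers = (
        "user_requested",        # user input requesting presentation
        "matches_preset_object", # object matches a preset presentation object
        "control_needs_object",  # displayed UI control requires a real object
        "body_part_near",        # body part close to the object
        "moving_toward_object",  # body part moving toward the object
        "app_needs_object",      # running application currently needs it
        "preset_time_reached",   # scheduled interaction time arrived
    )
    return any(events.get(t, False) for t in triggers)
```

The display control device 60 would evaluate such a predicate each frame, or whenever any of the underlying detectors fires.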
The display control device 60 can thus determine at which moment the user needs to see the surrounding real-world object, that is, the timing at which the binocular view of the surrounding real-world object needs to be presented to the user, so that the user can learn the actual three-dimensional spatial position and pose of the surrounding real-world object in time.
As an example, the display control device 60 may further determine whether the real-world object around the user needs to continue to be presented to the user. When the display control device 60 determines that presentation need not continue, the display device 20 may present to the user a virtual field-of-view image on which the binocular view of the real-world object is not superimposed.
As an example, the display control device 60 may determine that the real-world object around the user need not continue to be presented when the scenario requiring interaction with that real-world object ends.
As an example, the display control device 60 may determine that the real-world object need not be presented to the user when at least one of the following conditions is satisfied: a user input ending the presentation of the real-world object is received; the real-world object around the user is determined not to match the preset presentation object; no control requiring operation via a real-world object is detected in the application interface displayed in the virtual field-of-view image; the body part of the user is determined to be away from the real-world object; the application running on the head-mounted display is determined not to currently need the real-world object; the user is determined not to have performed any operation using the real-world object during a preset time period; or the user is determined to be able to perform the operation without watching the real-world object.
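The end-of-presentation logic mirrors the start logic. A sketch under the same assumptions (illustrative flag names; the 10-second idle limit stands in for the patent's "preset time period"):

```python
def should_stop_presenting(events, idle_seconds, idle_limit=10.0):
    """Return True when any end-presentation condition holds.
    idle_seconds is how long the user has gone without operating the
    real-world object; idle_limit is an assumed preset period."""
    flags = (
        "user_ended",              # user input ending the presentation
        "object_left_preset",      # object no longer matches preset object
        "no_control_needs_object", # no displayed control needs a real object
        "body_part_away",          # body part moved away from the object
        "app_done_with_object",    # application no longer needs the object
        "operable_without_view",   # user can operate without watching it
    )
    if any(events.get(f, False) for f in flags):
        return True
    return idle_seconds >= idle_limit   # no operation during preset period
```

When this predicate becomes true, the display device 20 reverts to the plain virtual scene image, as described above.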
As for the user input requesting presentation of the real-world object and/or ending its presentation, it may, as an example, be realized by at least one of the following: a touch operation, a physical button operation, a remote-control command input operation, a voice-control operation, a gesture action, a head action, a body action, a gaze action, a touching action, or a gripping action.
As for the preset presentation object, it may be an object that the head-mounted display is preset to present, or an object set by the user according to his or her own needs. For example, the preset presentation object may be food, tableware, the user's hands, an object bearing a specific label, a person, and so on.
When the display control device 60 determines that the real-world object around the user need not be presented, the virtual field-of-view image not superimposed with the binocular view of the real-world object may be presented to the user (that is, only the virtual scene image is presented), so as not to affect the user's viewing of the virtual scene image.
Figure 30 is a block diagram of a head-mounted display for displaying a real-world object according to another exemplary embodiment of the present invention. As shown in Figure 30, in addition to the real-world object view acquisition device 10 and the display device 20 shown in Figure 26, the head-mounted display may further include a display item acquisition device 70.
The display item acquisition device 70 obtains a display item regarding an Internet of Things (IoT) device and adds the obtained display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the IoT device: an operation interface, an operation state, notification information, and indication information.
Here, the notification information may be text, audio, video, an image, or similar information. For example, if the IoT device is a communication device, the notification message may be a text message about a missed call; if the IoT device is an access control device, the notification message may be a captured surveillance image.
The indication information is text, audio, video, image, or similar information used to guide the user in finding the IoT device. For example, the indication information may be an arrow indicator, from whose direction the user can obtain the bearing of the IoT device relative to the user; it may also be text indicating the relative position of the user and the IoT device (for example, "the communication device is two meters to your front left").
As an example, the display item acquisition device 70 may obtain the display item regarding the IoT device by at least one of the following processes: capturing an image of an IoT device located in the user's real field of view and extracting the display item from the captured image of the IoT device; receiving the display item from an IoT device located in and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view as the indication information.
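A hedged sketch of turning a sensed relative position into textual indication information of the kind quoted above (the coordinate convention and wording are assumptions of this example):

```python
import math

def indication_text(device_name, dx, dz):
    """Build textual indication information for an IoT device outside the
    real field of view, from its sensed offset relative to the
    head-mounted display (dx: metres to the right, dz: metres forward)."""
    dist = round(math.hypot(dx, dz), 1)
    side = "right" if dx > 0 else "left"
    depth = "front" if dz > 0 else "back"
    return f"{device_name} is {dist} m to your {depth} {side}"

msg = indication_text("communication device", -1.2, 1.6)
```

The same (dx, dz) offset could equally drive the arrow-indicator form of the indication information.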
In addition, the head-mounted display for displaying a real-world object according to another exemplary embodiment of the present invention may further include a manipulation device (not shown).
The manipulation device remotely controls the IoT device to execute corresponding processing according to the user's operation on the display item.
With the head-mounted display described above, the user can learn relevant information about surrounding IoT devices while using the head-mounted display, and can also remotely control an IoT device to execute corresponding processing.
It should be understood that the head-mounted display for displaying a real-world object according to exemplary embodiments of the present invention may execute corresponding processing for the concrete application scenarios described with reference to Figures 5 to 25, which will not be repeated here.
With the method of displaying a real-world object in a head-mounted display and the head-mounted display according to exemplary embodiments of the present invention, a virtual field-of-view image superimposed with the binocular view of the real-world object around the user can be presented to the user. The user, while wearing the head-mounted display, can thus still perceive three-dimensional information such as the actual three-dimensional spatial position, pose, and related attributes of the surrounding real-world object, making it easy to interact with the surrounding real-world object and complete necessary actions that require visual feedback. In addition, embodiments of the present invention can judge the timing at which the binocular view of the surrounding real-world object needs to be displayed to the user, and can display that binocular view in the virtual field-of-view image in a suitable manner, so that the user obtains an enhanced virtual field-of-view experience.
Although some exemplary embodiments of the present invention have been shown and described, those skilled in the art should understand that these embodiments may be modified without departing from the principle and spirit of the present invention, the scope of which is defined by the claims and their equivalents.

Claims (20)

1. A method of displaying a real-world object in a head-mounted display, comprising:
(A) obtaining a binocular view of a real-world object located around a user; and
(B) presenting to the user a virtual field-of-view image superimposed with the binocular view of the real-world object.
2. The method according to claim 1, wherein step (A) and/or step (B) is executed when it is determined that the real-world object around the user needs to be presented to the user.
3. The method according to claim 1 or 2, wherein the real-world object includes at least one of the following: an object close to the user's body, a labeled object, an object specified by the user, an object currently needed by the application running on the head-mounted display, and an object required for operating a control.
4. The method according to claim 2, wherein it is determined that the real-world object needs to be presented to the user when at least one of the following conditions is satisfied: a user input requesting presentation of the real-world object is received; the real-world object around the user is determined to match a preset presentation object; a control requiring operation via a real-world object is detected in the application interface displayed in the virtual field-of-view image; a body part of the user is detected to be close to the real-world object; a body part of the user is detected to be moving toward the real-world object; the application running on the head-mounted display is determined to currently need the real-world object; or a preset time for interacting with the real-world object around the user is determined to have arrived.
5. The method according to claim 2, further comprising: (C) presenting to the user a virtual field-of-view image not superimposed with the binocular view of the real-world object when at least one of the following conditions is satisfied: a user input ending presentation of the real-world object is received; the real-world object around the user is determined not to match the preset presentation object; no control requiring operation via a real-world object is detected in the application interface displayed in the virtual field-of-view image; the body part of the user is determined to be away from the real-world object; the application running on the head-mounted display is determined not to currently need the real-world object; the user is determined not to have performed any operation using the real-world object during a preset time period; or the user is determined to be able to perform the operation without watching the real-world object.
6. The method according to any one of claims 1 to 5, wherein step (A) comprises: capturing an image including the real-world object around the user by a single photographing device, and obtaining the binocular view of the real-world object around the user according to the captured image.
7. The method according to claim 6, wherein in step (A), a real-world object image is detected from the captured image, a real-world object image of another viewpoint is determined based on the detected real-world object image, and the binocular view of the real-world object is obtained based on the detected real-world object image and the real-world object image of the other viewpoint.
8. The method according to claim 7, wherein the step of obtaining the binocular view of the real-world object based on the detected real-world object image and the real-world object image of the other viewpoint comprises: performing viewpoint correction on the detected real-world object image and the real-world object image of the other viewpoint, based on the positional relationship between the single photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
9. The method according to any one of claims 1 to 5, wherein step (A) comprises: capturing an image including the real-world object around the user by a photographing device, detecting a real-world object image from the captured image, and obtaining the binocular view of the real-world object based on the detected real-world object image, wherein the photographing device includes a depth camera, or the photographing device includes at least two single-viewpoint cameras.
10. The method according to claim 9, wherein the step of obtaining the binocular view of the real-world object based on the detected real-world object image comprises: performing viewpoint correction on the detected real-world object image, based on the positional relationship between the photographing device and the user's two eyes, so as to obtain the binocular view of the real-world object.
11. The method according to any one of claims 6 to 10, wherein, when the real-world object image of a desired display object among the real-world objects is not detected, the shooting angle of view is enlarged to recapture an image including the desired display object, or the user is prompted to turn toward the direction of the desired display object so as to recapture an image including the desired display object.
12. The method according to any one of claims 1 to 11, wherein in step (B), a virtual scene image reflecting a virtual scene is obtained, and the virtual field-of-view image is generated by superimposing the binocular view of the real-world object on the virtual scene image.
13. The method according to claim 12, wherein in step (B), when a virtual object in the virtual scene image and the real-world object occlude each other in three-dimensional space, the virtual object is scaled and/or moved.
14. The method according to claim 12, wherein in step (B), the real-world object is displayed as at least one of semi-transparent, contour lines, or a 3D mesh.
15. The method according to claim 12, wherein in step (B), the binocular view of the real-world object is added to the virtual field-of-view image of the head-mounted display in one of the following display manners: displaying only the binocular view of the real-world object without the virtual scene image; displaying only the virtual scene image without the binocular view of the real-world object; spatially merging the binocular view of the real-world object and the virtual scene image for display; displaying the binocular view of the real-world object on the virtual scene image in picture-in-picture form; or displaying the virtual scene image on the binocular view of the real-world object in picture-in-picture form.
16. The method according to any one of claims 1 to 15, wherein in step (B), the real-world object added to the virtual field-of-view image is added and/or deleted according to the user's operation.
17. The method according to any one of claims 1 to 16, further comprising:
(D) obtaining a display item regarding an Internet of Things (IoT) device, and adding the obtained display item to the virtual field-of-view image, wherein the display item represents at least one of the following items of the IoT device: an operation interface, an operation state, notification information, and indication information.
18. The method according to claim 17, wherein the display item regarding the IoT device is obtained by at least one of the following processes: capturing an image of an IoT device located in the user's real field of view, and extracting the display item regarding the IoT device from the captured image; receiving the display item regarding the IoT device from an IoT device located in and/or outside the user's real field of view; and sensing the position, relative to the head-mounted display, of an IoT device located outside the user's real field of view as the indication information.
19. The method according to claim 17, further comprising: (E) remotely controlling the IoT device to execute corresponding processing according to the user's operation on the display item.
20. A head-mounted display for displaying a real-world object, comprising:
a real-world object view acquisition device that obtains a binocular view of a real-world object located around a user; and
a display device that presents to the user a virtual field-of-view image superimposed with the binocular view of the real-world object.
CN201510549225.7A 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof Active CN106484085B (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201910549634.5A CN110275619A (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof
CN201510549225.7A CN106484085B (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof
KR1020160106177A KR20170026164A (en) 2015-08-31 2016-08-22 Virtual reality display apparatus and display method thereof
PCT/KR2016/009711 WO2017039308A1 (en) 2015-08-31 2016-08-31 Virtual reality display apparatus and display method thereof
US15/252,853 US20170061696A1 (en) 2015-08-31 2016-08-31 Virtual reality display apparatus and display method thereof
EP16842274.9A EP3281058A4 (en) 2015-08-31 2016-08-31 Virtual reality display apparatus and display method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510549225.7A CN106484085B (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201910549634.5A Division CN110275619A (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof

Publications (2)

Publication Number Publication Date
CN106484085A true CN106484085A (en) 2017-03-08
CN106484085B CN106484085B (en) 2019-07-23

Family

ID=58236359

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910549634.5A Pending CN110275619A (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof
CN201510549225.7A Active CN106484085B (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910549634.5A Pending CN110275619A (en) 2015-08-31 2015-08-31 Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof

Country Status (3)

Country Link
EP (1) EP3281058A4 (en)
KR (1) KR20170026164A (en)
CN (2) CN110275619A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106896925A (en) * 2017-04-14 2017-06-27 陈柳华 A device for merging virtual reality with a real scene
CN107168515A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 Method and device for locating a handle in a VR all-in-one headset
CN107222689A (en) * 2017-05-18 2017-09-29 歌尔科技有限公司 Real-scene switching method and device based on VR lenses
CN107229342A (en) * 2017-06-30 2017-10-03 宇龙计算机通信科技(深圳)有限公司 Document handling method and user equipment
CN107422942A (en) * 2017-08-15 2017-12-01 吴金河 Control system and method for an immersive experience
CN107577337A (en) * 2017-07-25 2018-01-12 北京小鸟看看科技有限公司 Keyboard display method and device for a head-mounted display device, and head-mounted display device
CN108040247A (en) * 2017-12-29 2018-05-15 湖南航天捷诚电子装备有限责任公司 Head-mounted augmented reality display device and method
CN108169901A (en) * 2017-12-27 2018-06-15 北京传嘉科技有限公司 VR glasses
CN108519676A (en) * 2018-04-09 2018-09-11 杭州瑞杰珑科技有限公司 Head-mounted vision-assistance device
CN108572723A (en) * 2018-02-02 2018-09-25 陈尚语 Carsickness prevention method and apparatus
CN108764152A (en) * 2018-05-29 2018-11-06 北京物灵智能科技有限公司 Method, apparatus, and storage device for realizing interactive prompts based on picture matching
CN108922115A (en) * 2018-06-26 2018-11-30 联想(北京)有限公司 Information processing method and electronic device
CN108960008A (en) * 2017-05-22 2018-12-07 华为技术有限公司 VR display method and apparatus, and VR device
CN110402578A (en) * 2017-03-22 2019-11-01 索尼公司 Image processing apparatus, method, and program
CN110475103A (en) * 2019-09-05 2019-11-19 上海临奇智能科技有限公司 Head-mounted visual device
CN110998491A (en) * 2017-08-02 2020-04-10 微软技术许可有限责任公司 Transitioning into a VR environment and alerting HMD users of real-world physical obstacles
CN111201474A (en) * 2017-10-12 2020-05-26 奥迪股份公司 Method for operating a head-wearable electronic display device and display system for displaying virtual content
CN111338466A (en) * 2018-12-19 2020-06-26 西门子医疗有限公司 Method and apparatus for controlling virtual reality display unit
CN111427447A (en) * 2020-03-04 2020-07-17 青岛小鸟看看科技有限公司 Display method of virtual keyboard, head-mounted display equipment and system
CN111448568A (en) * 2017-09-29 2020-07-24 苹果公司 Context-based application demonstration
CN111801725A (en) * 2018-09-12 2020-10-20 株式会社阿尔法代码 Image display control device and image display control program
CN111831105A (en) * 2019-04-15 2020-10-27 未来市股份有限公司 Head-mounted display system, related method and related computer readable recording medium
CN111831110A (en) * 2019-04-15 2020-10-27 苹果公司 Keyboard operation of head-mounted device
CN111831106A (en) * 2019-04-15 2020-10-27 未来市股份有限公司 Head-mounted display system, related method and related computer readable recording medium
CN112055193A (en) * 2019-06-05 2020-12-08 联发科技股份有限公司 View synthesis method and corresponding device
CN112581054A (en) * 2020-12-09 2021-03-30 珠海格力电器股份有限公司 Material management method and material management device
CN114972692A (en) * 2022-05-12 2022-08-30 北京领为军融科技有限公司 Target positioning method based on AI recognition and mixed reality
WO2023130435A1 (en) * 2022-01-10 2023-07-13 深圳市闪至科技有限公司 Interaction method, head-mounted display device, and system and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102389185B1 (en) 2017-10-17 2022-04-21 삼성전자주식회사 Electronic device and method for executing function using input interface displayed via at least portion of content
KR102076647B1 (en) * 2018-03-30 2020-02-12 데이터얼라이언스 주식회사 IoT Device Control System And Method Using Virtual reality And Augmented Reality
US11880911B2 (en) 2018-09-07 2024-01-23 Apple Inc. Transitioning between imagery and sounds of a virtual environment and a real environment
CN111124112A (en) * 2019-12-10 2020-05-08 北京一数科技有限公司 Interactive display method and device for virtual interface and entity object
CN112445341B (en) * 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 Keyboard perspective method and device of virtual reality equipment and virtual reality equipment
CN112462937B (en) * 2020-11-23 2022-11-08 青岛小鸟看看科技有限公司 Local perspective method and device of virtual reality equipment and virtual reality equipment
CN114035732A (en) * 2021-11-04 2022-02-11 海南诺亦腾海洋科技研究院有限公司 One-key method and device for controlling virtual experience content of a VR head-mounted display
CN116744195B (en) * 2023-08-10 2023-10-31 苏州清听声学科技有限公司 Parametric array loudspeaker and directional deflection method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103975268A (en) * 2011-10-07 2014-08-06 谷歌公司 Wearable computer with nearby object response
WO2015092968A1 (en) * 2013-12-19 2015-06-25 Sony Corporation Head-mounted display device and image display method
WO2015111283A1 (en) * 2014-01-23 2015-07-30 ソニー株式会社 Image display device and image display method

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6037882A (en) * 1997-09-30 2000-03-14 Levy; David H. Method and apparatus for inputting data to an electronic system
GB2376397A (en) * 2001-06-04 2002-12-11 Hewlett Packard Co Virtual or augmented reality
JP2005044102A (en) * 2003-07-28 2005-02-17 Canon Inc Image reproduction method and device
JP2009025918A (en) * 2007-07-17 2009-02-05 Canon Inc Image processor and image processing method
CN101893935B (en) * 2010-07-14 2012-01-11 北京航空航天大学 Cooperative construction method for enhancing realistic table-tennis system based on real rackets
US8884984B2 (en) * 2010-10-15 2014-11-11 Microsoft Corporation Fusing virtual content into real content
JP2012173772A (en) * 2011-02-17 2012-09-10 Panasonic Corp User interaction apparatus, user interaction method, user interaction program and integrated circuit
JP5960796B2 (en) * 2011-03-29 2016-08-02 クアルコム,インコーポレイテッド Modular mobile connected pico projector for local multi-user collaboration
US9547438B2 (en) * 2011-06-21 2017-01-17 Empire Technology Development Llc Gesture based user interface for augmented reality
JP5765133B2 (en) * 2011-08-16 2015-08-19 富士通株式会社 Input device, input control method, and input control program
US8941560B2 (en) * 2011-09-21 2015-01-27 Google Inc. Wearable computer with superimposed controls and instructions for external device
CN103018905A (en) * 2011-09-23 2013-04-03 奇想创造事业股份有限公司 Head-mounted somatosensory manipulation display system and method thereof


Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110402578B (en) * 2017-03-22 2022-05-03 索尼公司 Image processing apparatus, method and recording medium
CN110402578A (en) * 2017-03-22 2019-11-01 索尼公司 Image processing apparatus, method, and program
US11308670B2 (en) 2017-03-22 2022-04-19 Sony Corporation Image processing apparatus and method
CN107168515A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 Method and device for locating a handle in a VR all-in-one headset
CN106896925A (en) * 2017-04-14 2017-06-27 陈柳华 A device for merging virtual reality with a real scene
CN107222689A (en) * 2017-05-18 2017-09-29 歌尔科技有限公司 Real-scene switching method and device based on VR lenses
CN107222689B (en) * 2017-05-18 2020-07-03 歌尔科技有限公司 Real scene switching method and device based on VR (virtual reality) lens
CN108960008B (en) * 2017-05-22 2021-12-14 华为技术有限公司 VR display method and device and VR equipment
CN108960008A (en) * 2017-05-22 2018-12-07 华为技术有限公司 VR display method and apparatus, and VR device
CN107229342A (en) * 2017-06-30 2017-10-03 宇龙计算机通信科技(深圳)有限公司 Document handling method and user equipment
CN107577337A (en) * 2017-07-25 2018-01-12 北京小鸟看看科技有限公司 Keyboard display method and device for a head-mounted display device, and head-mounted display device
CN110998491A (en) * 2017-08-02 2020-04-10 微软技术许可有限责任公司 Transitioning into a VR environment and alerting HMD users of real-world physical obstacles
CN107422942A (en) * 2017-08-15 2017-12-01 吴金河 Control system and method for an immersive experience
CN111448568A (en) * 2017-09-29 2020-07-24 苹果公司 Context-based application demonstration
CN111448568B (en) * 2017-09-29 2023-11-14 苹果公司 Environment-based application presentation
CN111201474A (en) * 2017-10-12 2020-05-26 奥迪股份公司 Method for operating a head-wearable electronic display device and display system for displaying virtual content
US11364441B2 (en) 2017-10-12 2022-06-21 Audi Ag Method for operating an electronic display device wearable on the head and display system for displaying virtual content
CN108169901A (en) * 2017-12-27 2018-06-15 北京传嘉科技有限公司 VR glasses
CN108040247A (en) * 2017-12-29 2018-05-15 湖南航天捷诚电子装备有限责任公司 Head-mounted augmented reality display device and method
CN108572723A (en) * 2018-02-02 2018-09-25 陈尚语 Carsickness prevention method and apparatus
CN108572723B (en) * 2018-02-02 2021-01-29 陈尚语 Carsickness prevention method and equipment
CN108519676A (en) * 2018-04-09 2018-09-11 杭州瑞杰珑科技有限公司 Head-mounted vision-assistance device
CN108764152B (en) * 2018-05-29 2020-12-04 北京物灵智能科技有限公司 Method and device for realizing interactive prompt based on picture matching and storage equipment
CN108764152A (en) * 2018-05-29 2018-11-06 北京物灵智能科技有限公司 Method, apparatus, and storage device for realizing interactive prompts based on picture matching
CN108922115A (en) * 2018-06-26 2018-11-30 联想(北京)有限公司 Information processing method and electronic device
US11030821B2 (en) 2018-09-12 2021-06-08 Alpha Code Inc. Image display control apparatus and image display control program
CN111801725A (en) * 2018-09-12 2020-10-20 株式会社阿尔法代码 Image display control device and image display control program
CN111338466B (en) * 2018-12-19 2023-08-04 西门子医疗有限公司 Method and device for controlling virtual reality display unit
CN111338466A (en) * 2018-12-19 2020-06-26 西门子医疗有限公司 Method and apparatus for controlling virtual reality display unit
CN111831106A (en) * 2019-04-15 2020-10-27 未来市股份有限公司 Head-mounted display system, related method and related computer readable recording medium
CN111831105A (en) * 2019-04-15 2020-10-27 未来市股份有限公司 Head-mounted display system, related method and related computer readable recording medium
CN111831110A (en) * 2019-04-15 2020-10-27 苹果公司 Keyboard operation of head-mounted device
CN112055193A (en) * 2019-06-05 2020-12-08 联发科技股份有限公司 View synthesis method and corresponding device
US11792352B2 (en) 2019-06-05 2023-10-17 Mediatek Inc. Camera view synthesis on head-mounted display for virtual reality and augmented reality
CN110475103A (en) * 2019-09-05 2019-11-19 上海临奇智能科技有限公司 Head-mounted visual device
CN111427447A (en) * 2020-03-04 2020-07-17 青岛小鸟看看科技有限公司 Display method of virtual keyboard, head-mounted display equipment and system
CN111427447B (en) * 2020-03-04 2023-08-29 青岛小鸟看看科技有限公司 Virtual keyboard display method, head-mounted display device and system
CN112581054B (en) * 2020-12-09 2023-08-29 珠海格力电器股份有限公司 Material management method and material management device
CN112581054A (en) * 2020-12-09 2021-03-30 珠海格力电器股份有限公司 Material management method and material management device
WO2023130435A1 (en) * 2022-01-10 2023-07-13 深圳市闪至科技有限公司 Interaction method, head-mounted display device, and system and storage medium
CN114972692A (en) * 2022-05-12 2022-08-30 北京领为军融科技有限公司 Target positioning method based on AI recognition and mixed reality

Also Published As

Publication number Publication date
EP3281058A4 (en) 2018-04-11
CN106484085B (en) 2019-07-23
CN110275619A (en) 2019-09-24
KR20170026164A (en) 2017-03-08
EP3281058A1 (en) 2018-02-14

Similar Documents

Publication Publication Date Title
CN106484085A (en) Method for displaying a real-world object in a head-mounted display, and head-mounted display thereof
US20170061696A1 (en) Virtual reality display apparatus and display method thereof
US10474336B2 (en) Providing a user experience with virtual reality content and user-selected, real world objects
US10356398B2 (en) Method for capturing virtual space and electronic device using the same
CN104081317B (en) Information processing equipment and information processing method
US10776618B2 (en) Mobile terminal and control method therefor
KR20230066626A (en) Tracking of Hand Gestures for Interactive Game Control in Augmented Reality
US11151796B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
CN105981076B (en) Construction of a synthesized augmented reality environment
US20200258314A1 (en) Information processing device, information processing method, and recording medium
US11423627B2 (en) Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements
CN106134186A (en) Telepresence experience
CN106104650A (en) Remote device control via gaze detection
CN107390863A (en) Device control method and apparatus, electronic device, and storage medium
CN109997098A (en) Device, associated method and associated computer-readable medium
CN110196640A (en) Operation control method and terminal
CN106575152A (en) Alignable user interface
US20220398816A1 (en) Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
CN111630478A (en) High-speed staggered binocular tracking system
US20220084303A1 (en) Augmented reality eyewear with 3d costumes
CN108462729A (en) Method and apparatus, terminal device, and server for realizing terminal device interaction
CN105894571B (en) Method and device for processing multimedia information
WO2017104089A1 (en) Collaborative head-mounted display system, system including display device and head-mounted display, and display device
WO2023064719A1 (en) User interactions with remote devices
US20230386147A1 (en) Systems and Methods for Providing Real-Time Composite Video from Multiple Source Devices Featuring Augmented Reality Elements

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant