CN106971426A - Method for fusing virtual reality with a real scene - Google Patents

Method for fusing virtual reality with a real scene Download PDF

Info

Publication number
CN106971426A
CN106971426A (application CN201710242248.2A)
Authority
CN
China
Prior art keywords
image
virtual reality
scene
real
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710242248.2A
Other languages
Chinese (zh)
Inventor
陈柳华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201710242248.2A
Publication of CN106971426A
Legal status: Pending

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention provides a method for fusing virtual reality with a real scene: image information inside a virtual reality device is obtained and a virtual reality scene is generated; real target scene information captured by a 3D camera is obtained; a fused scene is generated inside the virtual reality device from the real target scene information and the virtual reality scene; and the fused scene is presented. The method allows a real scene to be incorporated while virtual reality is in use, achieving a blend of the virtual and the real, promoting human-computer interaction, and improving the user experience.

Description

Method for fusing virtual reality with a real scene
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a method for fusing virtual reality with a real scene.
Background technology
Virtual reality (VR) technology mainly uses computer graphics systems together with various display and control interface devices to generate an interactive, immersive three-dimensional environment on a computer.
Augmented reality (AR) technology "seamlessly" integrates real-world information with virtual-world information. Entity information that is ordinarily hard to experience within a certain time and space of the real world (visual information, sound, taste, touch, and so on) is simulated by computers and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, producing a sensory experience beyond reality. The real environment and virtual objects are superimposed in the same picture or space in real time. Augmented reality not only presents information from the real world but also displays virtual information at the same time; the two kinds of information complement and overlay each other. In visual augmented reality, a user wears a head-mounted display through which the real world and computer graphics are composited, so the real world appears to surround the user.
Head-mounted displays in the prior art include, for example, Oculus-like products that let users experience VR effects, and Google Glass-like products that let users experience AR effects.
In the course of making the present invention, the inventor found that existing VR helmets can display virtual scenes, characters, and so on, but these are designed in advance or rendered by special algorithms; they do not incorporate the scene in which the VR helmet is actually being used, so interaction with the real environment is missing. Existing AR glasses, on the other hand, can show the user the real environment in front of them and analyze the image to provide prompt messages, but they cannot deliver the pleasure of a lifelike virtual scene; that is, AR struggles to be combined with virtual reality.
Summary of the invention
In view of this, it is necessary to provide a method for fusing virtual reality with a real scene, so that a real scene can be incorporated during virtual reality use, achieving a blend of the virtual and the real, promoting human-computer interaction, and improving the user experience.
A method for fusing virtual reality with a real scene includes:
obtaining image information inside a virtual reality device and generating a virtual reality scene;
obtaining real target scene information captured by a 3D camera;
generating a fused scene inside the virtual reality device from the real target scene information and the virtual reality scene; and
presenting the fused scene.
In one embodiment, obtaining the image information inside the virtual reality device and generating the virtual reality scene includes:
reading, analyzing, and recognizing the image inside the virtual reality device, and generating different virtual reality scenes from the recognition result.
In one embodiment, reading, analyzing, and recognizing the image inside the virtual reality device and generating different virtual reality scenes from the recognition result includes:
reading the image inside the virtual reality device;
performing data analysis on the read image to obtain feature points of the image;
comparing the obtained image feature points with images in a database to obtain a recognition result; and
generating different virtual reality scenes from the recognition result.
In one embodiment, obtaining the real target scene information captured by the 3D camera includes:
tracking changes in the line of sight of the human eye;
adjusting the direction of the 3D camera according to the change in the line of sight, so that the direction of the 3D camera is consistent with the line-of-sight direction after the change; and
obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
In one embodiment, generating the fused scene inside the virtual reality device from the real target scene information and the virtual reality scene includes:
assigning an initial velocity vector to each pixel in the image to form an image motion field;
dynamically analyzing the image according to the velocity vector feature of each pixel;
judging whether there is a moving object in the image: if there is none, the optical flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background move relative to each other, and the velocity vectors formed by the moving object necessarily differ from those of the neighboring background, so the moving object and its position can be detected;
obtaining the new positions of the image feature points;
calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera; and
applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
The above embodiments provide a method for fusing virtual reality with a real scene: image information inside a virtual reality device is obtained and a virtual reality scene is generated; real target scene information captured by a 3D camera is obtained; a fused scene is generated inside the virtual reality device from the real target scene information and the virtual reality scene; and the fused scene is presented. A real scene can thus be incorporated during virtual reality use, achieving a blend of the virtual and the real, promoting human-computer interaction, and improving the user experience.
Brief description of the drawings
Fig. 1 is a flowchart of a method for fusing virtual reality with a real scene in one embodiment;
Fig. 2 is a detailed flowchart of step S10 in Fig. 1;
Fig. 3 is a detailed flowchart of step S20 in Fig. 1.
Detailed description of the embodiments
To make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and do not limit it.
Unless the context clearly dictates otherwise, the elements and components described in the present invention may exist in either singular or plural form, and the present invention is not limited in this respect. Although the steps of the present invention are labeled, the labels are not intended to limit their order; unless the order of steps is expressly stated, or the execution of a step depends on other steps, the relative order of the steps is adjustable. The term "and/or" used herein covers any and all possible combinations of one or more of the associated listed items.
It should be noted that the real scene information includes ambient environment information captured in real time by the 3D camera. For example, left and right cameras each capture a real scene image sequence according to the line-of-sight direction of the user's left and right eyes respectively. At a moment t, one image is taken from the sequence provided by the left camera as the left image, and one image is taken from the sequence provided by the right camera as the right image; the left image simulates what the user's left eye sees, and the right image simulates what the user's right eye sees. The virtual reality scene information includes image information of a virtual reality model, such as the left and right views of the virtual reality scene model.
In the embodiments of the present invention, an augmented reality scene is a scene in which real scene information is presented using augmented reality technology, and a virtual reality scene is a scene in which virtual reality scene information is presented using virtual reality technology.
In the embodiments of the present invention, the virtual reality device may be a smart wearable device, which may include a head-mounted smart device with AR and VR functions, such as smart glasses or a helmet.
In one embodiment, as shown in Fig. 1, a method for fusing virtual reality with a real scene includes:
S10: obtaining image information inside the virtual reality device and generating a virtual reality scene.
Specifically, the virtual reality device reads, analyzes, and recognizes the image inside it, and generates different virtual reality scenes from the recognition result.
In one embodiment, as shown in Fig. 2, step S10 includes:
S101: reading the image inside the virtual reality device;
S102: performing data analysis on the read image to obtain feature points of the image;
S103: comparing the obtained image feature points with images in a database to obtain a recognition result;
S104: generating different virtual reality scenes from the recognition result.
In practical applications, after system startup and initialization, the system reads a specified image stored in the virtual reality device through an image reading unit. The image files stored in the virtual reality device are photos taken by the user or pictures obtained through other channels; these photos and pictures are stored in the image database of the virtual reality device as the source from which images are subsequently selected.
When analyzing an image, the resolution of the image file may first be unified by forcing it down to a lower resolution, for example 320*240. After the resolution adjustment, the image file is format-converted: its color format is converted to grayscale. Corner features are then analyzed at points where the brightness of the two-dimensional image changes sharply, or at points of maximum curvature on image edge curves, and these corner features are taken as the image feature points.
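The preprocessing and corner extraction described above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: the luminance weights and the Harris-style corner score are assumptions standing in for the brightness-change/curvature criterion in the text.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale (standard luminance weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def harris_corners(gray, k=0.04, threshold=0.01):
    """Return (row, col) positions where a Harris-style corner response is large.

    Points of sharp two-dimensional brightness change score highly, matching
    the corner-feature criterion described in the text.
    """
    gy, gx = np.gradient(gray.astype(float))

    def smooth(a):
        # 3x3 mean filter over the structure-tensor components.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    ixx, iyy, ixy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    response = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
    return np.argwhere(response > threshold * response.max())

# A synthetic 32x32 image with one bright square: its corners respond strongly.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
corners = harris_corners(img)
```

The detected points would then serve as the feature points compared against the database in step S103.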
Local random binary features are then used to compute descriptor information for the feature points obtained above and for the images in the database. The correspondence between the two images is judged from the description information of each corner: mismatched outliers are removed and correctly matched inliers are retained. When the number of correctly matched feature points retained exceeds a set threshold, recognition is judged successful and the next step is entered; if recognition fails, the above steps are repeated on the picture in a loop until recognition succeeds.
The recognition result of the above steps yields an identified target number; the corresponding virtual content is retrieved from the database by this number, and the virtual reality scene is generated.
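The match-and-threshold logic can be sketched with toy binary descriptors and Hamming distances. The descriptor length (256 bits), the distance cutoff, and the minimum match count are assumed values for illustration; the patent specifies only "local random binary features" and a threshold on the number of correct matches.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return int(np.count_nonzero(a != b))

def match_descriptors(query, database, max_dist=10):
    """Greedily match query descriptors to database descriptors.

    A pair is kept (an inlier) only if its Hamming distance is below
    max_dist; everything else is discarded as a mismatched outlier.
    """
    matches = []
    for qi, q in enumerate(query):
        dists = [hamming(q, d) for d in database]
        best = int(np.argmin(dists))
        if dists[best] <= max_dist:
            matches.append((qi, best))
    return matches

def recognized(matches, min_matches=3):
    """Recognition succeeds when enough correct matches survive."""
    return len(matches) >= min_matches

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(5, 256))   # 5 stored 256-bit descriptors
noisy = db.copy()
noisy[:, :4] ^= 1                        # flip 4 bits: still close matches
matches = match_descriptors(noisy, db, max_dist=10)
```

When `recognized(matches)` is true, the matched target's number would index the virtual content in the database, as the paragraph above describes.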
S20: obtaining real target scene information captured by the 3D camera.
In one embodiment, as shown in Fig. 3, step S20 includes:
S201: tracking changes in the line of sight of the human eye;
S202: adjusting the direction of the 3D camera according to the change in the line of sight, so that the direction of the 3D camera is consistent with the line-of-sight direction after the change;
S203: obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
In the embodiments of the present invention, obtaining the real target scene information captured by the 3D camera specifically includes: tracking the change in the line of sight of the human eye; adjusting the direction of the dual cameras of the 3D camera according to that change, so that the direction of the dual cameras is consistent with the line-of-sight direction after the change; and obtaining the real scene information collected in real time by the dual cameras in the adjusted direction. For the dual cameras to capture real scene information the way human eyes would, the cameras must collect the scene along the direction of the human line of sight. To obtain the change in the line of sight, an eye tracking module can be installed inside the VR helmet. To let the two cameras better simulate what the two eyes see, the processor of the smart wearable device, such as the VR helmet, adjusts the viewing angles of the left and right cameras separately according to the gaze change parameters of the two eyes. The dual cameras then collect pictures in real time and present them to the left and right eyes respectively, reproducing the viewing effect of the human eye. Specifically, existing eye tracking technology can be used: for example, tracking based on feature changes of the eyeball and its periphery, tracking based on changes of the iris angle, or actively projecting beams such as infrared light onto the iris and extracting features, in order to track the change in the line of sight and thereby determine where the eye is looking. Of course, the embodiments of the present invention are not limited to these; under the technical concept of the present invention, those skilled in the art can use any feasible technology to track the change in the line of sight of the human eye and then adjust the collection direction of the left and right eye-simulating cameras to collect real scene information in real time.
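The camera-direction adjustment reduces to converting a tracked gaze vector into pan/tilt angles for each camera. A minimal geometric sketch, assuming an x-right, y-up, z-forward coordinate convention (the patent does not specify one):

```python
import math

def gaze_to_pan_tilt(gaze):
    """Convert a 3D gaze direction (x right, y up, z forward) into the
    pan (yaw) and tilt (pitch) angles, in degrees, that a camera gimbal
    would need so its optical axis follows the line of sight."""
    x, y, z = gaze
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    pan = math.degrees(math.atan2(x, z))   # left/right rotation
    tilt = math.degrees(math.asin(y))      # up/down rotation
    return pan, tilt

def update_cameras(left_gaze, right_gaze):
    """Adjust the left and right cameras independently, as the text
    describes, so each follows its own eye's line of sight."""
    return {"left": gaze_to_pan_tilt(left_gaze),
            "right": gaze_to_pan_tilt(right_gaze)}
```

For example, a gaze vector of (1, 0, 1) for the right eye would pan that camera 45 degrees to the right while leaving its tilt at zero.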
S30: generating a fused scene inside the virtual reality device from the real target scene information and the virtual reality scene.
In one embodiment of the present invention, generating the fused scene inside the virtual reality device from the real target scene information and the virtual reality scene may specifically include:
superimposing the left image shot by the left camera with the left view of the virtual scene to synthesize the left image of the fused scene;
superimposing the right image shot by the right camera with the right view of the virtual scene to synthesize the right image of the fused scene; and
generating the fused scene from the left and right images of the fused scene.
Specifically, when superimposing the virtual scene information on the real scene information, for example superimposing virtual model information onto the real scene, the left and right cameras must provide real-time image sequences of the real scene. At a moment t, one image is taken from the sequence provided by the left camera as the left image, and one image is taken from the sequence provided by the right camera as the right image. The left image simulates what the left eye sees, and the right image what the right eye sees. The real-time image sequences from the left and right cameras can be obtained in several ways: one is to acquire images with the SDK (Software Development Kit) provided by the camera manufacturer; another is to read images from the cameras with common open-source tools such as OpenCV. To obtain the hierarchical relationship of the real scene, the disparity can be calculated, and the hierarchy of disparity then represents the hierarchy of the scene. The disparity between the left and right images can be computed with any disparity calculation method, such as BM, graph cuts, or ADCensus. With disparity, the hierarchical information of the scene is known; this is called the depth-of-field information of the scene, and it can be used to guide the merging of the virtual model with the real scene so that the virtual model is placed into the real scene more reasonably. The specific rule is that the minimum disparity of the virtual model in the left and right images must be larger than the maximum disparity of the region the virtual model covers in those images, and the disparity information should be median-smoothed before use. The virtual model is added separately to the left and right images: if the minimum disparity of the virtual model in the left and right images is d, then d must exceed the maximum disparity of the region the virtual model covers. The left view corresponding to the virtual model is superimposed on the left image, the right view on the right image, and the fused scene can then be generated.
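The depth-ordering rule above — the virtual model's minimum disparity d must exceed the maximum disparity of the region it covers, after median smoothing — can be sketched as follows. The disparity map here is synthetic; in practice it would come from a method such as BM, as the text notes.

```python
import numpy as np

def median_smooth(disparity, k=3):
    """Median-filter a disparity map with a k x k window (k odd)."""
    r = k // 2
    p = np.pad(disparity, r, mode="edge")
    windows = np.stack([p[i:i + disparity.shape[0], j:j + disparity.shape[1]]
                        for i in range(k) for j in range(k)])
    return np.median(windows, axis=0)

def placement_ok(disparity, region, model_min_disparity):
    """Check the rule: the model's minimum disparity d must be larger
    than the maximum (smoothed) disparity inside the covered region,
    so the model appears in front of everything it occludes."""
    r0, r1, c0, c1 = region
    covered = median_smooth(disparity)[r0:r1, c0:c1]
    return bool(model_min_disparity > covered.max())

# Synthetic scene: background disparity 5, a nearer object with disparity 20.
disp = np.full((40, 40), 5.0)
disp[10:20, 10:20] = 20.0
```

Placing a model with minimum disparity 8 over the plain background is acceptable, but placing it over the nearer object (disparity 20) would violate the rule and the model would have to be moved or given a larger disparity.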
In another embodiment of the present invention, generating the fused scene inside the virtual reality device from the real target scene information and the virtual reality scene may specifically include:
assigning an initial velocity vector to each pixel in the image to form an image motion field;
dynamically analyzing the image according to the velocity vector feature of each pixel;
judging whether there is a moving object in the image: if there is none, the optical flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background move relative to each other, and the velocity vectors formed by the moving object necessarily differ from those of the neighboring background, so the moving object and its position can be detected;
obtaining the new positions of the image feature points;
calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera; and
applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
Specifically, each pixel in the image is assigned an initial velocity vector, forming a scene image motion field. At a particular moment of operation, each point on the image corresponds one-to-one with a point on the three-dimensional object; this correspondence can be obtained from the projection relation. The vector feature of each pixel is read, and the image is dynamically analyzed to judge whether there is a moving object. If no object in the image is moving, the optical flow vectors vary continuously over the whole image region; if there is a moving object, the target and the image background move relative to each other, and the velocity vectors formed by the moving object necessarily differ from those of the neighboring background, so the moving object and its position are detected, and the new positions of the scene image feature points are obtained.
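The motion-field analysis can be sketched on a toy dense flow field: pixels whose velocity vectors deviate from the dominant (background) motion are flagged as the moving object. The median-as-background rule and the tolerance are illustrative assumptions; a real system would first estimate optical flow from consecutive frames.

```python
import numpy as np

def detect_moving_region(flow, tol=0.5):
    """Given a dense flow field of shape (H, W, 2) — one velocity vector
    per pixel — find pixels whose motion deviates from the background.

    The background velocity is taken as the per-component median; pixels
    whose vectors differ from it by more than tol are declared moving,
    matching the rule that a moving object's velocity vectors must differ
    from those of the neighboring background.
    """
    background = np.median(flow.reshape(-1, 2), axis=0)
    deviation = np.linalg.norm(flow - background, axis=2)
    mask = deviation > tol
    if not mask.any():
        return None                # flow varies continuously: no moving object
    rows, cols = np.nonzero(mask)
    # Bounding box of the detected moving object.
    return (rows.min(), rows.max(), cols.min(), cols.max())

# Static background (zero flow) with one patch translating right by 3 px.
flow = np.zeros((30, 30, 2))
flow[12:18, 12:18, 0] = 3.0
box = detect_moving_region(flow)
```

The bounding box localizes the moving object, after which the feature points inside it supply the new positions used in the next step.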
After the still image has been turned into virtual content and the dynamic real scene has been prepared, the recognized virtual content is placed at the tracked feature-point positions in the camera device's space, fusing the virtual content with the real scene. Then, from the new and original positions of the scene image feature points obtained in the above steps and the physical parameters of the camera, the translation, rotation, and scaling vectors of objects in three-dimensional image space are calculated; applying these calculated vectors to the activated virtual content completes the fusion of the virtual content with the real scene.
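Recovering translation, rotation, and scale from the old and new feature-point positions can be illustrated in 2D with a least-squares similarity fit (Umeyama's method). This is a sketch of the idea only; the patent computes the corresponding 3D vectors using the camera's physical parameters.

```python
import numpy as np

def fit_similarity(old_pts, new_pts):
    """Least-squares 2D similarity transform (scale s, rotation R,
    translation t) mapping old_pts onto new_pts: new ≈ s * R @ old + t."""
    old = np.asarray(old_pts, float)
    new = np.asarray(new_pts, float)
    mu_o, mu_n = old.mean(axis=0), new.mean(axis=0)
    oc, nc = old - mu_o, new - mu_n
    cov = nc.T @ oc / len(old)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / oc.var(axis=0).sum()
    t = mu_n - scale * R @ mu_o
    return scale, R, t

# Feature points rotated 90 degrees, scaled by 2, shifted by (1, -1).
old = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
new = 2.0 * old @ R_true.T + np.array([1.0, -1.0])
scale, R, t = fit_similarity(old, new)
```

The recovered scale, rotation, and translation are exactly the vectors that would then be applied to the virtual content so it follows the tracked scene.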
In this embodiment, a single picture can serve as the input source: the picture is recognized and activates the virtual content. At the same time, scene feature tracking technology places the virtual content in the user's real environment, achieving the effect of augmented reality, removing the limitation that virtual content can only be activated by special feature images, and promoting the development of the industry.
S40: presenting the fused scene.
In one embodiment of the present invention, the left image superimposed with the left view of the virtual model and the right image superimposed with the right view of the virtual model are synthesized and sent together to the display, shown respectively in its left half and right half, and the fused scene is thereby presented. The user then watches with the left and right eyes respectively and experiences a good fusion of the real scene with the virtual model.
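The presentation step can be sketched as overlaying the virtual view on each camera image and compositing the two results into the left and right halves of one display frame. The hard overlay mask and the tiny resolutions are illustrative assumptions.

```python
import numpy as np

def overlay(base, layer, mask):
    """Superimpose a virtual-view layer on a camera image where mask is set."""
    out = base.copy()
    out[mask] = layer[mask]
    return out

def side_by_side(left_fused, right_fused):
    """Place the fused left image in the left half of the display frame
    and the fused right image in the right half, as the text describes."""
    assert left_fused.shape == right_fused.shape
    return np.concatenate([left_fused, right_fused], axis=1)

h, w = 4, 6
left_cam = np.zeros((h, w, 3))
right_cam = np.zeros((h, w, 3))
virt = np.ones((h, w, 3))
mask = np.zeros((h, w), dtype=bool)
mask[1:3, 2:4] = True                 # region covered by the virtual model
frame = side_by_side(overlay(left_cam, virt, mask),
                     overlay(right_cam, virt, mask))
```

Each half of `frame` is then shown to the corresponding eye by the head-mounted display's optics.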
In the embodiments of the present invention, in addition to fusing real scene information with virtual scene information to generate a fused scene, an augmented reality scene can be generated from the real scene information collected by the dual cameras of the 3D camera, or a virtual reality scene can be generated from the virtual reality scene information. That is, the embodiments can also provide AR functions or VR functions on their own; those skilled in the art can implement these in combination with the embodiments of the present invention, and they are not repeated here.
The above embodiments provide a method for fusing virtual reality with a real scene: image information inside a virtual reality device is obtained and a virtual reality scene is generated; real target scene information captured by a 3D camera is obtained; a fused scene is generated inside the virtual reality device from the real target scene information and the virtual reality scene; and the fused scene is presented. A real scene can thus be incorporated during virtual reality use, achieving a blend of the virtual and the real, promoting human-computer interaction, and improving the user experience.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (5)

1. A method for fusing virtual reality with a real scene, characterized by comprising:
obtaining image information inside a virtual reality device and generating a virtual reality scene;
obtaining real target scene information captured by a 3D camera;
generating a fused scene inside the virtual reality device from the real target scene information and the virtual reality scene; and
presenting the fused scene.
2. The method according to claim 1, characterized in that obtaining the image information inside the virtual reality device and generating the virtual reality scene comprises:
reading, analyzing, and recognizing the image inside the virtual reality device, and generating different virtual reality scenes from the recognition result.
3. The method according to claim 2, characterized in that reading, analyzing, and recognizing the image inside the virtual reality device and generating different virtual reality scenes from the recognition result comprises:
reading the image inside the virtual reality device;
performing data analysis on the read image to obtain feature points of the image;
comparing the obtained image feature points with images in a database to obtain a recognition result; and
generating different virtual reality scenes from the recognition result.
4. The method according to claim 1, characterized in that obtaining the real target scene information captured by the 3D camera comprises:
tracking changes in the line of sight of the human eye;
adjusting the direction of the 3D camera according to the change in the line of sight, so that the direction of the 3D camera is consistent with the line-of-sight direction after the change; and
obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
5. The method according to claim 4, characterized in that generating the fused scene inside the virtual reality device from the real target scene information and the virtual reality scene comprises:
assigning an initial velocity vector to each pixel in the image to form an image motion field;
dynamically analyzing the image according to the velocity vector feature of each pixel;
judging whether there is a moving object in the image: if there is none, the optical flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background move relative to each other, and the velocity vectors formed by the moving object necessarily differ from those of the neighboring background, so the moving object and its position can be detected;
obtaining the new positions of the image feature points;
calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera; and
applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
CN201710242248.2A 2017-04-14 2017-04-14 Method for fusing virtual reality with a real scene Pending CN106971426A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710242248.2A CN106971426A (en) 2017-04-14 2017-04-14 Method for fusing virtual reality with a real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710242248.2A CN106971426A (en) 2017-04-14 2017-04-14 Method for fusing virtual reality with a real scene

Publications (1)

Publication Number Publication Date
CN106971426A true CN106971426A (en) 2017-07-21

Family

ID=59332923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710242248.2A Pending CN106971426A (en) 2017-04-14 2017-04-14 Method for fusing virtual reality with a real scene

Country Status (1)

Country Link
CN (1) CN106971426A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995481A (en) * 2017-11-30 2018-05-04 贵州颐爱科技有限公司 Mixed reality display method and device
CN108492374A (en) * 2018-01-30 2018-09-04 青岛中兴智能交通有限公司 Application method and device of AR in traffic guidance
CN110755083A (en) * 2019-10-21 2020-02-07 广东省人民医院(广东省医学科学院) Rehabilitation training method and motion evaluation equipment based on virtual reality
CN111862866A (en) * 2020-07-09 2020-10-30 北京市商汤科技开发有限公司 Image display method, device, equipment and computer readable storage medium
CN115212565A (en) * 2022-08-02 2022-10-21 领悦数字信息技术有限公司 Method, apparatus, and medium for setting virtual environment in virtual scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130005899A (en) * 2011-07-07 2013-01-16 박태훈 Fourth dimension virtual reality system
CN104156998A (en) * 2014-08-08 2014-11-19 深圳中科呼图信息技术有限公司 Implementation method and system based on fusion of virtual image contents and real scene
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995481A (en) * 2017-11-30 2018-05-04 贵州颐爱科技有限公司 Display method and device for mixed reality
CN107995481B (en) * 2017-11-30 2019-11-15 贵州颐爱科技有限公司 Display method and device for mixed reality
CN108492374A (en) * 2018-01-30 2018-09-04 青岛中兴智能交通有限公司 Application method and device of AR (augmented reality) in traffic guidance
CN108492374B (en) * 2018-01-30 2022-05-27 青岛中兴智能交通有限公司 Application method and device of AR (augmented reality) in traffic guidance
CN110755083A (en) * 2019-10-21 2020-02-07 广东省人民医院(广东省医学科学院) Rehabilitation training method and motion evaluation equipment based on virtual reality
CN111862866A (en) * 2020-07-09 2020-10-30 北京市商汤科技开发有限公司 Image display method, device, equipment and computer readable storage medium
CN115212565A (en) * 2022-08-02 2022-10-21 领悦数字信息技术有限公司 Method, apparatus, and medium for setting virtual environment in virtual scene
CN115212565B (en) * 2022-08-02 2024-03-26 领悦数字信息技术有限公司 Method, apparatus and medium for setting virtual environment in virtual scene

Similar Documents

Publication Publication Date Title
CN106997618A (en) A method for fusing virtual reality with a real scene
CN106896925A (en) A device for fusing virtual reality with a real scene
JP6644833B2 (en) System and method for rendering augmented reality content with albedo model
US11693242B2 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
US11776222B2 (en) Method for detecting objects and localizing a mobile computing device within an augmented reality experience
US20210344891A1 (en) System and method for generating combined embedded multi-view interactive digital media representations
CN109615703B (en) Augmented reality image display method, device and equipment
US10269177B2 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
CN102959616B (en) Interactive reality augmentation for natural interaction
CN106971426A (en) A method for fusing virtual reality with a real scene
JP4473754B2 (en) Virtual fitting device
CN109146965A (en) Information processing apparatus and computer program
CN108369653A (en) Use the eyes gesture recognition of eye feature
KR20170031733A (en) Technologies for adjusting a perspective of a captured image for display
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN107016730A (en) A device for fusing virtual reality with a real scene
CN106981100A (en) A device for fusing virtual reality with a real scene
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
US20200211275A1 (en) Information processing device, information processing method, and recording medium
KR101189043B1 (en) Service and method for video call, server and terminal thereof
CN107016729A (en) A method for fusing virtual reality with a real scene
Jian et al. Realistic face animation generation from videos
Li et al. A low-cost head and eye tracking system for realistic eye movements in virtual avatars
US11941171B1 (en) Eye gaze tracking method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2017-07-21