CN107016730A - Device for merging virtual reality with a real scene - Google Patents

Device for merging virtual reality with a real scene

Info

Publication number
CN107016730A
CN107016730A
Authority
CN
China
Prior art keywords
image
virtual reality
unit
real
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710242301.9A
Other languages
Chinese (zh)
Inventor
陈柳华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710242301.9A priority Critical patent/CN107016730A/en
Publication of CN107016730A publication Critical patent/CN107016730A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements

Abstract

The invention provides a device for merging virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; a capture module for obtaining real target scene information collected by a 3D camera; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated while virtual reality is in use, achieving the effect of merging the virtual with the real, which can promote human-computer interaction and improve the user experience.

Description

Device for merging virtual reality with a real scene
Technical field
The present invention relates to the technical field of virtual reality, and more particularly to a device for merging virtual reality with a real scene.
Background technology
Virtual reality (Virtual Reality, hereinafter VR) technology mainly uses computer graphics systems together with various display and control interface devices to generate an interactive, immersive three-dimensional environment on a computer.
Augmented reality (Augmented Reality, hereinafter AR) technology is a new technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that is ordinarily difficult to experience within a certain span of time and space in the real world (visual information, sound, taste, touch, etc.) is simulated by computers and related technologies and then superimposed, applying virtual information to the real world where it is perceived by the human senses, thereby achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed into the same picture or space in real time and exist simultaneously. Augmented reality presents not only the information of the real world but also virtual information at the same time; the two kinds of information complement and superimpose each other. In visual augmented reality, a user employs a head-mounted display through which the real world and computer graphics are composited, and can then see the real world surrounding it.
Among head-mounted displays in the prior art, a product similar to Oculus, for example, lets users experience VR effects, while a product similar to Google Glass lets users experience AR effects.
The inventor found, in the course of realizing the embodiments of the present invention, that existing VR helmets can display virtual scenes, characters, and so on, but these virtual scenes and characters are designed in advance or rendered by special algorithms; they do not incorporate the scene in which the user is actually wearing the VR helmet and lack interaction with the real environment. Existing AR glasses let the user see the real environment in front of them and can analyze the image to provide some prompt information, but they cannot deliver the pleasure brought by a lifelike virtual scene; that is, AR struggles to combine with virtual reality.
Summary of the invention
In view of this, it is necessary to provide a device for merging virtual reality with a real scene, so that a real scene can be incorporated while virtual reality is in use, achieving the effect of merging the virtual with the real, promoting human-computer interaction and improving the user experience.
A device for merging virtual reality with a real scene, comprising:
an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene;
a capture module for obtaining real target scene information collected by a 3D camera; and
a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the acquisition module is specifically configured to:
read, analyze, and recognize an image inside the virtual reality device, and generate different virtual reality scenes using the recognition result.
In one of the embodiments, the acquisition module includes:
a reading unit for reading an image inside the virtual reality device;
an analysis unit for performing data analysis on the read image to obtain its feature points;
a comparison unit for comparing the obtained image feature points with images in a database to obtain a recognition result; and
a generation unit for generating different virtual reality scenes using the recognition result.
In one of the embodiments, the capture module includes:
a tracking unit for tracking changes in the line of sight of the human eye;
an adjustment unit for adjusting the direction of the 3D camera according to the change in the line of sight of the human eye, so that the direction of the 3D camera is consistent with the gaze direction after the change; and
a collection unit for obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
In one of the embodiments, the fusion module includes:
an initial-velocity assignment unit for assigning an initial velocity vector to each pixel in the image to form an image motion field;
a dynamic analysis unit for dynamically analyzing the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, there is relative motion between the real target scene and the image background, and the velocity vectors formed by the moving object necessarily differ from those of the neighborhood background, so that the moving object and its position can be detected;
an image-position acquisition unit for obtaining the new positions of the image feature points;
a computation unit for calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new positions and the original positions of the obtained image feature points, based on the physical parameters of the 3D camera; and
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
The embodiments above provide a device for merging virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; a capture module for obtaining real target scene information collected by a 3D camera; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated while virtual reality is in use, achieving the effect of merging the virtual with the real, which can promote human-computer interaction and improve the user experience.
Brief description of the drawings
Fig. 1 is a functional module diagram of a device for merging virtual reality with a real scene in one embodiment;
Fig. 2 is a functional block diagram of the acquisition module in Fig. 1;
Fig. 3 is a functional block diagram of the capture module in Fig. 1.
Detailed description of the embodiments
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Unless the context clearly indicates otherwise, elements and components in the description of the present invention may exist in either singular or plural form, and the present invention is not limited in this respect. Although the steps in the present invention are arranged with labels, this is not intended to limit their order; unless the order of steps is expressly stated, or the execution of a certain step requires other steps as a basis, the relative order of the steps is adjustable. It should be understood that the term "and/or" as used herein relates to and covers any and all possible combinations of one or more of the associated listed items.
It should be noted that the real scene information includes environmental information captured in real time by the 3D camera. For example, the left and right cameras capture real-scene image sequences in real time according to the gaze directions of the user's left and right eyes. At a certain moment t, one image can be taken from the image sequence provided by the left camera as the left image, and one from the image sequence provided by the right camera as the right image; the left image simulates what the user's left eye sees, and the right image simulates what the user's right eye sees. The virtual reality scene information includes the image information of the virtual reality model, for example the left view and right view of the virtual reality scene model.
In embodiments of the present invention, an augmented reality scene refers to a scene in which the real scene information is presented by augmented reality technology, and a virtual reality scene refers to a scene in which the virtual reality scene information is presented by virtual reality technology.
In embodiments of the present invention, the virtual reality device may be a smart wearable device, which may include a head-mounted smart device with AR and VR functions, such as smart glasses or a helmet.
In one embodiment, as shown in Fig. 1, a device for merging virtual reality with a real scene includes:
an acquisition module 10 for obtaining image information inside a virtual reality device and generating a virtual reality scene;
a capture module 20 for obtaining real target scene information collected by a 3D camera; and
a fusion module 30 for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the acquisition module is specifically configured to:
read, analyze, and recognize an image inside the virtual reality device, and generate different virtual reality scenes using the recognition result.
In one of the embodiments, as shown in Fig. 2, the acquisition module 10 includes:
a reading unit 101 for reading an image inside the virtual reality device;
an analysis unit 102 for performing data analysis on the read image to obtain its feature points;
a comparison unit 103 for comparing the obtained image feature points with images in a database to obtain a recognition result; and
a generation unit 104 for generating different virtual reality scenes using the recognition result.
Specifically, after the system is started and initialized, it reads a specified image stored in the virtual reality device through the reading unit. The image files stored in the virtual reality device are photos taken by the user or pictures obtained by other means; these photos and pictures are stored in an image database in the virtual reality device as the source of images for subsequent selection.
The analysis unit may first unify the resolution of the image file, compressing it to a lower resolution, for example 320*240. After the resolution adjustment, the image file is format-converted, the color format of the image being converted to grayscale. The corners of the image are then analyzed using the property that the two-dimensional image brightness changes sharply at corner points, or at points of maximum curvature on image edge curves, and the detected corner features are used as the image feature points.
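The patent describes this corner-based feature extraction only in prose. The following is an illustrative sketch, not the patent's implementation: a toy Harris-style detector in plain NumPy that finds corner feature points in a grayscale image, where all function names, window sizes, and thresholds are assumptions.

```python
import numpy as np

def harris_corners(gray, k=0.04, window=3, thresh_rel=0.1):
    """Toy Harris corner detector: returns (row, col) of strong corners."""
    gy, gx = np.gradient(gray.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box(a):
        # Box-filter a gradient-product image over a small window.
        p = window // 2
        padded = np.pad(a, p, mode="edge")
        out = np.zeros_like(a)
        for dy in range(window):
            for dx in range(window):
                out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Harris response: det(M) - k * trace(M)^2; large only at corners.
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
    return np.argwhere(R > thresh_rel * R.max())

# Synthetic 40x40 grayscale image with a bright 20x20 square: its four
# corners should produce the strongest responses.
img = np.zeros((40, 40))
img[10:30, 10:30] = 255.0
pts = harris_corners(img)
print(len(pts) > 0)   # → True
```

Real systems would typically use an optimized detector (e.g. OpenCV's corner functions) rather than this loop-based sketch.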
The comparison unit may use local random binary features, computing descriptor information for the feature points obtained above and for the images in the database, and judging the correspondence between the two images from the description information of each corner. Mismatched outer points in the two pictures are removed and the correctly matched inner points are retained; when the number of retained correctly matched feature points exceeds a set threshold, recognition is judged successful and the next step is entered. If recognition is unsuccessful, the picture is processed again in a loop until recognition succeeds.
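The matching step can be sketched roughly as follows. The descriptors here are synthetic, a Hamming-distance mutual-consistency check stands in for the inner/outer-point filtering, and all sizes and thresholds are illustrative assumptions rather than the patent's values.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two bit-packed binary descriptors (uint8)."""
    return np.unpackbits(np.bitwise_xor(a, b)).sum()

def match_descriptors(desc_query, desc_db, max_dist=10):
    """Mutual nearest-neighbour matching: keep 'inner point' pairs whose
    Hamming distance is small; mismatched 'outer points' are discarded."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = [hamming(d, e) for e in desc_db]
        j = int(np.argmin(dists))
        # Mutual check: query i must also be the best match for db entry j.
        back = [hamming(desc_db[j], e) for e in desc_query]
        if int(np.argmin(back)) == i and dists[j] <= max_dist:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)  # 256-bit descriptors
query = db.copy()
query[0, 0] ^= 1            # flip one bit: still a very close match
matches = match_descriptors(query, db, max_dist=10)
recognized = len(matches) >= 15   # threshold on the number of correct matches
print(recognized)           # → True
```

In a real pipeline the descriptors would come from a binary feature extractor (e.g. ORB/BRIEF-style) and the threshold would be tuned to the database.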
The generation unit takes the target number identified by the comparison unit's recognition result, retrieves the corresponding virtual content from the database according to that number, and generates the virtual reality scene.
In one of the embodiments, as shown in Fig. 3, the capture module 20 includes:
a tracking unit 201 for tracking changes in the line of sight of the human eye;
an adjustment unit 202 for adjusting the direction of the 3D camera according to the change in the line of sight of the human eye, so that the direction of the 3D camera is consistent with the gaze direction after the change; and
a collection unit 203 for obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
In embodiments of the present invention, the capture module specifically includes a tracking unit, an adjustment unit, and a collection unit. The tracking unit tracks the change of the human eye's line of sight; the adjustment unit adjusts the direction of the two cameras of the 3D camera according to that change, so that the direction of the two cameras is consistent with the gaze direction after the change; and the collection unit obtains the real scene information collected in real time by the two cameras in the adjusted direction. For the two cameras to shoot real scene information the way human eyes see it, the cameras must collect the real scene information according to the direction of the human gaze. To obtain the change of the eye's line of sight, an eye-tracking module can be installed inside the VR helmet. To let the two cameras better simulate the scene the two eyes see, the processor of the smart wearable device, e.g. inside the VR helmet, needs to adjust the viewing angles of the left and right cameras separately according to the gaze-change parameters of both eyes. The two cameras then capture pictures in real time and present them to the left and right eyes respectively, reproducing the viewing effect of the human eye. Specifically, existing eye-tracking techniques can be used, for example tracking according to the changing features of the eyeball and its periphery, tracking according to changes of the iris angle, or actively projecting beams such as infrared light onto the iris and extracting features to determine the change of gaze. Of course, embodiments of the present invention are not limited to this; under the technical concept of the present invention, those skilled in the art may use any feasible technique to track the change of the human eye's line of sight, adjust accordingly the collection direction of the left and right cameras simulating the human eyes, and collect real scene information in real time.
In one of the embodiments, the fusion module includes:
an initial-velocity assignment unit for assigning an initial velocity vector to each pixel in the image to form an image motion field;
a dynamic analysis unit for dynamically analyzing the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, there is relative motion between the real target scene and the image background, and the velocity vectors formed by the moving object necessarily differ from those of the neighborhood background, so that the moving object and its position can be detected;
an image-position acquisition unit for obtaining the new positions of the image feature points;
a computation unit for calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new positions and the original positions of the obtained image feature points, based on the physical parameters of the 3D camera; and
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
Specifically, the initial-velocity assignment unit assigns an initial velocity vector to each pixel in the image so that it forms a scene-image motion field. At a particular moment of operation, each point in the image corresponds one-to-one with a point on the three-dimensional object, and this correspondence can be obtained from the projection relation. The dynamic analysis unit reads the vector characteristics of each pixel and dynamically analyzes the image, and the judging unit judges whether there is a moving object in the image. If no object in the image is in motion, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, there is relative motion between the target and the image background, and the velocity vectors formed by the moving object necessarily differ from those of the neighborhood background, so that the moving object and its position can be detected. The image-position acquisition unit then obtains the new positions of the scene-image feature points.
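The motion-field test above can be illustrated with a crude stand-in: instead of a full optical-flow computation, the sketch below treats per-pixel intensity change between two frames as the "velocity" and reports a moving object only where that field differs from the static background. Thresholds and sizes are illustrative assumptions.

```python
import numpy as np

def detect_moving_object(frame0, frame1, thresh=10.0):
    """Report a bounding box around pixels whose 'velocity' (here: simple
    inter-frame intensity change) differs from the static background,
    or None when the motion field is uniform (no moving object)."""
    motion = np.abs(frame1.astype(float) - frame0.astype(float)) > thresh
    if not motion.any():
        return None                       # uniform field: no moving object
    rows, cols = np.where(motion)
    return (int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max()))

# Static background plus a 5x5 bright block that shifts 3 px to the right.
f0 = np.zeros((32, 32)); f0[10:15, 10:15] = 200
f1 = np.zeros((32, 32)); f1[10:15, 13:18] = 200
box = detect_moving_object(f0, f1)
print(box)                            # → (10, 10, 14, 17)
print(detect_moving_object(f0, f0))   # → None
```

A real implementation would estimate dense flow (e.g. Farnebäck or Lucas-Kanade methods) rather than raw frame differencing, but the decision logic is the same.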
After the still image has been converted into virtual content and the dynamic real scene has been prepared, the identified virtual content is placed at the tracked feature-point positions in the camera device's space, merging the virtual content with the real scene. The computation unit calculates the translation, rotation, and scaling vectors of objects in three-dimensional image space from the new positions and the original positions of the obtained scene-image feature points, according to the physical parameters of the camera; the fusion unit applies the calculated translation, rotation, and scaling vectors to the virtual content in three-dimensional space, thereby achieving complete fusion of the virtual content with the real scene.
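One way to recover translation, rotation, and scaling from old and new feature-point positions, consistent with the description though not specified by the patent, is a least-squares similarity transform (Procrustes analysis). The sketch below works in 2-D with synthetic points; the 3-D case is analogous.

```python
import numpy as np

def estimate_motion(old_pts, new_pts):
    """Estimate the rotation matrix, uniform scale, and translation that
    map old feature-point positions onto their newly tracked positions
    (least-squares 2-D similarity transform via SVD)."""
    mu_o, mu_n = old_pts.mean(axis=0), new_pts.mean(axis=0)
    oc, nc = old_pts - mu_o, new_pts - mu_n
    H = oc.T @ nc                       # covariance of the centred sets
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (oc ** 2).sum()
    t = mu_n - scale * R @ mu_o
    return R, scale, t

# Synthetic check: rotate 30 degrees, scale 1.5, translate (2, -1).
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
old = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
new = 1.5 * old @ R_true.T + np.array([2., -1.])
R, s, t = estimate_motion(old, new)
print(np.allclose(s, 1.5), np.allclose(t, [2., -1.]))   # → True True
```

With the camera's physical parameters (focal length, principal point), the same correspondences could instead feed a full 3-D pose estimation.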
In the present embodiment, a single picture can be used as the input source to recognize the picture that triggers the virtual content; at the same time, scene-feature tracking is used to place the virtual content in the user's real environment, realizing the effect of augmented reality, removing the limitation that virtual content can only be triggered by special feature images, and promoting the development of the industry.
In another embodiment of the present invention, the fusion module may specifically include:
a first superposition unit for superimposing the left image shot by the left camera with the left view of the virtual scene to synthesize the left image of the fused scene;
a second superposition unit for superimposing the right image shot by the right camera with the right view of the virtual scene to synthesize the right image of the fused scene; and
a fusion unit for generating the fused scene according to the left image and right image of the fused scene.
Specifically, the virtual scene information is superimposed with the real scene information, for example by superimposing the virtual model information onto the real scene. This requires the left and right cameras to provide real-time image sequences of the real scene: at a certain moment t, one image can be taken from the sequence provided by the left camera as the left image, and one from the sequence provided by the right camera as the right image. The left image simulates what the left eye sees and the right image what the right eye sees. The real-time image sequences provided by the left and right cameras can be obtained in several ways: one method is image acquisition with the SDK (Software Development Kit) provided by the camera manufacturer; another common method is to read images from the camera with an open-source tool such as OpenCV. To obtain the hierarchical relationships of the real scene, parallax can be computed, and the hierarchy of the scene represented by the hierarchy of parallax. The parallax between the left and right images can be computed with any parallax calculation method, such as BM (block matching), graph cuts, or ADCensus. Once the parallax is known, so is the scene hierarchy; this hierarchy information, also called the depth-of-field information of the scene, can be used to guide the merging of the virtual model with the real scene so that the virtual model is placed into the real scene more plausibly. The specific rule is that the minimum parallax of the virtual model in the left and right images must be greater than the maximum parallax of the region the virtual model covers in the left and right images, and the parallax information needs to be median-smoothed before use. The virtual model is added to the left image and the right image separately: if the minimum parallax of the virtual model in the left and right images is d, then d must be greater than the maximum parallax of the region the virtual model covers. The left view corresponding to the virtual model is added to the left image and the right view to the right image, and the fused scene can then be generated.
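A toy version of the block-matching (BM) parallax computation mentioned above, in plain NumPy with illustrative block and search sizes; real systems would use an optimized implementation such as OpenCV's StereoBM.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """Toy SAD block matching: for each left-image pixel, search up to
    max_disp pixels leftward in the right image; the best shift is the
    disparity (larger disparity means closer to the viewer)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(1)
right = rng.random((20, 40))
left = right.copy()                  # background at disparity 0
left[:, 20:28] = right[:, 16:24]     # a foreground band shifted by 4 px
disp = block_match_disparity(left, right)
print(disp[10, 12], disp[10, 24])    # → 0 4
```

The placement rule described above then amounts to requiring the virtual model's disparity d to exceed `disp.max()` over the pixels the model covers.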
In one embodiment of the present invention, a presentation module synthesizes the left image superimposed with the virtual model's left view and the right image superimposed with the virtual model's right view, and sends them together to the display, where they are shown in the left half and the right half of the display respectively, presenting the fused scene. The user then views them with the left and right eyes respectively and can experience the good fusion of the real scene with the virtual model.
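The side-by-side presentation described above can be sketched in a few lines; the resolutions and pixel values here are placeholders, not specified by the patent.

```python
import numpy as np

def compose_sbs(fused_left, fused_right):
    """Pack the fused left/right images side by side: the left half of the
    display feeds the left eye, the right half feeds the right eye."""
    assert fused_left.shape == fused_right.shape
    return np.hstack([fused_left, fused_right])

left = np.full((480, 640, 3), 50, dtype=np.uint8)    # stand-in fused left view
right = np.full((480, 640, 3), 80, dtype=np.uint8)   # stand-in fused right view
frame = compose_sbs(left, right)
print(frame.shape)   # → (480, 1280, 3)
```

A head-mounted display's optics then route each half of the composed frame to the corresponding eye.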
In embodiments of the present invention, besides fusing the real scene information with the virtual scene information to generate a fused scene, an augmented reality scene can also be generated from the real scene information collected by the two cameras of the 3D camera, or a virtual reality scene generated from the virtual reality scene information. That is, embodiments of the present invention can also provide AR or VR functions alone; those skilled in the art can realize this in combination with the embodiments of the present invention, and it is not repeated here.
The embodiments above provide a device for merging virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; a capture module for obtaining real target scene information collected by a 3D camera; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated while virtual reality is in use, achieving the effect of merging the virtual with the real, which can promote human-computer interaction and improve the user experience.
The embodiments described above express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all belong to the scope of protection of the present invention. Therefore, the scope of protection of this patent shall be determined by the appended claims.

Claims (5)

1. A device for merging virtual reality with a real scene, characterized by comprising:
an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene;
a capture module for obtaining real target scene information collected by a 3D camera; and
a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
2. The device according to claim 1, characterized in that the acquisition module is specifically configured to:
read, analyze, and recognize an image inside the virtual reality device, and generate different virtual reality scenes using the recognition result.
3. The device according to claim 2, characterized in that the acquisition module comprises:
a reading unit for reading an image inside the virtual reality device;
an analysis unit for performing data analysis on the read image to obtain its feature points;
a comparison unit for comparing the obtained image feature points with images in a database to obtain a recognition result; and
a generation unit for generating different virtual reality scenes using the recognition result.
4. The device according to claim 1, characterized in that the capture module comprises:
a tracking unit for tracking changes in the line of sight of the human eye;
an adjustment unit for adjusting the direction of the 3D camera according to the change in the line of sight of the human eye, so that the direction of the 3D camera is consistent with the gaze direction after the change; and
a collection unit for obtaining the real target scene information collected in real time by the 3D camera in the adjusted direction.
5. The device according to claim 4, characterized in that the fusion module comprises:
an initial-velocity assignment unit for assigning an initial velocity vector to each pixel in the image to form an image motion field;
a dynamic analysis unit for dynamically analyzing the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, there is relative motion between the real target scene and the image background, and the velocity vectors formed by the moving object necessarily differ from those of the neighborhood background, so that the moving object and its position can be detected;
an image-position acquisition unit for obtaining the new positions of the image feature points;
a computation unit for calculating the translation, rotation, and scaling vectors of objects in three-dimensional space from the new positions and the original positions of the obtained image feature points, based on the physical parameters of the 3D camera; and
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene to complete the fusion of the virtual reality scene with the real target scene.
CN201710242301.9A 2017-04-14 2017-04-14 Device for merging virtual reality with a real scene Pending CN107016730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710242301.9A CN107016730A (en) 2017-04-14 2017-04-14 Device for merging virtual reality with a real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710242301.9A CN107016730A (en) 2017-04-14 2017-04-14 Device for merging virtual reality with a real scene

Publications (1)

Publication Number Publication Date
CN107016730A true CN107016730A (en) 2017-08-04

Family

ID=59445446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710242301.9A Pending CN107016730A (en) Device for merging virtual reality with a real scene

Country Status (1)

Country Link
CN (1) CN107016730A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320333A (en) * 2017-12-29 2018-07-24 中国银联股份有限公司 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
US10777012B2 (en) 2018-09-27 2020-09-15 Universal City Studios Llc Display systems in an entertainment environment
CN111783187A (en) * 2019-04-03 2020-10-16 中山市京灯网络科技有限公司 Brightening sharing platform application system
CN113298955A (en) * 2021-05-25 2021-08-24 厦门华厦学院 Real scene and virtual reality scene fusion method and system and flight simulator
CN114078102A (en) * 2020-08-11 2022-02-22 北京芯海视界三维科技有限公司 Image processing apparatus and virtual reality device
CN113298955B (en) * 2021-05-25 2024-04-30 厦门华厦学院 Real scene and virtual reality scene fusion method, system and flight simulator

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130005899A (en) * 2011-07-07 2013-01-16 박태훈 Fourth dimension virtual reality system
CN104156998A (en) * 2014-08-08 2014-11-19 深圳中科呼图信息技术有限公司 Implementation method and system based on fusion of virtual image contents and real scene
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320333A (en) * 2017-12-29 2018-07-24 中国银联股份有限公司 Scene-adaptive virtual reality conversion device and virtual reality scene adaptation method
CN108320333B (en) * 2017-12-29 2022-01-11 中国银联股份有限公司 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
US10777012B2 (en) 2018-09-27 2020-09-15 Universal City Studios Llc Display systems in an entertainment environment
CN111783187A (en) * 2019-04-03 2020-10-16 中山市京灯网络科技有限公司 Brightening sharing platform application system
CN111783187B (en) * 2019-04-03 2023-12-22 京灯(广东)信息科技有限公司 Brightening sharing platform application system
CN114078102A (en) * 2020-08-11 2022-02-22 北京芯海视界三维科技有限公司 Image processing apparatus and virtual reality device
CN113298955A (en) * 2021-05-25 2021-08-24 厦门华厦学院 Real scene and virtual reality scene fusion method and system and flight simulator
CN113298955B (en) * 2021-05-25 2024-04-30 厦门华厦学院 Real scene and virtual reality scene fusion method, system and flight simulator

Similar Documents

Publication Publication Date Title
CN106896925A (en) The device that a kind of virtual reality is merged with real scene
CN106997618A (en) A kind of method that virtual reality is merged with real scene
US11632533B2 (en) System and method for generating combined embedded multi-view interactive digital media representations
JP6644833B2 (en) System and method for rendering augmented reality content with albedo model
US20230324684A1 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
US11755956B2 (en) Method, storage medium and apparatus for converting 2D picture set to 3D model
CN109615703B (en) Augmented reality image display method, device and equipment
CN102959616B (en) Interactive reality augmentation for natural interaction
JP4473754B2 (en) Virtual fitting device
CN105391970B (en) The method and system of at least one image captured by the scene camera of vehicle is provided
JP4966431B2 (en) Image processing device
Shen et al. Virtual mirror rendering with stationary rgb-d cameras and stored 3-d background
CN108369653A (en) Use the eyes gesture recognition of eye feature
CN109643373A (en) Estimate the posture in 3d space
CN110363133B (en) Method, device, equipment and storage medium for sight line detection and video processing
CN106971426A (en) A kind of method that virtual reality is merged with real scene
US20230419438A1 (en) Extraction of standardized images from a single-view or multi-view capture
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN102196280A (en) Method, client device and server
CN107016730A (en) The device that a kind of virtual reality is merged with real scene
CN106981100A (en) The device that a kind of virtual reality is merged with real scene
JP2013120556A (en) Object attribute estimation device and video plotting device
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
KR102118937B1 (en) Apparatus for Service of 3D Data and Driving Method Thereof, and Computer Readable Recording Medium
CN111435550A (en) Image processing method and apparatus, image device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170804