CN106896925A - Device for fusing virtual reality with a real scene - Google Patents

Device for fusing virtual reality with a real scene Download PDF

Info

Publication number
CN106896925A
CN106896925A CN201710242307.6A CN201710242307A CN 106896925 A
Authority
CN
China
Prior art keywords
image
virtual reality
scene
real
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710242307.6A
Other languages
Chinese (zh)
Inventor
陈柳华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710242307.6A priority Critical patent/CN106896925A/en
Publication of CN106896925A publication Critical patent/CN106896925A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention provides a device for fusing virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; an extraction module for capturing a real scene image at a shooting angle approximating the user's viewing angle, providing parameters representing the shooting angle of the real scene image, adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle, and recognizing and extracting real target scene information from the adjusted real scene image; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated during virtual reality use, achieving an effect in which the virtual and the real are fused, which promotes human-computer interaction and improves the user experience.

Description

Device for fusing virtual reality with a real scene
Technical field
The present invention relates to the technical field of virtual reality, and more particularly to a device for fusing virtual reality with a real scene.
Background technology
Virtual reality (Virtual Reality, hereinafter referred to as VR) technology mainly combines computer graphics systems with various display and control interface devices to provide an interactive, immersive experience within a three-dimensional environment generated on a computer.
Augmented reality (Augmented Reality, hereinafter referred to as AR) technology is a new technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that would otherwise be difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, etc.) is simulated by computers and other technology and then superimposed, applying virtual information to the real world to be perceived by the human senses, thereby achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time onto the same image or space and coexist. Augmented reality not only presents real-world information but also displays virtual information at the same time, with the two kinds of information complementing and superimposing each other. In visual augmented reality, the user employs a head-mounted display in which the real world and computer graphics are composited, so that the real world can be seen surrounding the graphics.
Prior-art head-mounted displays include products similar to the Oculus, which let users experience VR effects, and products similar to Google Glass, which let users experience AR effects.
In the course of implementing the embodiments of the present invention, the inventor found that existing VR helmets can display virtual scenes, characters, and so on, but these virtual scenes and characters are designed in advance or rendered by particular algorithms; they are not combined with the scene in which the user is actually wearing the VR helmet and therefore lack interaction with the real environment. Existing AR glasses, on the other hand, can show the user the real environment in front of them and analyze the image to provide prompt information, but they cannot deliver the pleasure brought by a lifelike virtual scene; that is, AR has difficulty combining the virtual with the real.
Summary of the invention
In view of this, it is necessary to provide a device for fusing virtual reality with a real scene, so that a real scene can be incorporated during virtual reality use, achieving an effect in which the virtual and the real are fused, promoting human-computer interaction and improving the user experience.
A device for fusing virtual reality with a real scene, comprising:
an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene;
an extraction module for capturing a real scene image at a shooting angle approximating the user's viewing angle and providing parameters representing the shooting angle of the real scene image; adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle; and recognizing and extracting real target scene information from the adjusted real scene image;
a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the acquisition module is specifically configured to:
read, analyze, and recognize images inside the virtual reality device, and generate different virtual reality scenes using the recognition results.
In one of the embodiments, the acquisition module includes:
a reading unit for reading images inside the virtual reality device;
an analysis unit for performing data analysis on the read images to obtain image feature points;
a comparison unit for comparing the obtained image feature points with images in a database to obtain a recognition result;
a generation unit for generating different virtual reality scenes using the recognition result.
In one of the embodiments, the fusion module includes:
an initial-velocity assigning unit for assigning an initial velocity vector to each pixel in the image so as to form an image motion field;
a dynamic analysis unit for performing dynamic analysis of the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background are in relative motion, and the velocity vectors formed by the moving object necessarily differ from those of the neighbouring background, so that the moving object and its position can be detected;
an image-position acquiring unit for acquiring the new positions of the image feature points;
a computing unit for computing the translation, rotation, and scaling vectors of the object in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera;
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene, thereby completing the fusion of the virtual reality scene with the real target scene.
The above embodiments provide a device for fusing virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; an extraction module for capturing a real scene image at a shooting angle approximating the user's viewing angle, providing parameters representing the shooting angle of the real scene image, adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle, and recognizing and extracting real target scene information from the adjusted real scene image; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated during virtual reality use, achieving an effect in which the virtual and the real are fused, which promotes human-computer interaction and improves the user experience.
Brief description of the drawings
Fig. 1 is a functional module diagram of a device for fusing virtual reality with a real scene in one embodiment;
Fig. 2 is a functional block diagram of the acquisition module in Fig. 1.
Detailed description of the embodiments
In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Unless the context clearly indicates otherwise, the elements and components in the present invention may exist in singular or in plural form, and the present invention is not limited in this respect. Although the steps in the present invention are labelled, the labels are not intended to limit the order of the steps; unless the order of the steps is expressly stated or the execution of a step depends on other steps, the relative order of the steps is adjustable. It should be understood that the term "and/or" as used herein relates to and covers any and all possible combinations of one or more of the associated listed items.
It should be noted that the real scene information includes environment information captured in real time by a 3D camera. For example, the left and right cameras each capture a real scene image sequence in real time along the line of sight of the user's left and right eyes respectively; at a given time t, one image is taken from the image sequence provided by the left camera as the left image and one image from the sequence provided by the right camera as the right image, where the left image simulates the content seen by the user's left eye and the right image simulates the content seen by the user's right eye. The virtual reality scene information includes image information of a virtual reality model, for example the left view and the right view of a virtual reality scene model.
In embodiments of the present invention, an augmented reality scene refers to a scene in which real scene information is presented using augmented reality technology, and a virtual reality scene refers to a scene in which virtual reality scene information is presented using virtual reality technology.
In embodiments of the present invention, the virtual reality device may be a smart wearable device, which may include head-mounted smart devices with AR and VR functions, such as smart glasses or a helmet.
In one embodiment, as shown in Fig. 1, a device for fusing virtual reality with a real scene includes:
an acquisition module 10 for obtaining image information inside a virtual reality device and generating a virtual reality scene;
an extraction module 20 for capturing a real scene image at a shooting angle approximating the user's viewing angle and providing parameters representing the shooting angle of the real scene image; adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle; and recognizing and extracting real target scene information from the adjusted real scene image;
a fusion module 30 for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the acquisition module is specifically configured to:
read, analyze, and recognize images inside the virtual reality device, and generate different virtual reality scenes using the recognition results.
In one of the embodiments, as shown in Fig. 2, the acquisition module 10 includes:
a reading unit 101 for reading images inside the virtual reality device;
an analysis unit 102 for performing data analysis on the read images to obtain image feature points;
a comparison unit 103 for comparing the obtained image feature points with images in a database to obtain a recognition result;
a generation unit 104 for generating different virtual reality scenes using the recognition result.
Specifically, after system start-up and initialization, the reading unit reads the specified images accessible inside the virtual reality device. The image files accessible in the virtual reality device are photos taken by the user or pictures obtained through other channels; these photos and pictures are stored in an image database in the virtual reality device and serve as the source from which various images can subsequently be selected.
The analysis unit may first unify the resolution of the image files by compressing them to a lower resolution, for example 320*240. After the resolution has been adjusted, the image files are format-converted: the color format of each image is converted to a grayscale format, and the points of maximum curvature on the brightness-change curves of the two-dimensional image or on the image edge curves are analyzed as corner points; the analyzed image corner features are taken as the image feature points.
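The corner-based feature extraction described above can be illustrated with a short sketch. This is a minimal, non-authoritative example assuming OpenCV is available and using the Shi-Tomasi detector (`cv2.goodFeaturesToTrack`) as a stand-in for the curvature-maximum corner analysis; the 320*240 resolution follows the example in the text, while `max_corners` and the quality parameters are illustrative assumptions.

```python
import cv2

def extract_feature_points(image_path, size=(320, 240), max_corners=200):
    """Compress the image, convert it to grayscale, and detect corner feature points."""
    img = cv2.imread(image_path)                    # read the stored photo or picture
    img = cv2.resize(img, size)                     # unify the resolution, e.g. 320*240
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert the color format to grayscale
    # Corner points: locations of maximum curvature in the brightness-change structure
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
    return gray, corners
```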
The comparison unit may use local random binary features to compute, respectively, descriptors for the feature points obtained above and for the images in the database, and use the description of each corner point to determine the correspondence between the two images, removing the mismatched outer points between the two pictures and retaining the correctly matched inner points. When the number of correctly matched feature points retained exceeds a set threshold, recognition is judged successful and the next step is entered; if recognition is unsuccessful, a new picture is taken and the process loops until recognition succeeds.
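As a hedged illustration of this matching step, the sketch below assumes OpenCV's ORB descriptors (BRIEF-style local binary features) stand in for the "local random binary features" named above, with Lowe's ratio test used to discard mismatched outer points; `min_inliers` is a hypothetical value for the success threshold.

```python
import cv2

def recognize_against_database(query_gray, db_gray, min_inliers=25):
    """Match binary descriptors between the acquired image and a database image."""
    orb = cv2.ORB_create()                               # BRIEF-style local binary features
    _, des_q = orb.detectAndCompute(query_gray, None)
    _, des_d = orb.detectAndCompute(db_gray, None)
    if des_q is None or des_d is None:
        return False, []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_q, des_d, k=2)
    # Keep inner points (correct matches), discard mismatched outer points via the ratio test
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_inliers, good                # success once the threshold is exceeded
```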
The generation unit uses the recognition result of the comparison unit to obtain the identified target number, retrieves the corresponding virtual content from the database according to that number, and generates the virtual reality scene.
In one of the embodiments, the extraction module may specifically include a tracking unit, an adjustment unit, and a collection unit. The tracking unit tracks changes in the line of sight of the human eyes; the adjustment unit adjusts the orientation of the dual cameras of the 3D camera according to the change in the line of sight, so that the orientation of the dual cameras after the change is consistent with the line of sight of the human eyes; and the collection unit obtains the real scene information collected in real time by the dual cameras according to the adjusted orientation. In order for the dual cameras to simulate human eyes when capturing real scene information, the cameras must collect the real scene information along the direction of the human line of sight. To obtain changes in the line of sight, an eye-tracking module can be installed inside the VR helmet. In order for the two cameras to better simulate what the two eyes see, the processor of the smart wearable device, such as the VR helmet, needs to adjust the viewing angles of the left and right cameras respectively according to the line-of-sight change parameters of the two eyes. The pictures collected in real time by the dual cameras are then presented to the left and right eyes respectively, reproducing the viewing effect of the human eyes. Specifically, prior-art eye-tracking techniques may be used, for example tracking according to changes in the features of the eyeball and its periphery, tracking according to changes in the iris angle, or actively projecting light beams such as infrared rays onto the iris and extracting features for tracking, in order to determine the change in the line of sight of the human eyes. Of course, embodiments of the present invention are not limited to these; under the technical concept of the present invention, a person skilled in the art may use any feasible technique to track the change in the line of sight of the human eyes and then adjust the collection direction of the left and right cameras that simulate the human eyes, collecting the real scene information in real time.
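The gaze-driven camera adjustment could look roughly like the following sketch. All device interfaces here (`eye_tracker.read`, `camera.set_orientation`, `grab_frame`) are hypothetical placeholders, since the text leaves the eye-tracking hardware and camera drivers unspecified.

```python
from dataclasses import dataclass

@dataclass
class Gaze:
    yaw_deg: float     # horizontal line-of-sight angle from the eye tracker
    pitch_deg: float   # vertical line-of-sight angle

def align_camera_to_gaze(camera, gaze):
    # `camera.set_orientation` is a hypothetical driver call; real camera/gimbal SDKs differ.
    camera.set_orientation(yaw_deg=gaze.yaw_deg, pitch_deg=gaze.pitch_deg)

def capture_real_scene(left_camera, right_camera, eye_tracker):
    """Steer each pass-through camera to follow the matching eye, then grab one frame each."""
    left_gaze, right_gaze = eye_tracker.read()            # hypothetical eye-tracker interface
    align_camera_to_gaze(left_camera, left_gaze)
    align_camera_to_gaze(right_camera, right_gaze)
    return left_camera.grab_frame(), right_camera.grab_frame()   # hypothetical capture calls
```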
In one of the embodiments, the fusion module includes:
an initial-velocity assigning unit for assigning an initial velocity vector to each pixel in the image so as to form an image motion field;
a dynamic analysis unit for performing dynamic analysis of the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background are in relative motion, and the velocity vectors formed by the moving object necessarily differ from those of the neighbouring background, so that the moving object and its position can be detected;
an image-position acquiring unit for acquiring the new positions of the image feature points;
a computing unit for computing the translation, rotation, and scaling vectors of the object in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera;
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene, thereby completing the fusion of the virtual reality scene with the real target scene.
Specifically, the initial-velocity assigning unit assigns an initial velocity vector to each pixel in the image so as to form a scene image motion field. At a given moment during operation, the points on the image correspond one-to-one with points on the three-dimensional object, and this correspondence can be obtained from the projection relationship. The dynamic analysis unit reads the vector characteristics of each pixel and performs dynamic analysis of the image. The judging unit judges whether there is a moving object in the image: if there is none, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, the target and the image background are in relative motion, and the velocity vectors formed by the moving object necessarily differ from the neighbouring background vectors, so that the moving object and its position can be detected. The image-position acquiring unit then acquires the new positions of the scene image feature points.
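A minimal sketch of this moving-object detection step is given below, assuming OpenCV's Farneback dense optical flow provides the per-pixel velocity field; `motion_thresh` is an illustrative threshold rather than a value taken from the text.

```python
import cv2
import numpy as np

def detect_moving_object(prev_gray, curr_gray, motion_thresh=2.0):
    """Form a per-pixel velocity field with dense optical flow and locate moving regions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = magnitude > motion_thresh        # velocity differs from the neighbouring background
    if not moving.any():
        return None                           # flow varies continuously: no moving object
    ys, xs = np.nonzero(moving)
    # Bounding box of the detected moving object
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```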
After the still image has been converted into virtual content and the dynamic real scene is ready, the identified virtual content is placed at the feature-point spatial position tracked in the camera space, so that the virtual content is fused with the real scene. The computing unit computes the translation, rotation, and scaling vectors of the object in three-dimensional image space from the new and original positions of the scene image feature points and the physical parameters of the camera; the fusion unit applies the computed translation, rotation, and scaling vectors of the object in three-dimensional space to the virtual content, thereby achieving the complete fusion of the virtual content with the real scene.
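The translation and rotation computation could be sketched as follows, assuming the 3D positions of the tracked feature points are known from the projection relation mentioned above and that `cv2.solvePnP` with the camera's intrinsic matrix stands in for the pose computation based on the camera's physical parameters; the scaling component described in the text is omitted from this sketch.

```python
import cv2
import numpy as np

def estimate_pose_update(object_points_3d, new_image_points, camera_matrix):
    """Recover translation and rotation of the tracked points from their new image positions."""
    dist_coeffs = np.zeros(5)                 # assume an undistorted (or pre-rectified) camera
    ok, rvec, tvec = cv2.solvePnP(np.asarray(object_points_3d, dtype=np.float32),
                                  np.asarray(new_image_points, dtype=np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)         # 3x3 rotation to apply to the virtual content
    return rotation, tvec                     # translation vector in camera coordinates
```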
In this embodiment, a single picture can be used as the input source, and recognizing the picture triggers the virtual content; at the same time, scene-feature tracking is used to place the virtual content in the user's real environment, achieving an augmented reality effect, removing the limitation that virtual content can only be triggered by a characteristic image, and promoting the development of the industry.
In another embodiment of the present invention, the fusion module may specifically include:
a first superposition unit for superimposing the left image captured by the left camera with the left view of the virtual scene to synthesize a fused scene left image;
a second superposition unit for superimposing the right image captured by the right camera with the right view of the virtual scene to synthesize a fused scene right image;
a fusion unit for generating the fused scene from the fused scene left image and right image.
Specifically, to superimpose virtual scene information on real scene information, for example to superimpose virtual model information onto a real scene, the left and right cameras must provide real-time image sequences of the real scene. At a given time t, one image is taken from the image sequence provided by the left camera as the left image and one image from the sequence provided by the right camera as the right image; the left image simulates what the left eye sees and the right image what the right eye sees. The left and right cameras provide real-time image sequences, which can be obtained in various ways: one method is to acquire images using the SDK (Software Development Kit) provided by the camera manufacturer, and another common method is to read images from the camera using an open-source tool such as OpenCV. To obtain the hierarchical structure of the real scene, the disparity between the left and right images is computed and the hierarchy of the scene is represented by the hierarchy of the disparity. The disparity between the left and right images can be computed with any disparity-calculation method, such as BM, graph cut, or ADCensus. Once the disparity is available, the hierarchical information of the scene is known; this hierarchical information is also called the depth-of-field information of the scene and can be used to guide the fusion of the virtual model with the real scene, so that the virtual model is placed into the real scene more reasonably. The specific rule is that the minimum disparity of the virtual model in the left and right images must be larger than the maximum disparity of the region of the left and right images covered by the virtual model, and the disparity information needs to be median-smoothed before it is used. The virtual model is added to the left image and the right image separately; if the minimum disparity of the virtual model in the left and right images is d, then d must be larger than the maximum disparity of the region covered by the virtual model. The left view corresponding to the virtual model is superimposed on the left image and the right view corresponding to the virtual model on the right image, and the fused scene can then be generated.
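As a hedged sketch of this disparity step, the code below uses OpenCV's block-matching (BM) stereo matcher, one of the methods named above, followed by the median smoothing the text calls for, and checks the placement rule that the virtual model's minimum disparity d must exceed the maximum disparity of the covered region; `model_mask` and `model_min_disparity` are hypothetical inputs describing where the model's views land.

```python
import cv2
import numpy as np

def compute_scene_disparity(left_gray, right_gray):
    """Compute the disparity map (depth hierarchy) of the real scene from a stereo pair."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)  # BM; graph cut or ADCensus also work
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return cv2.medianBlur(disparity, 5)       # median smoothing before the disparity is used

def placement_ok(disparity, model_mask, model_min_disparity):
    """Placement rule: the model's minimum disparity d must exceed the maximum
    disparity of the real-scene region that the model covers."""
    covered_max = float(disparity[model_mask].max())
    return model_min_disparity > covered_max
```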
In one embodiment of the invention, a presentation module synthesizes the left image superimposed with the left view of the virtual model and the right image superimposed with the right view of the virtual model and sends them to the display together, showing them respectively in the left half and the right half of the display, so that the fused scene is presented. The user views them with the left and right eyes respectively and can then experience a good fusion of the real scene with the virtual model.
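A simple side-by-side composition for the display could look like the sketch below; the panel resolution is an arbitrary assumption, and a real headset would additionally apply lens-distortion correction, which is not shown.

```python
import cv2
import numpy as np

def compose_display_frame(fused_left, fused_right, panel_size=(2160, 1200)):
    """Pack the fused left/right images into one side-by-side frame for the HMD panel."""
    w, h = panel_size
    left_half = cv2.resize(fused_left, (w // 2, h))         # left half of the panel -> left eye
    right_half = cv2.resize(fused_right, (w - w // 2, h))   # right half of the panel -> right eye
    return np.hstack([left_half, right_half])
```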
In embodiments of the present invention, besides fusing real scene information with virtual scene information to generate a fused scene, an augmented reality scene can also be generated from the real scene information collected by the dual cameras of the 3D camera, or a virtual reality scene can be generated from the virtual reality scene information. That is, the embodiments can also provide AR or VR functions alone, which a person skilled in the art can implement in combination with the embodiments of the present invention and which are not repeated here.
The above embodiments provide a device for fusing virtual reality with a real scene, comprising: an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene; an extraction module for capturing a real scene image at a shooting angle approximating the user's viewing angle, providing parameters representing the shooting angle of the real scene image, adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle, and recognizing and extracting real target scene information from the adjusted real scene image; and a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. A real scene can thus be incorporated during virtual reality use, achieving an effect in which the virtual and the real are fused, which promotes human-computer interaction and improves the user experience.
The embodiments described above express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be interpreted as limiting the scope of the patent claims. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these fall within the scope of protection of the invention. Therefore, the scope of protection of this patent shall be determined by the appended claims.

Claims (4)

1. A device for fusing virtual reality with a real scene, characterized by comprising:
an acquisition module for obtaining image information inside a virtual reality device and generating a virtual reality scene;
an extraction module for capturing a real scene image at a shooting angle approximating the user's viewing angle and providing parameters representing the shooting angle of the real scene image; adjusting the real scene image with the user's viewing angle as the reference so that the imaging angle of the adjusted real scene image is consistent with the user's viewing angle; and recognizing and extracting real target scene information from the adjusted real scene image;
a fusion module for generating a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
2. The device according to claim 1, characterized in that the acquisition module is specifically configured to:
read, analyze, and recognize images inside the virtual reality device, and generate different virtual reality scenes using the recognition results.
3. The device according to claim 2, characterized in that the acquisition module includes:
a reading unit for reading images inside the virtual reality device;
an analysis unit for performing data analysis on the read images to obtain image feature points;
a comparison unit for comparing the obtained image feature points with images in a database to obtain a recognition result;
a generation unit for generating different virtual reality scenes using the recognition result.
4. The device according to claim 3, characterized in that the fusion module includes:
an initial-velocity assigning unit for assigning an initial velocity vector to each pixel in the image so as to form an image motion field;
a dynamic analysis unit for performing dynamic analysis of the image according to the velocity-vector characteristics of each pixel;
a judging unit for judging whether there is a moving object in the image: if there is no moving object, the optical-flow vectors vary continuously over the whole image region; if there is a moving object, the real target scene and the image background are in relative motion, and the velocity vectors formed by the moving object necessarily differ from those of the neighbouring background, so that the moving object and its position can be detected;
an image-position acquiring unit for acquiring the new positions of the image feature points;
a computing unit for computing the translation, rotation, and scaling vectors of the object in three-dimensional space from the new and original positions of the image feature points and the physical parameters of the 3D camera;
a fusion unit for applying the obtained translation, rotation, and scaling vectors to the virtual reality scene, thereby completing the fusion of the virtual reality scene with the real target scene.
CN201710242307.6A 2017-04-14 2017-04-14 Device for fusing virtual reality with a real scene Pending CN106896925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710242307.6A CN106896925A (en) 2017-04-14 2017-04-14 The device that a kind of virtual reality is merged with real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710242307.6A CN106896925A (en) 2017-04-14 2017-04-14 The device that a kind of virtual reality is merged with real scene

Publications (1)

Publication Number Publication Date
CN106896925A true CN106896925A (en) 2017-06-27

Family

ID=59196570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710242307.6A Pending CN106896925A (en) 2017-04-14 2017-04-14 Device for fusing virtual reality with a real scene

Country Status (1)

Country Link
CN (1) CN106896925A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635131A (en) * 2017-09-01 2018-01-26 北京雷石天地电子技术有限公司 A kind of realization method and system of virtual reality
CN107976811A (en) * 2017-12-25 2018-05-01 河南新汉普影视技术有限公司 A kind of simulation laboratory and its emulation mode based on virtual reality mixing
CN108144292A (en) * 2018-01-30 2018-06-12 河南三阳光电有限公司 Bore hole 3D interactive game making apparatus
CN108596964A (en) * 2018-05-02 2018-09-28 厦门美图之家科技有限公司 Depth data acquisition methods, device and readable storage medium storing program for executing
CN108597030A (en) * 2018-04-23 2018-09-28 新华网股份有限公司 Effect of shadow display methods, device and the electronic equipment of augmented reality AR
CN108805989A (en) * 2018-06-28 2018-11-13 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that scene is passed through
CN108989681A (en) * 2018-08-03 2018-12-11 北京微播视界科技有限公司 Panorama image generation method and device
CN109714589A (en) * 2019-02-22 2019-05-03 上海北冕信息科技有限公司 Input/output unit, equipment for augmented reality
CN109947242A (en) * 2019-02-26 2019-06-28 贵州翰凯斯智能技术有限公司 A kind of factory's virtual application system and application method based on information fusion
CN110197532A (en) * 2019-06-05 2019-09-03 北京悉见科技有限公司 System, method, apparatus and the computer storage medium of augmented reality meeting-place arrangement
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking
CN111158463A (en) * 2019-11-29 2020-05-15 淮北幻境智能科技有限公司 SLAM-based computer vision large space positioning method and system
CN111316334A (en) * 2017-11-03 2020-06-19 三星电子株式会社 Apparatus and method for dynamically changing virtual reality environment
CN111583420A (en) * 2020-05-27 2020-08-25 上海乂学教育科技有限公司 Intelligent learning system and method based on augmented reality mode
CN111688488A (en) * 2019-03-12 2020-09-22 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle machine equipment and virtual scene control method thereof
CN112789020A (en) * 2019-02-13 2021-05-11 苏州金瑞麒智能科技有限公司 Visualization method and system for intelligent wheelchair
CN113298955A (en) * 2021-05-25 2021-08-24 厦门华厦学院 Real scene and virtual reality scene fusion method and system and flight simulator
CN115379134A (en) * 2022-10-26 2022-11-22 四川中绳矩阵技术发展有限公司 Image acquisition device, method, equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130005899A (en) * 2011-07-07 2013-01-16 박태훈 Fourth dimension virtual reality system
CN104156998A (en) * 2014-08-08 2014-11-19 深圳中科呼图信息技术有限公司 Implementation method and system based on fusion of virtual image contents and real scene
CN106484085A (en) * 2015-08-31 2017-03-08 北京三星通信技术研究有限公司 Method and its head mounted display of real-world object is shown in head mounted display
CN105955456A (en) * 2016-04-15 2016-09-21 深圳超多维光电子有限公司 Virtual reality and augmented reality fusion method, device and intelligent wearable equipment
CN106354251A (en) * 2016-08-17 2017-01-25 深圳前海小橙网科技有限公司 Model system and method for fusion of virtual scene and real scene

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635131A (en) * 2017-09-01 2018-01-26 北京雷石天地电子技术有限公司 A kind of realization method and system of virtual reality
CN111316334A (en) * 2017-11-03 2020-06-19 三星电子株式会社 Apparatus and method for dynamically changing virtual reality environment
CN111316334B (en) * 2017-11-03 2024-03-08 三星电子株式会社 Apparatus and method for dynamically changing virtual reality environment
CN107976811A (en) * 2017-12-25 2018-05-01 河南新汉普影视技术有限公司 A kind of simulation laboratory and its emulation mode based on virtual reality mixing
CN107976811B (en) * 2017-12-25 2023-12-29 河南诺控信息技术有限公司 Virtual reality mixing-based method simulation laboratory simulation method of simulation method
CN108144292A (en) * 2018-01-30 2018-06-12 河南三阳光电有限公司 Bore hole 3D interactive game making apparatus
CN108597030A (en) * 2018-04-23 2018-09-28 新华网股份有限公司 Effect of shadow display methods, device and the electronic equipment of augmented reality AR
CN108596964A (en) * 2018-05-02 2018-09-28 厦门美图之家科技有限公司 Depth data acquisition methods, device and readable storage medium storing program for executing
CN108805989A (en) * 2018-06-28 2018-11-13 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that scene is passed through
CN108989681A (en) * 2018-08-03 2018-12-11 北京微播视界科技有限公司 Panorama image generation method and device
CN112789020A (en) * 2019-02-13 2021-05-11 苏州金瑞麒智能科技有限公司 Visualization method and system for intelligent wheelchair
CN109714589A (en) * 2019-02-22 2019-05-03 上海北冕信息科技有限公司 Input/output unit, equipment for augmented reality
CN109947242A (en) * 2019-02-26 2019-06-28 贵州翰凯斯智能技术有限公司 A kind of factory's virtual application system and application method based on information fusion
CN111688488A (en) * 2019-03-12 2020-09-22 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle machine equipment and virtual scene control method thereof
CN110197532A (en) * 2019-06-05 2019-09-03 北京悉见科技有限公司 System, method, apparatus and the computer storage medium of augmented reality meeting-place arrangement
CN110703904B (en) * 2019-08-26 2023-05-19 合肥疆程技术有限公司 Visual line tracking-based augmented virtual reality projection method and system
CN110703904A (en) * 2019-08-26 2020-01-17 深圳疆程技术有限公司 Augmented virtual reality projection method and system based on sight tracking
CN111158463A (en) * 2019-11-29 2020-05-15 淮北幻境智能科技有限公司 SLAM-based computer vision large space positioning method and system
CN111583420A (en) * 2020-05-27 2020-08-25 上海乂学教育科技有限公司 Intelligent learning system and method based on augmented reality mode
CN111583420B (en) * 2020-05-27 2021-11-12 上海松鼠课堂人工智能科技有限公司 Intelligent learning system and method based on augmented reality mode
CN113298955A (en) * 2021-05-25 2021-08-24 厦门华厦学院 Real scene and virtual reality scene fusion method and system and flight simulator
CN113298955B (en) * 2021-05-25 2024-04-30 厦门华厦学院 Real scene and virtual reality scene fusion method, system and flight simulator
CN115379134B (en) * 2022-10-26 2023-02-03 四川中绳矩阵技术发展有限公司 Image acquisition device, method, equipment and medium
CN115379134A (en) * 2022-10-26 2022-11-22 四川中绳矩阵技术发展有限公司 Image acquisition device, method, equipment and medium

Similar Documents

Publication Publication Date Title
CN106896925A (en) Device for fusing virtual reality with a real scene
CN106997618A (en) Method for fusing virtual reality with a real scene
US10855936B2 (en) Skeleton-based effects and background replacement
US11632533B2 (en) System and method for generating combined embedded multi-view interactive digital media representations
US11776222B2 (en) Method for detecting objects and localizing a mobile computing device within an augmented reality experience
JP6644833B2 (en) System and method for rendering augmented reality content with albedo model
US9855496B2 (en) Stereo video for gaming
CN102959616B (en) Interactive reality augmentation for natural interaction
CN105391970B (en) The method and system of at least one image captured by the scene camera of vehicle is provided
US20210279971A1 (en) Method, storage medium and apparatus for converting 2d picture set to 3d model
EP1072018B1 (en) Wavelet-based facial motion capture for avatar animation
US8781162B2 (en) Method and system for head tracking and pose estimation
Shen et al. Virtual mirror rendering with stationary rgb-d cameras and stored 3-d background
CN107016730A (en) Device for fusing virtual reality with a real scene
CN109643373A (en) Estimate the posture in 3d space
CN106875431B (en) Image tracking method with movement prediction and augmented reality implementation method
US20230419438A1 (en) Extraction of standardized images from a single-view or multi-view capture
CN106971426A (en) Method for fusing virtual reality with a real scene
CN105407346A (en) Method For Image Segmentation
CN106981100A (en) Device for fusing virtual reality with a real scene
CN113689503B (en) Target object posture detection method, device, equipment and storage medium
WO2014006786A1 (en) Characteristic value extraction device and characteristic value extraction method
CN115496863A (en) Short video generation method and system for scene interaction of movie and television intelligent creation
Amar et al. Synthesizing reality for realistic physical behavior of virtual objects in augmented reality applications for smart-phones
CN107016729A (en) Method for fusing virtual reality with a real scene

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170627