CN104392045B - A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal - Google Patents
Abstract
The present invention relates to a real-time augmented virtual reality system and method based on an intelligent mobile terminal, comprising: a processor, a depth information acquisition card, an RGB information acquisition card, a storage unit and a display screen. The three-dimensional depth information of the objective environment is obtained with the depth information acquisition card; the basic information of the objective environment is obtained with the RGB information acquisition card; the three-dimensional depth information and the basic environment information are input to the processor for processing, creating a three-dimensional model of the objective world; the processed result data, i.e. the three-dimensional model data of the objective world, are stored in the storage unit; and the augmented virtual reality is displayed through the interface circuit to realize human-computer interaction. The system of the invention is reasonable in design, simple in structure and highly real-time; it requires little equipment, loses little scene information, is easy to operate, has a wide application range and a comparatively low cost, lets users truly experience the appeal of augmented virtual reality, and the error between the constructed three-dimensional model and the actual objective environment is small.
Description
Technical Field
The invention relates to virtual reality system generation technology, and in particular to a real-time augmented virtual reality system and method based on an intelligent mobile terminal.
Background
Augmented Reality (AR) technology applies virtual information to the real world through computer technology: real environments and virtual objects are superimposed in real time on the same picture or in the same space and coexist there. Augmented reality provides a virtual environment generating realistic visual, auditory, force, tactile and motion sensations, and can present information beyond what humans normally perceive directly. It not only shows information about the real world but simultaneously displays virtual information, the two kinds of information complementing and overlaying each other. Augmented reality generates virtual objects that do not exist in the real environment by means of computer graphics and visualization technology, embeds them into the real environment through sensing technology, and merges the virtual objects and the real environment into one whole via display equipment, thereby realizing direct, natural interaction between the user and the environment. It is a brand-new human-computer interaction technology that can simulate realistic on-site scenery, a high-level computer human-machine interface whose basic characteristics are interactivity and imagination. Through such a system the user can not only feel the immersive reality of being personally on the scene in the objective physical world, but can also break through space, time and other objective limitations to undergo experiences impossible in the real world. AR thus has three basic elements: fusion of the real and the virtual, real-time interaction, and three-dimensional registration.
To achieve AR's combination of the virtual and the real, the user needs to view the scene through some device. Current popular technology falls into two categories: transparent devices and opaque devices. The former project a virtual image onto the device itself, e.g. 3D glasses or 3D projection equipment, so that the user perceives the augmented scene; but they require several special devices to be used together, restrict the place and environment of use, and increasingly fail to meet people's demand for augmented virtual reality anytime and anywhere. The latter display an image combining virtual and real content processed by a computer processor, e.g. intelligent mobile terminals such as smartphones and tablet computers. Since intelligent mobile terminals combine considerable computing capability, video recording, image display, GPS, network connectivity, touch control, tilt detection and other functions, while their price keeps falling, research on AR with the intelligent mobile terminal as the platform is increasing.
The current mainstream approach of augmented virtual reality technology based on opaque devices is to collect objective environment information through an image/video capture card, typically capturing the same scene with monocular, binocular or multi-camera setups; features are then extracted from the images collected by the different cameras using image processing techniques, scene matching is performed with algorithms such as feature matching to obtain the three-dimensional information of the objective environment, and three-dimensional reconstruction yields a three-dimensional model of the environment, finally realizing augmented virtual reality. The disadvantages of such methods are: the computation load is heavy, a large amount of point-cloud data must be processed, the demands that obtaining three-dimensional information places on the processor are very high, real-time three-dimensional modeling is hardly possible, and much useful information about the objective environment is lost in the scene feature extraction and three-dimensional reconstruction processes.
In order to establish a three-dimensional model in real time, the invention adopts an RGB + D model: a Depth Information acquisition card directly acquires the three-dimensional depth information (Depth Information) of the objective environment, and a CCD video image acquisition card acquires the basic information of the objective environment (such as color (RGB: Red, Green, Blue), texture, gray level and intensity information), so that the computation of three-dimensional depth information is offloaded from the processor and the real-time performance of the augmented virtual reality is greatly improved.
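To make the RGB + D representation concrete, the following minimal sketch (illustrative only, not part of the patent; the `make_rgbd` helper and the array shapes are assumptions) pairs a color frame with a registered depth map in a single four-channel array, so the depth channel arrives ready-made from the acquisition card rather than being computed by the processor:

```python
import numpy as np

def make_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an HxWx3 RGB frame and an HxW depth map into an HxWx4 RGB+D array.

    The fourth channel carries the three-dimensional depth information that,
    in the patent's scheme, is read directly from the depth acquisition card.
    """
    if rgb.shape[:2] != depth.shape:
        raise ValueError("RGB frame and depth map must be registered (same H, W)")
    return np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])

# Tiny synthetic frame: a 4x4 black image with a constant depth of 1.5 m.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.full((4, 4), 1.5, dtype=np.float32)
rgbd = make_rgbd(rgb, depth)
print(rgbd.shape)  # (4, 4, 4)
```

All downstream steps (filtering, tracking, reconstruction) can then operate on this one array instead of recovering depth by stereo matching.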
In an augmented reality environment, the user can see the real surroundings and, at the same time, the augmentation information generated by the computer. Augmented reality bridges the gulf between virtual reality and the real world; it therefore has great application potential and can be widely applied in numerous fields such as intelligent robot navigation and obstacle avoidance, the military, simulated driving, three-dimensional navigation, medicine, manufacturing and maintenance, games and entertainment.
Disclosure of Invention
Aiming at the defects of the prior art, namely that acquiring and computing three-dimensional information places very high capability demands on the processor, that real-time three-dimensional modeling is hardly possible, and that much useful objective-environment information is lost during scene feature extraction and three-dimensional reconstruction, the invention provides a real-time augmented virtual reality system and method based on an intelligent mobile terminal, which offload the computation of three-dimensional depth information from the processor and greatly improve the real-time performance of augmented virtual reality.
In order to solve the technical problems, the invention adopts the technical scheme that:
the invention relates to a real-time augmented virtual reality system based on an intelligent mobile terminal, which comprises:
the system comprises a processor, a depth information acquisition card, an RGB information acquisition card, a storage unit and a display screen, wherein the processor receives three-dimensional depth information of an objective environment and basic information of the objective environment acquired by the depth information acquisition card and the RGB information acquisition card, processes and constructs a three-dimensional model, and stores processed result data into the storage unit; the processor is connected with the display screen through the interface circuit.
The depth information acquisition card is a camera with direct depth computation capability, which can directly read the three-dimensional depth information of the objective environment.
The invention relates to a real-time augmented virtual reality method based on an intelligent mobile terminal, which comprises the following steps:
acquiring three-dimensional depth information of an objective environment by using a depth information acquisition card;
obtaining basic information of an objective environment by using an RGB information acquisition card;
inputting the three-dimensional depth information and the basic information of the objective environment into a processor for processing, and creating a three-dimensional model of an objective world;
storing the processed result data, namely the three-dimensional model data of the objective world, in a storage unit;
and displaying the augmented virtual reality by using the interface circuit to realize human-computer interaction.
Creating a three-dimensional model of an objective world includes the steps of:
information filtering: filtering the three-dimensional depth information and the basic information of the objective environment, i.e. the RGB + D image, simultaneously to remove image noise;
information tracking algorithm: estimating the next image based on all the denoised RGB + D information;
three-dimensional reconstruction based on RGB + D information: interpolating and stitching the minimum reconstruction units using the estimated parameters of the three-dimensional world information and a typical triangular meshing method, obtaining the final three-dimensional model of the objective world.
The simultaneous filtering of the RGB + D image is achieved by the following formula:

I′(x, y) = Σ_{(m,n)∈ε} w(m, n) · I(m, n)    (1)

wherein I′(x, y) is the RGB + D image after filtering and denoising; ε is the neighborhood of the pixel (x, y); w(m, n) is the weight coefficient of the filter; I(m, n) is the RGB + D image containing noise; and m, n are the coordinate values of each point in the neighborhood ε.
A classical Gaussian filter is used, i.e.

w(m, n) = (1 / (2πσ²)) · exp(−((m − x)² + (n − y)²) / (2σ²))    (2)

wherein (x, y) is the coordinate of each pixel of the image, (m, n) ranges over the neighborhood ε adopted by the filter at the pixel (x, y), and σ is the standard deviation of the Gaussian function.
The next image is estimated from all the denoised three-dimensional world information as follows, using maximum likelihood estimation, i.e.

ξ̂_{j,i} = argmin_{ξ} Σ_{p∈Ω_D} ‖ r_p²(p, ξ_{j,i}) / σ_{r_p}² ‖_δ    (3)

in the formula,

r_p(p, ξ_{j,i}) = I_i(p) − I_j(ω(p, D_i(p), ξ_{j,i}))    (4)

σ_{r_p}² = (∂r_p(p, ξ_{j,i}) / ∂D_i(p))² · V_i²(p)    (5)

wherein I_i is the current RGB + D image; I_j is the estimated new RGB + D image; ξ_{j,i} is the Lie group algebra operator relating the current image and the estimated image; p is a pixel, p ∈ Ω_D, the RGB + D pixel set; ω is the pose estimation model; D_i is the depth information acquired by the depth information acquisition card; V_i is the standard deviation of D_i; and ‖·‖_δ is a norm, defined as

‖s‖_δ = s² / (2δ) for |s| ≤ δ, and |s| − δ/2 otherwise    (6)

wherein δ is a parameter and s is a variable.
Three-dimensional reconstruction based on the RGB + D information is performed by the following formula:

E(u) = ∫_{Ω_D} ( α · |∇u(p)| + V_π · |u(p) − π(p)| ) dp    (7)

wherein u(p) is the minimum unit of the three-dimensional reconstruction; E(u) is the energy function, and u satisfies that E(u) is minimal; π(p) is the ground-plane estimate for the three-dimensional reconstruction on the intelligent mobile terminal, obtained with a simple low-pass filter, i.e. estimated from the RGB + D information; α and V_π are the estimated parameters of the energy function E(u); ∇u is the gradient of u; and Ω_D is the RGB + D information set.
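The patent does not give a discretization of the energy functional; the sketch below shows one plausible discrete form of formula (7), under the assumption of forward differences on a pixel grid with a small smoothing constant, and checks that a surface agreeing with the ground-plane estimate π(p) scores lower energy than a noisy one:

```python
import numpy as np

def energy(u: np.ndarray, pi: np.ndarray, alpha: float, v_pi: float,
           eps: float = 1e-6) -> float:
    """Discrete analogue of formula (7): sum of alpha*|grad u| + v_pi*|u - pi|.

    Forward differences with the last row/column replicated; eps smooths the
    gradient magnitude so the total-variation term is differentiable.
    """
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    grad_mag = np.sqrt(gx**2 + gy**2 + eps**2)
    return float(np.sum(alpha * grad_mag + v_pi * np.abs(u - pi)))

rng = np.random.default_rng(0)
pi = np.fromfunction(lambda x, y: 0.1 * x + 0.2 * y, (8, 8))  # tilted ground plane
noisy = pi + 0.5 * rng.standard_normal(pi.shape)

# A reconstruction matching the ground-plane estimate has lower energy,
# so minimizing E(u) pulls the surface toward a smooth, plane-consistent shape.
assert energy(pi, pi, alpha=1.0, v_pi=1.0) < energy(noisy, pi, alpha=1.0, v_pi=1.0)
```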
The method of the invention also comprises the following steps:
if the data processing capability of the processor of the mobile terminal is insufficient, the entire processing procedure is uploaded to a network server or a personal computer through the mobile network or Wi-Fi for processing, and the processed data are then downloaded back to the intelligent mobile terminal through the mobile network or Wi-Fi.
The method of the invention also comprises the following steps:
if a plurality of users use intelligent mobile terminals, the users share the AR data by means of the networking function, so that multiple users can enjoy augmented virtual reality applications anywhere and at any time.
The invention has the following beneficial effects and advantages:
1. The system is reasonable in design, simple in structure and highly real-time; it requires little equipment, is easy to operate, has a wide application range and a comparatively low cost, and lets the user truly experience the appeal that augmented virtual reality brings to people.
2. The invention constructs the three-dimensional model with an RGB + D model, directly acquires the three-dimensional depth information (Depth Information) of the objective environment with a Depth Information acquisition card, acquires the basic information of the objective environment with a CCD video image acquisition card, and offloads the computation of three-dimensional depth information from the processor; the real-time performance is therefore high, little scene information is lost, and the error between the constructed three-dimensional model and the actual objective environment is small.
3. The invention can be widely applied in fields such as intelligent robot navigation and obstacle avoidance, the military, simulated driving, three-dimensional navigation, medicine, manufacturing and maintenance, games and entertainment.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a functional block diagram of a real-time augmented virtual reality system based on an intelligent mobile terminal;
FIG. 3 is a main flow chart of the method of the present invention;
FIG. 4 is a flow chart of a three-dimensional reconstruction algorithm in the method of the present invention.
Wherein 101 is a mobile terminal; 102 is a processor; 103 is a storage unit; 104 is an interface circuit; 105 is an RGB video image acquisition card; 106 is a depth information acquisition card; 107 is a light source; 108 is a display screen.
Detailed Description
The invention is further elucidated with reference to the accompanying drawings.
As shown in fig. 1 and 2, the real-time augmented virtual reality system based on the intelligent mobile terminal of the present invention includes: the system comprises a processor, a depth information acquisition card, an RGB information acquisition card, a storage unit and a display screen, wherein the processor receives three-dimensional depth information of an objective environment and basic information of the objective environment, which are acquired by the depth information acquisition card and the RGB information acquisition card, processes and constructs a three-dimensional model, and stores processed result data into the storage unit; the processor is connected with the display screen through the interface circuit.
In this embodiment, the mobile terminal 101 may be a smartphone, a tablet computer, a palmtop computer, etc.; the processor 102, the storage unit 103 and the interface circuit 104 use the internal components of the intelligent terminal. The RGB video image acquisition card 105 can be an ordinary CCD camera; the depth information acquisition card 106 can be a Kinect camera with direct depth computation capability, which directly reads the three-dimensional depth information of the objective environment. The light source 107 compensates for the influence of insufficient natural light on the objective environment information.
The invention uses the processor 102 to compute and process the data and construct the three-dimensional environment model in real time, realizing the augmented virtual reality application; the processed result data are stored in the storage unit 103, and the display screen 108 is driven through the interface circuit 104 to display the augmented virtual reality and realize human-computer interaction.
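The data flow just described, capture, process, store and display, can be sketched as one iteration of a processing loop; the stub functions below stand in for the acquisition cards (105, 106), the storage unit (103) and the display screen (108), and are assumptions for illustration only:

```python
from typing import Any, Callable, Dict

def ar_pipeline_step(grab_depth: Callable[[], Any],
                     grab_rgb: Callable[[], Any],
                     build_model: Callable[[Any, Any], Dict],
                     store: Callable[[Dict], None],
                     display: Callable[[Dict], None]) -> Dict:
    """One iteration of the capture -> model -> store -> display loop."""
    depth = grab_depth()             # depth information acquisition card (106)
    rgb = grab_rgb()                 # RGB video image acquisition card (105)
    model = build_model(rgb, depth)  # processor (102) builds the 3D model
    store(model)                     # storage unit (103)
    display(model)                   # display screen (108) via interface (104)
    return model

# Stub example: the "model" is just a dict pairing the two inputs.
storage = []
model = ar_pipeline_step(
    grab_depth=lambda: "D-frame",
    grab_rgb=lambda: "RGB-frame",
    build_model=lambda rgb, d: {"rgb": rgb, "depth": d},
    store=storage.append,
    display=lambda m: None,
)
print(model)  # {'rgb': 'RGB-frame', 'depth': 'D-frame'}
```

In a real implementation the stubs would wrap the camera drivers and the reconstruction algorithm of steps 1 and 2 below.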
As shown in fig. 3, the real-time augmented virtual reality method based on the intelligent mobile terminal of the present invention includes the following steps:
acquiring three-dimensional depth information of an objective environment by using a depth information acquisition card;
obtaining basic information of an objective environment by using an RGB information acquisition card;
inputting the three-dimensional depth information and the basic information of the objective environment into a processor for processing, and creating a three-dimensional model of an objective world;
storing the processed result data, namely the three-dimensional model data of the objective world, in a storage unit;
and displaying the augmented virtual reality by using the interface circuit to realize human-computer interaction.
In order to establish a three-dimensional model in real time, the invention adopts an RGB + D model: a Depth Information acquisition card directly acquires the three-dimensional depth information (Depth Information) of the objective environment, and a CCD video image acquisition card acquires the basic information of the objective environment (such as color (RGB: Red, Green, Blue), texture, gray level and intensity information), so that the computation of three-dimensional depth information is offloaded from the processor and the real-time performance of the augmented virtual reality is improved.
The method mainly comprises the following steps:
step 1: acquiring three-dimensional world information by using a depth information acquisition card and an RGB information acquisition card in FIG. 2, and then expressing the three-dimensional world information by using RGB + D;
step 2: inputting the RGB + D information into the intelligent mobile terminal processor for processing, and processing the RGB + D information of the objective world acquired by the depth information acquisition card and the RGB information acquisition card to finally obtain a three-dimensional model of the objective world, as shown in fig. 4:
(1) Filtering the RGB + D information: the intensity image and the depth image are filtered and denoised simultaneously:

I′(x, y) = Σ_{(m,n)∈ε} w(m, n) · I(m, n)    (1)

wherein I′(x, y) is the RGB + D image after filtering and denoising; ε is the neighborhood of the image pixel (x, y); w(m, n) is the weight coefficient of the filter; I(m, n) is the RGB + D image containing noise; and m, n are the coordinate values of each point in the neighborhood ε. Here a classical Gaussian filter is used, i.e.

w(m, n) = (1 / (2πσ²)) · exp(−((m − x)² + (n − y)²) / (2σ²))    (2)

wherein (x, y) is the coordinate of each pixel of the image, (m, n) ranges over the neighborhood ε adopted by the filter at the pixel (x, y), and σ is the standard deviation of the Gaussian function.
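Formulas (1) and (2) can be sketched as follows (a naive, illustrative implementation; the kernel radius, border handling and weight normalization are assumptions not fixed by the patent):

```python
import numpy as np

def gaussian_kernel(radius: int, sigma: float) -> np.ndarray:
    """Weights w(m, n) of formula (2), normalized so they sum to 1."""
    ax = np.arange(-radius, radius + 1)
    mm, nn = np.meshgrid(ax, ax, indexing="ij")
    w = np.exp(-(mm**2 + nn**2) / (2.0 * sigma**2))
    return w / w.sum()

def filter_channel(img: np.ndarray, radius: int = 1, sigma: float = 1.0) -> np.ndarray:
    """Formula (1): I'(x, y) = sum over the neighborhood of w(m, n) * I(m, n).

    Applied independently to each channel of the RGB+D image; borders are
    handled by edge replication. A plain O(H*W*k^2) loop, kept for clarity.
    """
    w = gaussian_kernel(radius, sigma)
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    h, wid = img.shape
    for x in range(h):
        for y in range(wid):
            out[x, y] = np.sum(w * padded[x:x + 2*radius + 1, y:y + 2*radius + 1])
    return out

flat = np.full((5, 5), 7.0)
assert np.allclose(filter_channel(flat), flat)  # a constant image is unchanged
```

Because the weights sum to 1, smooth regions are preserved while pixel-level noise in both the intensity and depth channels is averaged away.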
(2) Tracking algorithm based on RGB + D: the next image is estimated from the RGB + D information of the current image acquired by the depth information acquisition card and the RGB information acquisition card, using maximum likelihood estimation, i.e.

ξ̂_{j,i} = argmin_{ξ} Σ_{p∈Ω_D} ‖ r_p²(p, ξ_{j,i}) / σ_{r_p}² ‖_δ    (3)

in the formula,

r_p(p, ξ_{j,i}) = I_i(p) − I_j(ω(p, D_i(p), ξ_{j,i}))    (4)

σ_{r_p}² = (∂r_p(p, ξ_{j,i}) / ∂D_i(p))² · V_i²(p)    (5)

wherein I_i is the current RGB + D image; I_j is the estimated new RGB + D image; ξ_{j,i} is the Lie group algebra operator relating the current image and the estimated image; p is a pixel, p ∈ Ω_D, the RGB + D pixel set; ω is the pose estimation model; D_i is the depth information acquired by the depth information acquisition card; V_i is the standard deviation of D_i; and ‖·‖_δ is a norm, defined as

‖s‖_δ = s² / (2δ) for |s| ≤ δ, and |s| − δ/2 otherwise    (6)

wherein δ is a parameter and s is a variable.
With this algorithm it can be estimated whether an objective-world video frame obtained in real time by the depth information acquisition card and the RGB information acquisition card is a key frame. If it is a key frame, the RGB + D image of that frame is merged into the three-dimensional reconstruction model; if not, tracking continues until a key frame is captured. Finally, the three-dimensional model is reconstructed from the objective-world key frames captured by the two acquisition cards. Because a key frame contains all of the objective-world information, i.e. the RGB + D information, the completeness of the information in the reconstructed three-dimensional model is guaranteed. The algorithm can therefore estimate the three-dimensional information of the objective world while using all of that information: no feature extraction or feature matching is performed, all information of the objective world is retained, and the computational efficiency and real-time performance are high. Currently popular algorithms, by contrast, are based on image features and must perform feature extraction, feature matching and similar steps; they suffer from a heavy computation load, low efficiency, poor real-time performance and the loss of much objective-world information during feature extraction.
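The Huber norm of formula (6), which keeps the minimization in formula (3) robust to outlier pixels, can be sketched as follows (illustrative; δ is a free parameter):

```python
def huber(s: float, delta: float) -> float:
    """||s||_delta of formula (6): quadratic near zero, linear in the tails."""
    a = abs(s)
    if a <= delta:
        return s * s / (2.0 * delta)
    return a - delta / 2.0

# Small residuals are penalized quadratically, large ones only linearly,
# so a few badly matched pixels cannot dominate the pose estimate.
assert huber(0.5, 1.0) == 0.125   # 0.25 / 2
assert huber(3.0, 1.0) == 2.5     # 3 - 0.5
```

The two branches join continuously at |s| = δ (both give δ/2), which is what makes this norm well suited to iterative minimization.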
(3) Three-dimensional reconstruction algorithm based on the RGB + D information:

E(u) = ∫_{Ω_D} ( α · |∇u(p)| + V_π · |u(p) − π(p)| ) dp    (7)

wherein u(p) is the minimum unit of the three-dimensional reconstruction; E(u) is the energy function, and u satisfies that E(u) is minimal; π(p) is the ground-plane estimate for the three-dimensional reconstruction on the intelligent mobile terminal, obtained with a simple low-pass filter, i.e. estimated from the RGB + D information; α and V_π are the estimated parameters of the energy function E(u); ∇u is the gradient of u; and Ω_D is the RGB + D information set.

Finally, the minimum units u(p) are interpolated and stitched with a typical triangular meshing method to obtain the final three-dimensional model of the objective world.
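The triangular stitching step can be illustrated by splitting each cell of a regular grid of depth samples into two triangles (a minimal sketch; the patent does not specify the exact triangulation or vertex ordering):

```python
from typing import List, Tuple

def grid_triangles(h: int, w: int) -> List[Tuple[int, int, int]]:
    """Triangulate an h x w grid of depth samples.

    Vertices are indexed row-major (index = x * w + y); each grid cell
    contributes two triangles, giving 2*(h-1)*(w-1) triangles in total.
    """
    tris = []
    for x in range(h - 1):
        for y in range(w - 1):
            a = x * w + y           # top-left corner of the cell
            b = a + 1               # top-right
            c = a + w               # bottom-left
            d = c + 1               # bottom-right
            tris.append((a, c, b))  # upper-left triangle
            tris.append((b, c, d))  # lower-right triangle
    return tris

tris = grid_triangles(3, 4)
print(len(tris))  # 12 == 2 * (3-1) * (4-1)
```

Pairing each vertex index with its (x, y, depth) position then yields a renderable mesh of the reconstructed surface.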
And step 3: the data obtained after the processing based on steps 1 and 2 can be stored in a storage unit.
And 4, step 4: and applying the real-time augmented virtual reality based on the intelligent mobile terminal to the display screen of the intelligent mobile terminal for displaying.
If the data processing capability of the processor of the mobile terminal is insufficient, the processing procedures of steps 1, 2 and 3 may be uploaded to a network server or a personal computer for processing through a mobile network or wifi, and then the processed data may be downloaded to the mobile intelligent terminal through the mobile network or wifi, so that the data processing capability of the system may be improved, as shown in the extended storage unit and the extended processor (such as the network server or the PC) in fig. 2.
And if the capacity of the storage unit of the mobile intelligent terminal is limited, uploading the data processing result to the extended storage unit through a mobile network or wifi, and improving the information storage capacity.
If the mobile intelligent terminals of a plurality of users are provided, the users can share AR data by using the networking function, so that the users can enjoy the application of the augmented virtual reality at any place and any time.
Claims (5)
1. A real-time augmented virtual reality method based on an intelligent mobile terminal is characterized by comprising the following steps:
acquiring three-dimensional depth information of an objective environment by using a depth information acquisition card;
obtaining basic information of an objective environment by using an RGB information acquisition card;
inputting the three-dimensional depth information and the basic information of the objective environment into a processor for processing, and creating a three-dimensional model of an objective world;
storing the processed result data, namely the three-dimensional model data of the objective world, in a storage unit;
the interface circuit is used for displaying the augmented virtual reality to realize human-computer interaction;
creating a three-dimensional model of an objective world includes the steps of:
information filtering: filtering the three-dimensional depth information and the basic information of the objective environment, namely the RGB + D image, simultaneously to remove image noise;
information tracking algorithm: estimating the next image based on all the denoised RGB + D information;
three-dimensional reconstruction based on RGB + D information: carrying out interpolation splicing on the minimum unit of the three-dimensional reconstruction by utilizing the estimation parameters of the three-dimensional world information and a typical triangular splicing method to obtain a final objective world three-dimensional model;
the simultaneous filtering of the RGB + D image is achieved by the following formula:

I′(x, y) = Σ_{(m,n)∈ε} w(m, n) · I(m, n)    (1)

wherein I′(x, y) is the RGB + D image after filtering and denoising; ε is the neighborhood of the image pixel (x, y); w(m, n) is the weight coefficient of the filter; I(m, n) is the RGB + D image containing noise; and m, n are respectively the coordinate values of each point in the neighborhood ε;
three-dimensional reconstruction based on the RGB + D information is performed by the following formula:

E(u) = ∫_{Ω_D} ( α · |∇u(p)| + V_π · |u(p) − π(p)| ) dp    (7)

wherein u(p) is the minimum unit of three-dimensional reconstruction; p is a pixel, p ∈ Ω_D; E(u) is the energy function, and u satisfies that E(u) is minimal; π(p) is the ground-plane estimate of the three-dimensional reconstruction of the intelligent mobile terminal, obtained with a simple low-pass filter, i.e. estimated from the RGB + D information; α and V_π are the estimated parameters of the energy function E(u); ∇u is the gradient of u; and Ω_D is the RGB + D information set.
2. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 1, characterized in that a classical Gaussian filter is used, i.e.

w(m, n) = (1 / (2πσ²)) · exp(−((m − x)² + (n − y)²) / (2σ²))    (2)

wherein (x, y) is the coordinate of each pixel point of the image, (m, n) ranges over the neighborhood ε adopted by the filter at the pixel point (x, y), and σ is the standard deviation of the Gaussian function.
3. The real-time augmented virtual reality method based on an intelligent mobile terminal according to claim 1, characterized in that the next image is estimated from all the denoised three-dimensional world information as follows, using maximum likelihood estimation, i.e.

ξ̂_{j,i} = argmin_{ξ} Σ_{p∈Ω_D} ‖ r_p²(p, ξ_{j,i}) / σ_{r_p}² ‖_δ    (3)

in the formula,

r_p(p, ξ_{j,i}) = I_i(p) − I_j(ω(p, D_i(p), ξ_{j,i}))    (4)

σ_{r_p}² = (∂r_p(p, ξ_{j,i}) / ∂D_i(p))² · V_i²(p)    (5)

wherein I_i is the current RGB + D image; I_j is the estimated new RGB + D image; ξ_{j,i} is the Lie group algebra operator for the current image and the estimated image; p is a pixel, p ∈ Ω_D; ω is the pose estimation model; D_i is the depth information acquired by the depth information acquisition card; V_i is the standard deviation of D_i; and ‖·‖_δ is a norm, defined as

‖s‖_δ = s² / (2δ) for |s| ≤ δ, and |s| − δ/2 otherwise    (6)

wherein δ is a parameter and s is a variable.
4. the real-time augmented virtual reality method based on the intelligent mobile terminal according to claim 1, characterized in that: further comprising the steps of:
if the data processing capacity of the processor of the mobile terminal is insufficient, the whole processing process is uploaded to a network server or a personal computer for processing through a mobile network or wifi, and then the processed data is downloaded to the mobile intelligent terminal through the mobile network or wifi.
5. The real-time augmented virtual reality method based on the intelligent mobile terminal according to claim 1, characterized in that: further comprising the steps of:
if a plurality of users use the mobile intelligent terminal, the users share the AR data by using the networking function, and the plurality of users enjoy the application of the augmented virtual reality at any place and any time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410688094.6A CN104392045B (en) | 2014-11-25 | 2014-11-25 | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104392045A CN104392045A (en) | 2015-03-04 |
CN104392045B true CN104392045B (en) | 2018-01-09 |
Family
ID=52609948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410688094.6A Active CN104392045B (en) | 2014-11-25 | 2014-11-25 | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104392045B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105093522B (en) * | 2015-07-08 | 2017-10-24 | 清华大学 | Binocular based on phase turns many mesh virtual view synthetic methods |
KR102511490B1 (en) * | 2015-08-18 | 2023-03-16 | 매직 립, 인코포레이티드 | Virtual and augmented reality systems and methods |
CN106095114A (en) * | 2016-06-29 | 2016-11-09 | 宁波市电力设计院有限公司 | Electric power industry based on VR technology expands engineering aid system and method for work thereof |
CN106355647A (en) * | 2016-08-25 | 2017-01-25 | 北京暴风魔镜科技有限公司 | Augmented reality system and method |
CN106371609A (en) * | 2016-09-21 | 2017-02-01 | 平越 | VR (virtual reality) entertainment system with time-length markers and method thereof |
CN106485782A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device that a kind of reality scene is shown in virtual scene |
CN108109207B (en) * | 2016-11-24 | 2021-11-05 | 深圳市豪恩安全科技有限公司 | Visual three-dimensional modeling method and system |
US10452133B2 (en) * | 2016-12-12 | 2019-10-22 | Microsoft Technology Licensing, Llc | Interacting with an environment using a parent device and at least one companion device |
CN110741327B (en) * | 2017-04-14 | 2023-06-23 | 广州千藤文化传播有限公司 | Mud toy system and method based on augmented reality and digital image processing |
CN107343192A (en) * | 2017-07-20 | 2017-11-10 | 武汉市陆刻科技有限公司 | A kind of 3D solids interpolation model and VR mobile terminal interaction methods and system |
CN107441706A (en) * | 2017-08-17 | 2017-12-08 | 安徽迪万科技有限公司 | The sense of reality scene of game constructing system that virtual reality is combined with oblique photograph |
CN109922331B (en) * | 2019-01-15 | 2021-12-07 | 浙江舜宇光学有限公司 | Image processing method and device |
CN110266939B (en) * | 2019-05-27 | 2022-04-22 | 联想(上海)信息技术有限公司 | Display method, electronic device, and storage medium |
CN110267029A (en) * | 2019-07-22 | 2019-09-20 | 广州铭维软件有限公司 | A kind of long-range holographic personage's display technology based on AR glasses |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103035135A (en) * | 2012-11-27 | 2013-04-10 | 北京航空航天大学 | Children cognitive system based on augment reality technology and cognitive method |
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9946076B2 (en) * | 2010-10-04 | 2018-04-17 | Gerard Dirk Smits | System and method for 3-D projection and enhancements for interactivity |
- 2014
- 2014-11-25 CN CN201410688094.6A patent/CN104392045B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103035135A (en) * | 2012-11-27 | 2013-04-10 | 北京航空航天大学 | Children cognitive system based on augment reality technology and cognitive method |
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
Non-Patent Citations (1)
Title |
---|
Research on Real-time 3D Reconstruction and Filtering Algorithms Based on Kinect Depth Information; Chen Xiaoming; China Master's Theses Full-text Database, Information Science and Technology; 2013-07-15; pp. I138-1107 * |
Also Published As
Publication number | Publication date |
---|---|
CN104392045A (en) | 2015-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104392045B (en) | A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal | |
US9551871B2 (en) | Virtual light in augmented reality | |
CN100594519C (en) | Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera | |
CN107016704A (en) | A kind of virtual reality implementation method based on augmented reality | |
CN110866977B (en) | Augmented reality processing method, device, system, storage medium and electronic equipment | |
CN106355153A (en) | Virtual object display method, device and system based on augmented reality | |
CN110533780B (en) | Image processing method and device, equipment and storage medium thereof | |
CN111833458B (en) | Image display method and device, equipment and computer readable storage medium | |
CN108509887A (en) | A kind of acquisition ambient lighting information approach, device and electronic equipment | |
Girbacia et al. | Virtual restoration of deteriorated religious heritage objects using augmented reality technologies | |
CN108230384A (en) | Picture depth computational methods, device, storage medium and electronic equipment | |
US20180239514A1 (en) | Interactive 3d map with vibrant street view | |
CN106683163B (en) | Imaging method and system for video monitoring | |
US10296080B2 (en) | Systems and methods to simulate user presence in a real-world three-dimensional space | |
JP2023504608A (en) | Display method, device, device, medium and program in augmented reality scene | |
EP3533218A1 (en) | Simulating depth of field | |
US20110242271A1 (en) | Synthesizing Panoramic Three-Dimensional Images | |
CN116057577A (en) | Map for augmented reality | |
CN110096144B (en) | Interactive holographic projection method and system based on three-dimensional reconstruction | |
CN110390712B (en) | Image rendering method and device, and three-dimensional image construction method and device | |
Saggio et al. | Augmented reality for restoration/reconstruction of artefacts with artistic or historical value | |
CN116612256B (en) | NeRF-based real-time remote three-dimensional live-action model browsing method | |
CN112070901A (en) | AR scene construction method and device for garden, storage medium and terminal | |
Song et al. | Landscape Fusion Method Based on Augmented Reality and Multiview Reconstruction | |
Wu | Research on the application of computer virtual reality technology in museum cultural relics exhibition hall |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |