CN103777757A - System for placing a virtual object in augmented reality in combination with saliency detection - Google Patents
System for placing a virtual object in augmented reality in combination with saliency detection
- Publication number
- CN103777757A CN103777757A CN201410018560.XA CN201410018560A CN103777757A CN 103777757 A CN103777757 A CN 103777757A CN 201410018560 A CN201410018560 A CN 201410018560A CN 103777757 A CN103777757 A CN 103777757A
- Authority
- CN
- China
- Prior art keywords
- module
- image
- virtual object
- virtual
- saliency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a system for placing a virtual object in augmented reality in combination with saliency detection. The system comprises an image frame acquisition module, a 3D (three-dimensional) object generation module, a display module, a tracking identification module and an image processing module. The image frame acquisition module is used for reading in a static image; the 3D object generation module is used for generating the virtual object; the display module is used for displaying the virtual-real combined image; the tracking identification module is used for performing saliency detection on each frame of image collected by the image frame acquisition module and quickly generating a saliency map of each image frame; and the image processing module is respectively connected with the image frame acquisition module, the tracking identification module, the 3D object generation module and the display module, and is used for placing the virtual object in the real scene on the basis of the saliency detection. The system has the advantages that, for an unfamiliar scene or image, the size and position of the virtual object can be adjusted automatically; the visual attention mechanism of the human eye is fully considered, and the generated augmented reality scene changes dynamically, so that the defects of existing virtual object placement methods are overcome and the visual experience of the observer is improved.
Description
Technical Field
The invention relates to a system for placing virtual objects in augmented reality, and more particularly to a system for placing virtual objects in augmented reality in combination with saliency detection.
Background
The invention involves two image-processing technologies: saliency detection and augmented reality.
Saliency detection technology:
The human visual system's attention to and understanding of a scene is determined by a range of potentially different factors, among which the saliency of objects in an image is an important one. In image analysis and understanding, the salient region of an image is usually only a small portion of it, while the background occupies the larger part. The salient region can be understood as the main object in the image: the region on which human vision focuses its attention and which arouses a viewer's interest within a short time. Saliency detection uses a computer to analyze an image, simulate this attention mechanism of the human eye, and compute and extract the salient region of the image. In the late 1990s, image salient-feature analysis based on biological perception began to rise and became a research focus in the field of biological visual perception. Such methods start from the human visual attention mechanism, combine theoretical knowledge from psychology and physiology, and simulate the function of the human eye to build an image saliency extraction model; they are driven by biological and physiological mechanisms and are based on the acquisition of various early visual features. The most representative algorithm is the Itti algorithm.
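As a concrete illustration of this step, the short sketch below computes a saliency map for a single image. The patent relies on the Itti model; the OpenCV spectral-residual detector used here (from the opencv-contrib-python package) is only a readily available stand-in, and the file name `scene.jpg` is hypothetical.

```python
import cv2
import numpy as np

def compute_saliency_map(frame_bgr):
    """Return a per-pixel saliency map in [0, 1], same height/width as the input frame."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(frame_bgr)
    if not ok:
        raise RuntimeError("saliency computation failed")
    return saliency_map.astype(np.float32)

if __name__ == "__main__":
    frame = cv2.imread("scene.jpg")                      # hypothetical input image
    if frame is None:
        raise SystemExit("scene.jpg not found")
    smap = compute_saliency_map(frame)
    cv2.imwrite("saliency_map.png", (smap * 255).astype(np.uint8))
```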
Augmented reality technology:
Augmented reality (AR) is a new technology developed on the basis of virtual reality and is also called mixed reality. It uses information provided by a computer system to enhance the user's perception of the real world: virtual information is applied to the real world, and computer-generated virtual objects, scenes or system prompts are superimposed onto the real scene, thereby augmenting reality. Augmented reality generates virtual objects that do not exist in the real environment by means of computer graphics and visualization technology, places them accurately in the real environment through sensing technology, integrates them with the real environment by means of a display device, and presents to the user a new environment with a realistic sensory effect. Augmented reality systems therefore have the characteristics of virtual-real combination, real-time interaction and three-dimensional registration.
Augmented reality is currently applied in mobile phone applications, which let users see the real world and virtual objects superimposed on it at the same time; it is a system that combines a real environment with a virtual one.
Nowadays, augmented reality technology has gradually been applied in fields such as the military, medicine, architecture, education, engineering, film and television, and entertainment. In short, augmented reality applies virtual information to the real world, superimposes computer-generated virtual objects, scenes or system prompts onto the real scene, integrates the virtual objects and the real environment into a whole by means of a display device, and presents to the user a new environment with a realistic sensory effect, thereby augmenting reality.
Existing augmented reality technology and related applications mostly adopt one of the two methods described below when placing a virtual object in a real scene:
First method: the virtual object is placed at a predetermined specific target position in the scene, for example on the hands of a person in the scene to be augmented or on a fixed pattern appearing in the scene.
Second method: the virtual object is placed at a fixed location in the display window, for example a virtual prompt or a 3D model placed in the upper right corner of the window.
The two methods each have suitable application fields and work well in certain special situations, but in other situations where augmented reality is needed they each have defects. With the first method, if the preset target object or pattern is absent from the real scene, or the detected target or pattern is not clear enough, the virtual object will not appear in the real scene when displayed. Moreover, when a preset target in the real scene, such as a person's hand, must be detected, the detection and recognition technology is complex; the computation can be time-consuming, which affects real-time performance, and when the preset target changes, false detections or missed detections occur, degrading the augmented reality effect. With the second method, the virtual object sits at a fixed place in the display window: on the one hand it may block important content in the real scene and impair observation, and on the other hand, because its position never changes, the virtual object does not follow the real scene when the scene changes, so there is no virtual-real interaction and no convincing combination with the real scene.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a system for placing virtual objects in augmented reality that fully considers the visual attention mechanism of the human eye, adopts a relatively fast saliency detection method, and automatically adjusts and places the virtual objects.
The technical solution adopted by the invention is as follows: a system for placing virtual objects in augmented reality in combination with saliency detection comprises an image frame acquisition module for reading in static images, a 3D object generation module for generating virtual objects and a display module for displaying virtual-real combined images, and is further provided with:
a tracking identification module, which performs saliency detection on each frame of image acquired by the image frame acquisition module and quickly generates a saliency map of each image frame; and
an image processing module, which is respectively connected with the image frame acquisition module, the tracking identification module, the 3D object generation module and the display module, and which completes the saliency-based placement of the virtual object in the real scene.
The saliency detection performed by the tracking identification module is realized with the Itti saliency detection model.
The processing steps of the image processing module are:
1) first, calculating the proportion of the number of pixels in the salient region detected by the tracking identification module to the number of pixels in the whole image;
2) adjusting the size of the virtual model generated by the 3D object generation module according to the calculated pixel proportion;
3) placing the virtual object, with reference to the position of the salient region detected by the tracking identification module, at the position in the image acquired by the image frame acquisition module that corresponds to the salient region;
4) sending each frame of image with the virtual object placed to the display module for display.
When the display module displays, if the virtual object is intended to attract the greatest attention of the observer, the image processing module resizes the virtual object to the same size as the salient region, superimposes it on the position of the salient region in the real scene, and then sends the result to the display module for display.
When the display module displays, if the virtual object is intended not to affect the main content of the image, the virtual object is placed beside the salient region, or at a certain distance from it, before being sent to the display module for display.
During the processing of the image processing module, when the size or position of the salient region detected by the tracking identification module changes, the image processing module repeats steps 1) to 4) according to the change in the salient region and dynamically adjusts the size and position of the virtual object, thereby achieving a real-time virtual-real combination effect.
By combining with a saliency detection method, the system for placing virtual objects in augmented reality can automatically adjust the size and position of a virtual object for an unfamiliar scene or image. It fully considers the visual attention mechanism of the human eye, so the generated augmented reality scene changes dynamically; the defects of current virtual object placement methods are overcome and the visual experience of the observer is improved. The method is particularly advantageous for advertisement placement: with different placement strategies, the placed virtual advertisement model can either attract the greatest attention of the observer or be placed so that it does not affect the main content of the scene, while still achieving the promotional effect. Furthermore, the strategy for placing 3D models is fully automatic and requires no human intervention.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention.
In the drawing:
1: image frame acquisition module; 2: tracking identification module;
3: 3D object generation module; 4: display module;
5: image processing module.
Detailed Description
A system for placing virtual objects in augmented reality in conjunction with saliency detection according to the present invention is described in detail with reference to the following embodiments and accompanying drawings.
The invention combines the existing saliency detection and augmented reality technologies into a system for placing virtual objects in augmented reality: in a given scene or image that requires augmented reality, the human visual mechanism is first simulated to perform saliency detection; then, taking the detected salient region as a reference, the virtual object is automatically placed and adjusted according to the specific virtual-real combined effect to be achieved, so that the augmented reality effect conforms to the attention mechanism of the human eye. That is, the invention detects the salient region of an input image with a saliency detection method and then adjusts the size and placement position of the virtual model based on the detected salient region, thereby achieving the final augmented reality effect.
As shown in fig. 1, a system for placing a virtual object in augmented reality in combination with saliency detection according to the present invention includes:
the image frame acquisition module 1 is used for reading in a static image, providing a natural simple scene, is an input end of the whole system, can acquire an image by using a common camera, acquires the image or the scene which needs augmented reality, and transmits the acquired data of one frame of image into the system for further processing.
The tracking identification module 2 adopts a simple, fast and effective saliency detection method. Its main task is to perform saliency detection on the image: it is connected with the image frame acquisition module 1, performs saliency detection on each frame of image acquired by that module, and quickly generates a saliency map of each image frame. In other words, the salient region in the input image is identified with an image processing method, and the centre position and size of the salient region are accurately located. The saliency detection in this module is realized with the Itti saliency detection model, which is by now a very mature technology. The image acquired by the image frame acquisition module 1 is fed into the Itti saliency detection model, which quickly generates a saliency map of the frame; the map yields the centre position of the salient region as well as its size and shape.
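A minimal sketch of this localization step is given below. It assumes a saliency map normalized to [0, 1] (such as the one produced in the earlier sketch), thresholds it, and takes the largest connected component as the salient region; the 0.5 threshold is an illustrative choice, not a value fixed by the patent.

```python
import cv2
import numpy as np

def locate_salient_region(saliency_map, thresh=0.5):
    """Return ((cx, cy), (x, y, w, h), pixel_count) for the dominant salient region, or None."""
    binary = (saliency_map >= thresh * saliency_map.max()).astype(np.uint8)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if num < 2:                                              # label 0 is the background
        return None
    idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))    # largest foreground blob
    x, y, w, h, area = (int(v) for v in stats[idx])
    cx, cy = (int(round(c)) for c in centroids[idx])
    return (cx, cy), (x, y, w, h), area
```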
The 3D object generation module 3 is mainly used for importing a 3D model and generating the virtual object that will later be superimposed on the static image frame. The virtual objects to be superimposed on the real scene, such as a 3D virtual mascot model required for an advertisement, a trademark pattern, or other virtual objects such as text, are generated in the 3D object generation module 3. The 3D model can be created with common software such as 3ds Max.
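The sketch below stands in for this module. Rather than rendering a true 3D model, it loads a pre-rendered RGBA image (the hypothetical file `mascot.png`) as the virtual object to be composited, which is sufficient for the 2D overlay performed later; a real deployment would render the 3ds Max model to such an image layer each frame.

```python
import cv2

def load_virtual_object(path="mascot.png"):
    """Load a pre-rendered virtual object as a BGRA image (alpha channel required)."""
    obj = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if obj is None or obj.ndim != 3 or obj.shape[2] != 4:
        raise ValueError("expected a 4-channel (BGRA) rendering of the virtual object")
    return obj
```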
The display module 4 is used for displaying the virtual-real combined image. It mainly uses a computer display to show the image frames with the virtual object superimposed, presenting the virtual-real combined effect to the viewer.
The image processing module 5 is respectively connected with the image frame acquisition module 1, the tracking identification module 2, the 3D object generation module 3 and the display module 4, and is used for superimposing the augmented reality virtual object, i.e. for completing the saliency-based placement of the virtual object in the real scene by superimposing the generated virtual model on the static image. The processing steps of the image processing module 5 are as follows (a code sketch of these steps appears after this block):
1) first, calculating the proportion of the number of pixels in the salient region detected by the tracking identification module 2 to the number of pixels in the whole image;
2) adjusting the size of the virtual model generated by the 3D object generation module 3 according to the calculated pixel proportion;
3) placing the virtual object, with reference to the position of the salient region detected by the tracking identification module 2, at the position in the image acquired by the image frame acquisition module 1 that corresponds to the salient region;
4) sending each frame of image with the virtual object placed to the display module 4 for display.
Wherein,
when the display module 4 displays, if the virtual object is intended to attract the greatest attention of the observer, the image processing module 5 resizes the virtual object to the same size as the salient region, superimposes it on the position of the salient region in the real scene, and then sends the result to the display module 4 for display.
When the display module 4 displays, if the virtual object is intended not to affect the main content of the image, the virtual object is placed beside the salient region, or at a certain distance from it, before being sent to the display module 4 for display.
During the processing of the image processing module 5, when the size or position of the salient region detected by the tracking and identifying module 2 changes, the image processing module 5 repeats steps 1) to 4) according to the change in the salient region and dynamically adjusts the size and position of the virtual object, thereby achieving a real-time virtual-real combination effect.
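The sketch referred to above strings steps 1) to 4) together for a single frame. It consumes the region tuple produced by the earlier localization sketch and a BGRA virtual object from the loader sketch; the area-matching scaling rule and the "place beside the region" offset are illustrative assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

def place_virtual_object(frame, obj_bgra, region, strategy="attention"):
    """Steps 1)-4): scale the virtual object to the salient region and composite it onto the frame."""
    (cx, cy), (x, y, w, h), area = region
    h_img, w_img = frame.shape[:2]

    # step 1: proportion of salient-region pixels in the whole image
    ratio = area / float(h_img * w_img)

    # step 2: resize the object so that its pixel area tracks that proportion
    target_area = ratio * h_img * w_img
    scale = np.sqrt(target_area / float(obj_bgra.shape[0] * obj_bgra.shape[1]))
    obj = cv2.resize(obj_bgra, None, fx=scale, fy=scale)

    # step 3: anchor on the salient region ("attention") or just beside it ("aside")
    if strategy == "attention":
        ax, ay = cx, cy
    else:
        ax, ay = x + w + obj.shape[1] // 2, cy

    # step 4: alpha-blend the object onto the frame around the anchor point
    oh, ow = obj.shape[:2]
    x0, y0 = max(ax - ow // 2, 0), max(ay - oh // 2, 0)
    x1, y1 = min(x0 + ow, w_img), min(y0 + oh, h_img)
    patch = obj[: y1 - y0, : x1 - x0].astype(np.float32)
    alpha = patch[:, :, 3:4] / 255.0
    roi = frame[y0:y1, x0:x1].astype(np.float32)
    frame[y0:y1, x0:x1] = (alpha * patch[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)
    return frame
```

Calling this function anew on every frame with a freshly detected region gives the dynamic adjustment described above: when the salient region moves or changes size, the overlaid virtual object follows it.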
In some application scenarios, the system for placing virtual objects in augmented reality in combination with saliency detection clearly overcomes the defects of the two prior-art placement methods. The distinguishing feature of the invention is that the virtual object is placed with the aid of saliency detection: when the augmented reality system needs to place a virtual object in a real scene or picture to produce an augmented reality effect, the salient region of the image or scene is first detected with a saliency detection technique; then, taking the detected salient region as a reference, the virtual object is placed at a position relative to the salient region according to the effect the augmented reality is required to achieve, and its size is adjusted according to the size of the salient region. The advantages of the invention in certain augmented reality applications are illustrated below.
One field to which the invention is very well suited is the placement of virtual advertisement models in a real scene. Assume the main content of a scene is a person, the background is relatively plain, and a virtual 3D model of a certain brand's mascot must be added to the scene by the augmented reality system to achieve the best advertising effect. The processing steps of the system are as follows: first, saliency detection is performed on the scene, and the detected salient region is the area occupied by the person; then the size of the virtual mascot is adjusted according to the proportion of that area in the whole scene; finally the 3D model of the mascot is placed at the position of the person. The virtual mascot model is thus located at the most salient position in the scene, i.e. the position most likely to attract the viewer's eye, so the advertisement model attracts the viewer's attention more effectively. In summary, this placement method achieves the best advertising effect: the placed virtual advertisement model attracts the greatest attention of viewers and blends naturally with the scene, giving both a strong advertising effect and an experience of virtual-real interaction.
In addition to the method described above of placing the virtual advertisement model in the salient region, there are other placement strategies suited to other situations and requirements. For example, suppose the scene to which the virtual model is to be added is unchanged, and the main content, i.e. the salient region, is still the person; but this time the person is regarded as important content of the real scene, and the virtual mascot model must not block the person or spoil the content of the real scene. In this case the salient region of the scene is again detected, the detected salient region is the area occupied by the person, and the virtual mascot is then placed beside that region or at another position within the non-salient region, so that the virtual model does not affect the main content of the scene. The important content of a scene or picture is often exactly its salient region, and this placement strategy can be adopted whenever the important content must not be blocked while a virtual mascot model still needs to be placed.
In summary, the system for placing virtual objects in augmented reality in combination with saliency detection fully considers the visual attention mechanism of the human eye, adopts a relatively fast saliency detection method, and automatically adjusts and places the virtual objects, so that in application fields of augmented reality such as the placement of virtual advertisement models, viewers obtain the best virtual-real combined experience.
Claims (6)
1. A system for placing virtual objects in augmented reality in combination with saliency detection, comprising an image frame acquisition module (1) for reading in still images, a 3D object generation module (3) for generating virtual objects and a display module (4) for displaying virtual-real combined images, characterized in that it is further provided with:
a tracking identification module (2), which performs saliency detection on each frame of image acquired by the image frame acquisition module (1) and quickly generates a saliency map of each image frame; and
an image processing module (5), which is respectively connected with the image frame acquisition module (1), the tracking identification module (2), the 3D object generation module (3) and the display module (4), and which completes the saliency-based placement of the virtual object in the real scene.
2. The system for placing virtual objects in augmented reality in combination with saliency detection as claimed in claim 1, characterized in that said tracking identification module (2) performs saliency detection using the Itti saliency detection model.
3. The system for placing virtual objects in augmented reality in combination with saliency detection as claimed in claim 1, characterized in that the processing steps of said image processing module (5) are:
1) first, calculating the proportion of the number of pixels in the salient region detected by the tracking identification module (2) to the number of pixels in the whole image;
2) adjusting the size of the virtual model generated by the 3D object generation module (3) according to the calculated pixel proportion;
3) placing the virtual object, with reference to the position of the salient region detected by the tracking identification module (2), at the position in the image acquired by the image frame acquisition module (1) that corresponds to the salient region;
4) sending each frame of image with the virtual object placed to the display module (4) for display.
4. The system for placing virtual objects in augmented reality in combination with saliency detection as claimed in claim 3, characterized in that when the display module (4) displays, if the virtual object is intended to attract the greatest attention of the observer, the image processing module (5) resizes the virtual object to the same size as the salient region, superimposes it on the position of the salient region in the real scene and sends the result to the display module (4) for display.
5. The system for placing virtual objects in augmented reality in combination with saliency detection as claimed in claim 3, characterized in that when the display module (4) displays, if the virtual object is intended not to affect the main content of the image, the virtual object is placed beside the salient region or at a certain distance from it before being sent to the display module (4) for display.
6. The system for placing virtual objects in augmented reality in combination with saliency detection as claimed in claim 3, characterized in that during the processing of the image processing module (5), when the size or position of the salient region detected by the tracking recognition module (2) changes, the image processing module (5) repeats steps 1) to 4) according to the change in the salient region and dynamically adjusts the size and position of the virtual object, thereby achieving a real-time virtual-real combination effect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410018560.XA CN103777757B (en) | 2014-01-15 | 2014-01-15 | System for placing virtual objects in augmented reality in combination with saliency detection
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410018560.XA CN103777757B (en) | 2014-01-15 | 2014-01-15 | System for placing virtual objects in augmented reality in combination with saliency detection
Publications (2)
Publication Number | Publication Date |
---|---|
CN103777757A true CN103777757A (en) | 2014-05-07 |
CN103777757B CN103777757B (en) | 2016-08-31 |
Family
ID=50570100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410018560.XA Expired - Fee Related CN103777757B (en) | 2014-01-15 | 2014-01-15 | System for placing virtual objects in augmented reality in combination with saliency detection
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103777757B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537689A (en) * | 2014-12-25 | 2015-04-22 | 中国科学院自动化研究所 | Target tracking method based on local contrast prominent union features |
CN105787993A (en) * | 2014-12-09 | 2016-07-20 | 财团法人工业技术研究院 | Augmented reality method and system |
WO2017041731A1 (en) * | 2015-09-11 | 2017-03-16 | Huawei Technologies Co., Ltd. | Markerless multi-user multi-object augmented reality on mobile devices |
CN107818596A (en) * | 2016-09-14 | 2018-03-20 | 阿里巴巴集团控股有限公司 | A kind of scenario parameters determine method, apparatus and electronic equipment |
CN107895312A (en) * | 2017-12-08 | 2018-04-10 | 快创科技(大连)有限公司 | A kind of shopping online experiencing system based on AR technologies |
CN108762602A (en) * | 2018-04-03 | 2018-11-06 | 维沃移动通信有限公司 | A kind of method that image is shown and terminal device |
US10325406B2 (en) | 2016-11-11 | 2019-06-18 | Industrial Technology Research Institute | Image synthesis method and image synthesis device for virtual object |
CN109933194A (en) * | 2019-03-05 | 2019-06-25 | 郑州万特电气股份有限公司 | To the exchange method of virtual target object in a kind of mixed reality environment |
WO2019158129A1 (en) * | 2018-02-13 | 2019-08-22 | 中兴通讯股份有限公司 | Method and device for augmented reality visual element display |
CN110415005A (en) * | 2018-04-27 | 2019-11-05 | 华为技术有限公司 | Determine the method, computer equipment and storage medium of advertisement insertion position |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101923809A (en) * | 2010-02-12 | 2010-12-22 | 黄振强 | Interactive augment reality jukebox |
US20110026808A1 (en) * | 2009-07-06 | 2011-02-03 | Samsung Electronics Co., Ltd. | Apparatus, method and computer-readable medium generating depth map |
CN103336947A (en) * | 2013-06-21 | 2013-10-02 | 上海交通大学 | Method for identifying infrared movement small target based on significance and structure |
CN103366610A (en) * | 2013-07-03 | 2013-10-23 | 熊剑明 | Augmented-reality-based three-dimensional interactive learning system and method |
- 2014-01-15: CN CN201410018560.XA, patent CN103777757B (en), status: not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110026808A1 (en) * | 2009-07-06 | 2011-02-03 | Samsung Electronics Co., Ltd. | Apparatus, method and computer-readable medium generating depth map |
CN101923809A (en) * | 2010-02-12 | 2010-12-22 | 黄振强 | Interactive augment reality jukebox |
CN103336947A (en) * | 2013-06-21 | 2013-10-02 | 上海交通大学 | Method for identifying infrared movement small target based on significance and structure |
CN103366610A (en) * | 2013-07-03 | 2013-10-23 | 熊剑明 | Augmented-reality-based three-dimensional interactive learning system and method |
Non-Patent Citations (1)
Title |
---|
Zhong Huijuan et al., "Research on Augmented Reality Systems and Their Key Technologies", Computer Simulation (《计算机仿真》) *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI628613B (en) * | 2014-12-09 | 2018-07-01 | 財團法人工業技術研究院 | Augmented reality method and system |
CN105787993A (en) * | 2014-12-09 | 2016-07-20 | 财团法人工业技术研究院 | Augmented reality method and system |
CN105787993B (en) * | 2014-12-09 | 2018-12-07 | 财团法人工业技术研究院 | Augmented reality method and system |
CN104537689B (en) * | 2014-12-25 | 2017-08-25 | 中国科学院自动化研究所 | Method for tracking target based on local contrast conspicuousness union feature |
CN104537689A (en) * | 2014-12-25 | 2015-04-22 | 中国科学院自动化研究所 | Target tracking method based on local contrast prominent union features |
US9928656B2 (en) | 2015-09-11 | 2018-03-27 | Futurewei Technologies, Inc. | Markerless multi-user, multi-object augmented reality on mobile devices |
WO2017041731A1 (en) * | 2015-09-11 | 2017-03-16 | Huawei Technologies Co., Ltd. | Markerless multi-user multi-object augmented reality on mobile devices |
CN107818596A (en) * | 2016-09-14 | 2018-03-20 | 阿里巴巴集团控股有限公司 | A kind of scenario parameters determine method, apparatus and electronic equipment |
CN107818596B (en) * | 2016-09-14 | 2021-08-03 | 阿里巴巴集团控股有限公司 | Scene parameter determination method and device and electronic equipment |
US10325406B2 (en) | 2016-11-11 | 2019-06-18 | Industrial Technology Research Institute | Image synthesis method and image synthesis device for virtual object |
CN107895312A (en) * | 2017-12-08 | 2018-04-10 | 快创科技(大连)有限公司 | A kind of shopping online experiencing system based on AR technologies |
WO2019158129A1 (en) * | 2018-02-13 | 2019-08-22 | 中兴通讯股份有限公司 | Method and device for augmented reality visual element display |
CN108762602A (en) * | 2018-04-03 | 2018-11-06 | 维沃移动通信有限公司 | A kind of method that image is shown and terminal device |
CN110415005A (en) * | 2018-04-27 | 2019-11-05 | 华为技术有限公司 | Determine the method, computer equipment and storage medium of advertisement insertion position |
CN109933194A (en) * | 2019-03-05 | 2019-06-25 | 郑州万特电气股份有限公司 | To the exchange method of virtual target object in a kind of mixed reality environment |
Also Published As
Publication number | Publication date |
---|---|
CN103777757B (en) | 2016-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103777757B (en) | System for placing virtual objects in augmented reality in combination with saliency detection | |
US10089794B2 (en) | System and method for defining an augmented reality view in a specific location | |
CN106355153B (en) | A kind of virtual objects display methods, device and system based on augmented reality | |
CN106157359B (en) | Design method of virtual scene experience system | |
Sandor et al. | An augmented reality x-ray system based on visual saliency | |
US20190371072A1 (en) | Static occluder | |
KR101822471B1 (en) | Virtual Reality System using of Mixed reality, and thereof implementation method | |
US10235806B2 (en) | Depth and chroma information based coalescence of real world and virtual world images | |
JP2022002141A (en) | Video display device, video projection device, and methods and programs thereof | |
EP2738743A3 (en) | Generating and reproducing augmented reality contents | |
KR20140082610A (en) | Method and apaaratus for augmented exhibition contents in portable terminal | |
CN105139349A (en) | Virtual reality display method and system | |
CN108833877B (en) | Image processing method and device, computer device and readable storage medium | |
KR20130089649A (en) | Method and arrangement for censoring content in three-dimensional images | |
WO2018036113A1 (en) | Augmented reality method and system | |
CN105611267B (en) | Merging of real world and virtual world images based on depth and chrominance information | |
JP2020513704A (en) | Video data processing method, apparatus and equipment | |
CN107393018A (en) | A kind of method that the superposition of real-time virtual image is realized using Kinect | |
JP2013109469A (en) | Apparatus, method, and program for image processing | |
KR20140126529A (en) | Physical Movement of Object on Reality-Augmented Reality Interaction System and Implementation Method for Electronic book | |
CN105487660A (en) | Immersion type stage performance interaction method and system based on virtual reality technology | |
JP2021018575A5 (en) | Image distribution system and image distribution method | |
CN106774869B (en) | Method and device for realizing virtual reality and virtual reality helmet | |
WO2017147826A1 (en) | Image processing method for use in smart device, and device | |
KR20150116032A (en) | Method of providing augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160831 Termination date: 20210115 |