CN103543827B - Implementation method of an immersive outdoor-activity interaction platform based on a single camera - Google Patents

Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Info

Publication number
CN103543827B
CN103543827B (application CN201310479754.5A)
Authority
CN
China
Prior art keywords
virtual
outdoor
real
user
camera
Prior art date
Legal status
Active
Application number
CN201310479754.5A
Other languages
Chinese (zh)
Other versions
CN103543827A (en)
Inventor
徐坚 (Xu Jian)
Current Assignee
Suzhou Miaomi Intelligent Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201310479754.5A priority Critical patent/CN103543827B/en
Publication of CN103543827A publication Critical patent/CN103543827A/en
Application granted granted Critical
Publication of CN103543827B publication Critical patent/CN103543827B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An implementation method of an immersive outdoor-activity interaction platform based on a single camera. Building on mixed-reality technology combined with computer graphics, the method provides human-computer interaction on an outdoor large screen: through high-definition camera capture and skeleton reconstruction, the user's skeleton is computed in real time, and when a designated region of interest reaches a trigger area, preset augmented-reality information is triggered automatically; the captured real image of the user's pose is superimposed with the enhancing two-dimensional pictures or three-dimensional virtual models and shown to the user on an outdoor high-definition LED large screen. The user thus sees on the large screen an interaction effect combining the real and the virtual.

Description

Implementation method of an immersive outdoor-activity interaction platform based on a single camera
Technical field
The invention belongs to the field of computer image processing and relates to mixed-reality technology, in particular to a mixed-reality-based human-computer interaction method for outdoor large screens.
Background art
Most current mainstream mixed-reality applications are developed for screens smaller than 30 inches, whereas outdoor events generally use high-definition LED screens of at least 100 inches for display. Moreover, current limb-interaction approaches either rely purely on image-based computation or combine it with infrared depth measurement; the best-known device of the latter kind is Microsoft's Kinect, which requires the user to move within a certain range to yield reasonably accurate measurements. Because that device was designed for television screens smaller than 60 inches, its optimal working distance is 2 to 3 meters from the screen, which clearly cannot satisfy an outdoor screen of at least 100 inches. The invention instead computes with a purely image-based skeleton reconstruction method and a high-definition camera, meeting the requirements of outdoor large screens and placing the user's activity space at a comfortable viewing distance.
Summary of the invention
To address these problems in the prior art, the invention combines computer graphics and mixed-reality technology to provide a human-computer interaction method for outdoor large screens: through high-definition camera capture and skeleton reconstruction, the user's skeleton is computed in real time, and the result drives the human-computer interaction. The user thus sees on the large screen an interaction effect combining the real and the virtual.
The technical scheme of the invention is an implementation method of an immersive outdoor-activity interaction platform based on a single camera: based on mixed-reality technology, user pose images captured by the single camera in the real world are input to a mixed-reality server, superimposed with enhancing content preset in the server, and shown to the user through an outdoor large screen; the enhancing content comprises two-dimensional pictures and three-dimensional virtual models, and the user interacts with the preset enhancing content through different postures. The implementation method of the interaction platform specifically comprises the following steps:
First, registration training is performed, in two stages:
1) Preparatory stage: the outdoor real world is registered with the virtual world of the mixed-reality server:
11) A reference line perpendicular to the outdoor large screen is drawn on the ground, its intersection with the screen lying at the midpoint of the screen's base;
12) On this line, a user-interaction anchor point is chosen at reference distance L1 from the outdoor large screen, where L1 is greater than the height of the screen;
13) The real camera is placed above the user-interaction anchor point at a height of more than 1.5 meters, with its lens level and facing the outdoor large screen, so that it images the entire screen; the interaction scene corresponding to the real world is built in the virtual world, comprising a virtual outdoor large screen of the same size and position as the real screen, a virtual user-interaction anchor point, and a virtual camera at reference distance L1' from the virtual screen, where L1' equals L1 and the virtual camera images the entire virtual screen; the virtual camera's picture is superimposed on the picture captured by the real camera and, with the real outdoor large screen as the reference object, the size of the real screen is registered against the size of the virtual screen at least once;
14) In the real world, the real camera is moved from the user-interaction anchor point to its working position beside or above the outdoor large screen, and a scale bar L2 is erected at the anchor point, perpendicular to the ground and therefore parallel to the screen, with a height of at least 1 meter;
15) The virtual camera is moved to the position in the virtual world corresponding to the real camera's working position of step 14), a virtual scale bar L2' corresponding to the real one is erected at the virtual anchor point, perpendicular to the ground, with a height of at least 1 meter and equal to that of L2, and L2' is registered against L2 with the real scale bar as the reference object;
16) The position and angle of the virtual-world scene at this point are saved;
2) Supplementary registration stage:
21) In the real world, with the real camera at its working position beside or above the outdoor large screen, the user's interaction position is chosen according to the camera's field of view and a trigger icon is attached there; the area around the trigger icon serves as the user's activity range, and motion outside that range is not recognized;
22) In the virtual world, a virtual trigger icon is set at the position corresponding to the real one and registered with the real trigger icon as the reference object; the scene position and angle adjustments obtained from this registration are merged with the scene position and angle saved after the preparatory-stage registration 1), yielding the optimal scene position and angle;
After registration training is complete, the real-time tracking stage begins:
3) Real-time tracking stage:
31) When the user stands on the trigger icon, the real camera detects that the icon is occluded; the system, automatically or manually, triggers the related information in the virtual world and superimposes it on the image captured from the real world, producing the mixed-reality effect;
32) From the images captured by the image-capture device, the user's three-dimensional motion skeleton is reconstructed; once the user's pose information is obtained, the preset enhancing information interacts with the user according to that pose as required.
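The occlusion test of step 31) can be reduced to comparing the trigger icon's image region against a reference captured at registration time. A minimal sketch under that assumption — the function name, grayscale representation, and thresholds are illustrative, not the patent's implementation:

```python
def icon_occluded(live_region, reference_region, pixel_tol=30, area_threshold=0.25):
    """Report the trigger icon as occluded when a large enough fraction of its
    pixels differ from the reference region captured during registration.

    live_region / reference_region: flat sequences of grayscale values (0-255).
    """
    if not reference_region or len(live_region) != len(reference_region):
        raise ValueError("regions must be the same non-empty size")
    # Count pixels that moved by more than the per-pixel tolerance.
    changed = sum(1 for a, b in zip(live_region, reference_region)
                  if abs(a - b) > pixel_tol)
    return changed / len(reference_region) > area_threshold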
The registration in step 13) between the size of the real outdoor screen and the size of the virtual outdoor screen is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed-reality image, which is shown on the outdoor large screen; the displayed image of the virtual world is then scaled until the virtual outdoor screen coincides with the real outdoor screen, and the virtual-world scale at that point is saved and fixed.
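The scaling step above can be reduced to one ratio: if the real screen spans w_real pixels in the camera picture and the rendered virtual screen spans w_virt pixels, scaling the virtual world by w_real / w_virt makes the two coincide. A minimal sketch under that assumption (the names are illustrative):

```python
def virtual_world_scale(real_span_px: float, virtual_span_px: float) -> float:
    """Scale factor for the virtual world so that the rendered virtual screen
    coincides with the real screen in the composited image."""
    if real_span_px <= 0 or virtual_span_px <= 0:
        raise ValueError("spans must be positive")
    return real_span_px / virtual_span_px

def rescale(points, s):
    """Apply the fixed registration scale to virtual-scene coordinates."""
    return [(s * x, s * y, s * z) for x, y, z in points]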
The registration in step 15) between the real scale bar L2 and the virtual scale bar L2' is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed-reality image, which is shown on the outdoor large screen; the position and angle of the virtual camera are fine-tuned until L2' coincides with L2 and the center line of the virtual outdoor screen coincides with that of the real outdoor screen; the virtual camera's position and angle at that point are recorded and set as its final working position and angle.
The outdoor large screen is an LED screen of at least 100 inches with a resolution of at least 720p.
The real camera is a high-definition camera with a resolution of at least 720p.
In the real-time tracking stage, trigger areas are delimited in the plane image captured by the real camera; when the user's three-dimensional motion-skeleton position data fall inside a trigger area, the corresponding preset augmented-reality content is triggered and shown to the user.
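The trigger-area test just described amounts to a point-in-rectangle check on the image plane. A minimal sketch, assuming axis-aligned rectangular trigger areas and 2-D joint projections (the class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriggerArea:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    content_id: str  # which preset AR content this area fires

def fired_content(joints_xy, areas):
    """Content ids of every area containing at least one tracked joint."""
    return [a.content_id for a in areas
            if any(a.x_min <= x <= a.x_max and a.y_min <= y <= a.y_max
                   for x, y in joints_xy)]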
In step 32), the existing algorithm for reconstructing three-dimensional human motion skeletons from motion-image sequences is adopted: the user's skeleton is calibrated in the first frame, after which the motion skeleton in each subsequent image is established in turn using three-dimensional human-body-model knowledge and motion continuity.
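The motion-continuity assumption used to propagate the skeleton from frame to frame can be sketched as a constant-velocity prediction that seeds the search for each joint in the next image. This is an illustrative stand-in for the cited reconstruction algorithm, not its actual implementation:

```python
def predict_next_joints(prev_joints, curr_joints):
    """Constant-velocity prediction: each joint continues its last inter-frame
    displacement, giving a search seed for locating it in the next frame."""
    return [(2 * cx - px, 2 * cy - py)
            for (px, py), (cx, cy) in zip(prev_joints, curr_joints)]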
Combining computer graphics and mixed-reality technology, the invention provides a human-computer interaction method for outdoor large screens. It uses a novel two-stage progressive registration method, giving an efficient registration procedure for outdoor immersive human-computer interaction systems. Only one high-definition camera is needed from registration through operation; compared with traditional multi-camera systems, the invention achieves the same effect while greatly reducing system complexity and improving operational efficiency. In the registration stage, the invention introduces scale bars as positioning aids, which lowers the difficulty of registration while greatly improving its efficiency and precision. Through high-definition camera capture and skeleton reconstruction, the user's skeleton is computed in real time and the result drives the human-computer interaction, so that the user sees on the large screen an interaction effect combining the real and the virtual. In the user-interaction stage, a novel trigger-area method fires the relevant interactive information; this compensates for the limited accuracy of skeleton capture outdoors, reduces the difficulty of capture, and raises its speed, while still satisfying the user's interactive experience.
Brief description of the drawings
Fig. 1 is the workflow diagram of the invention.
Fig. 2 is the registration flow diagram of the invention, covering the preparatory and supplementary registration stages.
Fig. 3 is the device schematic of the invention.
Fig. 4 is a schematic diagram of the implementation of registration stage one of the invention.
Fig. 5 is a schematic diagram of the implementation of step 15) of the registration stages of the invention.
Fig. 6 is a schematic diagram of the implementation of supplementary registration of the invention.
Fig. 7 is the system schematic of the invention during actual operation.
Fig. 8 is a schematic diagram of the interactive triggering method in step 32) of the real-time tracking stage of the invention.
Detailed description of the embodiments
The invention combines state-of-the-art skeleton reconstruction with mixed-reality technology: through high-definition camera capture, the user's skeleton is computed in real time and the result drives the human-computer interaction, so that the user sees on a high-definition outdoor LED large screen an interaction effect combining the real and the virtual; "high definition" here means a resolution of at least 720p.
The system comprises a real camera 101, a mixed-reality application server 102 and an outdoor large screen 103. The real camera 101 is a high-definition image-capture device that captures whole-body images of the user and feeds them into the mixed-reality application server 102, which reconstructs the user's skeleton in three dimensions from the captured images and determines the positions of key points such as the hands and feet. When a key point moves into a preset trigger range, the related enhancing information is triggered automatically or manually and presented on the outdoor large screen 103, a high-definition LED screen. The enhancing information is blended with the originally captured image to realize the mixed-reality effect, and the blended image is presented on the outdoor high-definition LED large screen, as in Fig. 6; the image-capture device is a video camera or a still camera. As shown in Figs. 3, 4, 5, 6 and 7, the real camera 101 captures images in real time and transfers the data to the mixed-reality application server 102, which combines the triggered enhancing information with the captured image of the user 504; the blended image is presented on the outdoor large screen 103, so that the user 504 finally interacts with the virtual enhancing information.
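The capture-reconstruct-trigger-composite pipeline described above can be sketched as one per-frame step with the stages passed in as callables. All names and the division into stages are illustrative assumptions, not the server's actual interfaces:

```python
def mixed_reality_frame(frame, reconstruct_skeleton, check_triggers,
                        render_overlay, composite):
    """One iteration of the loop: camera frame in, blended screen image out."""
    joints = reconstruct_skeleton(frame)   # 3-D skeleton from the captured image
    triggered = check_triggers(joints)     # which preset content fires, if any
    overlay = render_overlay(triggered)    # rendered enhancing information
    return composite(frame, overlay)       # blend for the outdoor LED screen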
An implementation of the invention is illustrated below.
Based on mixed-reality technology, the invention superimposes the captured real image of the user with preset three-dimensional virtual models serving as interaction objects and shows the result to the user, as in Fig. 7, comprising the following steps:
1) Preparatory stage: the outdoor real world is registered with the virtual world, as in Fig. 4:
11) A reference line perpendicular to the outdoor large screen 103 is drawn on the ground, its intersection with the screen lying at the midpoint of the screen's base;
12) On this line, a user-interaction anchor point is chosen at reference distance L1 202 from the outdoor large screen, where L1 should in principle be greater than the height of the screen;
13) The real camera is placed above the user-interaction anchor point at a height of more than 1.5 meters, with its lens level and facing the outdoor large screen; the interaction scene corresponding to the real world is built in the virtual world of the mixed-reality server, comprising a virtual outdoor large screen of the same size and position as the real screen, a virtual user-interaction anchor point and a virtual camera at reference distance L1' from the virtual screen, where L1' equals L1; the size of the real outdoor screen is then registered against the size of the virtual outdoor screen at least once;
14) In the real world, the real camera is moved from the user-interaction anchor point to its working position beside or above the outdoor large screen, and a scale bar L2 is erected at the anchor point, perpendicular to the ground and therefore parallel to the screen, with a height of at least 1 meter;
15) The virtual camera is moved to the position in the virtual world corresponding to the real camera's working position of step 14), a virtual scale bar L2' corresponding to the real one is erected at the virtual anchor point, perpendicular to the ground, with a height of at least 1 meter and equal to that of L2, and L2' is registered against L2 with the real scale bar as the reference object;
During the registration of step 13), the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed-reality image, which is shown on the outdoor large screen; the virtual world is scaled until the virtual screen coincides with the real screen, and the virtual-world scale at that point is saved and fixed;
During the registration of step 15) between the real scale bar L2 and the virtual scale bar L2', the picture taken by the real camera is likewise merged with the image rendered by the virtual camera and shown on the outdoor large screen; the position and angle of the virtual camera are fine-tuned until L2' coincides with L2 and the center line of the virtual outdoor screen coincides with that of the real outdoor screen; the virtual camera's position and angle at that point are recorded and set as its final working position and angle.
2) Supplementary registration stage, as in Fig. 6:
21) In the real world, with the real camera at its working position beside or above the outdoor large screen, the user's interaction position is chosen according to the camera's field of view and a trigger icon 402 is attached there; the area around the trigger icon 402 serves as the user's activity range, and motion outside that range is not recognized;
22) In the virtual world, a virtual trigger icon is set at the position corresponding to the real one; by fine adjustment, the virtual trigger icon is made to coincide completely with the real trigger icon in the blended real/virtual picture; the scene position and angle adjustments obtained from this fine adjustment are merged with the scene position and angle obtained after the preparatory-stage registration 1), yielding the optimal scene position and angle.
3) Real-time tracking stage, as in Fig. 7:
31) When the user 504 stands on the trigger icon, the real camera 101 detects that the icon is occluded; the system, automatically or manually, triggers the related information in the virtual world, superimposes the enhancing information on the image captured from the real world, and presents the result on the outdoor large screen 103, producing the mixed-reality interaction effect.
32) From the images captured by the image-capture device, the user's three-dimensional motion skeleton is reconstructed. Once the user's pose information is obtained, the enhancing information interacts with the user according to the preset requirements.
In step 32), the existing algorithm for reconstructing three-dimensional human motion skeletons from motion-image sequences is adopted: the user's skeleton is calibrated in the first frame, after which the motion skeleton in each subsequent image is established in turn using three-dimensional human-body-model knowledge and motion continuity. From the user's three-dimensional motion skeleton reconstructed in real time, the six-degree-of-freedom position and angle of each relevant body part are computed to determine the user's pose information.
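Deriving position and orientation for a body part from the reconstructed skeleton can be sketched as taking the bone segment between two joints and computing its midpoint plus direction angles. This is an illustrative reduction of the six-degree-of-freedom computation; roll about the bone axis is not recoverable from two points alone:

```python
import math

def segment_pose(joint_a, joint_b):
    """Midpoint position and (yaw, pitch) direction of the bone running from
    joint_a to joint_b, both given as (x, y, z) coordinates."""
    (ax, ay, az), (bx, by, bz) = joint_a, joint_b
    position = ((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2)
    dx, dy, dz = bx - ax, by - ay, bz - az
    yaw = math.atan2(dy, dx)                    # heading in the ground plane
    pitch = math.atan2(dz, math.hypot(dx, dy))  # elevation above the plane
    return position, (yaw, pitch)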
Trigger areas are delimited in the captured plane image, as in Fig. 8; when the user's corresponding skeleton position data fall inside a trigger area, such as 601 or 602, the corresponding preset augmented-reality content, such as a virtual exploding-fireworks effect, is triggered.
In summary, the invention addresses the specific characteristics of outdoor mixed-reality interaction with a positioning method suited to outdoor conditions: the method is flexible, matches well, requires no complicated apparatus, and is easy to realize. The user's body is reconstructed as a three-dimensional skeleton model, and the reconstructed data are used to judge whether the user's region of interest reaches a trigger area in the image plane; once it does, the corresponding information is triggered automatically, so this approach does not demand high accuracy from the three-dimensional skeleton reconstruction. The invention thus favors the spread of mixed reality in outdoor interactive applications: the device structure is simple and easy to realize, and the user obtains information through first-person interactive experience.

Claims (6)

1. An implementation method of an immersive outdoor-activity interaction platform based on a single camera, characterized in that, based on mixed-reality technology, user pose images captured by the single camera in the real world are input to a mixed-reality server, superimposed with enhancing content preset in the mixed-reality server, and shown to the user through an outdoor large screen, the enhancing content comprising two-dimensional pictures and three-dimensional virtual models, and the user interacting with the preset enhancing content through different postures, the implementation method of the interaction platform specifically comprising the following steps:
First, registration training is performed, in two stages:
1) Preparatory stage: the outdoor real world is registered with the virtual world of the mixed-reality server:
11) A reference line perpendicular to the outdoor large screen is drawn on the ground, its intersection with the screen lying at the midpoint of the screen's base;
12) On this line, a user-interaction anchor point is chosen at reference distance L1 from the outdoor large screen, where L1 is greater than the height of the screen;
13) The real camera is placed above the user-interaction anchor point at a height of more than 1.5 meters, with its lens level and facing the outdoor large screen, so that it images the entire screen; the interaction scene corresponding to the real world is built in the virtual world, comprising a virtual outdoor large screen of the same size and position as the real screen, a virtual user-interaction anchor point, and a virtual camera at reference distance L1' from the virtual screen, where L1' equals L1 and the virtual camera images the entire virtual screen; the virtual camera's picture is superimposed on the picture captured by the real camera and, with the real outdoor large screen as the reference object, the size of the real screen is registered against the size of the virtual screen at least once;
14) In the real world, the real camera is moved from the user-interaction anchor point to its working position beside or above the outdoor large screen, and a scale bar L2 is erected at the anchor point, perpendicular to the ground and therefore parallel to the screen, with a height of at least 1 meter;
15) The virtual camera is moved to the position in the virtual world corresponding to the real camera's working position of step 14), a virtual scale bar L2' corresponding to the real one is erected at the virtual anchor point, perpendicular to the ground, with a height of at least 1 meter and equal to that of L2, and L2' is registered against L2 with the real scale bar as the reference object;
16) The position and angle of the virtual-world scene at this point are saved;
2) Supplementary registration stage:
21) In the real world, with the real camera at its working position beside or above the outdoor large screen, the user's interaction position is chosen according to the camera's field of view and a trigger icon is attached there; the area around the trigger icon serves as the user's activity range, and motion outside that range is not recognized;
22) In the virtual world, a virtual trigger icon is set at the position corresponding to the real one and registered with the real trigger icon as the reference object; the scene position and angle adjustments obtained from this registration are merged with the scene position and angle saved after the preparatory-stage registration 1), yielding the optimal scene position and angle;
After registration training is complete, the real-time tracking stage begins:
3) Real-time tracking stage:
31) When the user stands on the trigger icon, the real camera detects that the icon is occluded; the system, automatically or manually, triggers the related information in the virtual world and superimposes it on the image captured from the real world, producing the mixed-reality effect;
32) From the images captured by the image-capture device, the user's three-dimensional motion skeleton is reconstructed; once the user's pose information is obtained, the preset enhancing information interacts with the user according to that pose as required.
2. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to claim 1, characterized in that the registration in step 13) between the size of the real-world outdoor large screen and the size of the virtual outdoor large screen is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed-reality image, which is shown on the outdoor large screen; the displayed image of the virtual world is scaled until the virtual outdoor screen coincides with the real-world outdoor screen, and the virtual-world scale at that point is saved and fixed.
3. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to claim 1, characterized in that the registration in step 15) between the real scale bar L2 and the virtual scale bar L2' is: the picture taken by the real camera is merged with the image rendered by the virtual camera to obtain a mixed-reality image, which is shown on the outdoor large screen; the position and angle of the virtual camera are fine-tuned until L2' coincides with L2 and the center line of the virtual outdoor screen coincides with that of the real-world outdoor screen; the virtual camera's position and angle at that point are recorded and set as its final working position and angle.
4. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that the outdoor large screen is an LED screen of at least 100 inches with a resolution of at least 720p.
5. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that the real camera is a high-definition camera with a resolution of at least 720p.
6. The implementation method of an immersive outdoor-activity interaction platform based on a single camera according to any one of claims 1-3, characterized in that, in the real-time tracking stage, trigger areas are delimited in the plane image captured by the real camera, and when the user's three-dimensional motion-skeleton position data fall inside a trigger area, the corresponding preset augmented-reality content is triggered and shown to the user.
CN201310479754.5A 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera Active CN103543827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310479754.5A CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310479754.5A CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Publications (2)

Publication Number Publication Date
CN103543827A CN103543827A (en) 2014-01-29
CN103543827B true CN103543827B (en) 2016-04-06

Family

ID=49967364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310479754.5A Active CN103543827B (en) 2013-10-14 2013-10-14 Implementation method of an immersive outdoor-activity interaction platform based on a single camera

Country Status (1)

Country Link
CN (1) CN103543827B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104656893B * 2015-02-06 2017-10-13 西北工业大学 Remote interactive control system and method for a cyber-physical space
CN106816077B * 2015-12-08 2019-03-22 张涛 Interactive sandbox exhibition method based on two-dimensional codes and augmented reality
CN106293083B * 2016-08-07 2019-06-04 南京仁光电子科技有限公司 Large-screen interaction system and interaction method thereof
WO2018119676A1 * 2016-12-27 2018-07-05 深圳前海达闼云端智能科技有限公司 Display data processing method and apparatus
CN110521186A * 2017-02-09 2019-11-29 索菲斯研究股份有限公司 Method and system for shared mixed-reality experiences using digital, physical, temporal or spatial discovery services
CN106919262A * 2017-03-20 2017-07-04 广州数娱信息科技有限公司 Augmented reality device
CN107168619B * 2017-03-29 2023-09-19 腾讯科技(深圳)有限公司 User-generated content processing method and device
CN108255304B * 2018-01-26 2022-10-04 腾讯科技(深圳)有限公司 Augmented-reality-based video data processing method, device and storage medium
CN111093301B * 2019-12-14 2022-02-25 安琦道尔(上海)环境规划建筑设计咨询有限公司 Light control method and system
CN111223192B * 2020-01-09 2023-10-03 北京华捷艾米科技有限公司 Image processing method, application method, device and equipment thereof
CN112198963A * 2020-10-19 2021-01-08 深圳市太和世纪文化创意有限公司 Immersive tunnel-type multimedia interactive display method, device and storage medium
US20240169582A1 * 2021-03-08 2024-05-23 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551732A * 2009-03-24 2009-10-07 上海水晶石信息技术有限公司 Augmented reality method with interactive function and system thereof
CN102156808A * 2011-03-30 2011-08-17 北京触角科技有限公司 System and method for improving the real-time virtual try-on effect of accessories
CN102945564A * 2012-10-16 2013-02-27 上海大学 True three-dimensional modeling system and method based on video see-through augmented reality

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1720131B1 (en) * 2005-05-03 2009-04-08 Seac02 S.r.l. An augmented reality system with real marker object identification
US8264505B2 (en) * 2007-12-28 2012-09-11 Microsoft Corporation Augmented reality and filtering


Also Published As

Publication number Publication date
CN103543827A (en) 2014-01-29

Similar Documents

Publication Publication Date Title
CN103543827B (en) Implementation method of an immersive outdoor-activity interaction platform based on a single camera
CN104219584B (en) Panoramic video interaction method and system based on augmented reality
CN106530894B (en) Virtual head-up display method and system for a flight training device
KR101841668B1 (en) Apparatus and method for producing 3D model
CN103810685B (en) Super-resolution processing method for depth maps
CN110458897B (en) Multi-camera automatic calibration method and system and monitoring method and system
CN103455657B (en) Kinect-based site work simulation method and system
CN102801994B (en) Physical image information fusion device and method
CN105225269A (en) Motion-based object modelling system
CN104504671A (en) Method for generating virtual-real fusion images for stereoscopic display
CN103226830A (en) Automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment
CN106780629A (en) Three-dimensional panorama data acquisition and modeling method
CN107154197A (en) Immersive flight simulator
CN105429989A (en) Simulated tourism method and system for virtual reality equipment
CN105262949A (en) Multifunctional real-time panoramic video stitching method
CN103489219B (en) 3D hairstyle effect simulation system based on depth image analysis
CN104427230A (en) Augmented reality method and augmented reality system
CN105183161A (en) Synchronized movement method for a user in real and virtual environments
CN107134194A (en) Immersive vehicle simulator
CN107256082B (en) Projectile trajectory measurement and calculation system based on network integration and binocular vision technology
CN106791629A (en) Building construction design system based on AR virtual reality technology
CN113253842A (en) Scene editing method and related device and equipment
CN117333644A (en) Virtual reality display picture generation method, device, equipment and medium
JP4881178B2 (en) Odometer image generation device and odometer image generation program
CN112927356B (en) Three-dimensional display method for unmanned aerial vehicle images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Xu Jian

Inventor before: Li Jing

COR Change of bibliographic data
TA01 Transfer of patent application right

Effective date of registration: 20160303

Address after: 210023 Nanjing Vocational College of Information Technology, No.99 Wenlan Road, Nanjing, Jiangsu Province

Applicant after: Xu Jian

Address before: Room 1106, Building 7, Wanda Huafu East Garden, No. 129 Songshan Road, Jianye District, Nanjing, Jiangsu Province, 210000

Applicant before: NANJING RONGTU CHUANGSI INFORMATION TECHNOLOGY Co.,Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190513

Address after: 214000 China Sensor Network International Innovation Park E2-417, 200 Linghu Avenue, Taihu International Science Park, Xinwu District, Wuxi City, Jiangsu Province

Patentee after: Wuxi Rongyu Information Technology Co., Ltd.

Address before: 210023 Nanjing Vocational College of Information Technology, No.99 Wenlan Road, Nanjing, Jiangsu Province

Patentee before: Xu Jian

TR01 Transfer of patent right

Effective date of registration: 20210407

Address after: Room 2003-104, building 4, No. 209, Zhuyuan Road, high tech Zone, Suzhou City, Jiangsu Province, 215011

Patentee after: Suzhou miaomi Intelligent Technology Co.,Ltd.

Address before: 214000 China Sensor Network International Innovation Park E2-417, 200 Linghu Avenue, Taihu International Science Park, Xinwu District, Wuxi City, Jiangsu Province

Patentee before: Wuxi Rongyu Information Technology Co., Ltd.