CN102509348B - Method for showing actual object in shared enhanced actual scene in multi-azimuth way - Google Patents

Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Info

Publication number
CN102509348B
Authority
CN
China
Prior art keywords
real
world object
augmented reality
reality scene
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110287207.8A
Other languages
Chinese (zh)
Other versions
CN102509348A (en)
Inventor
陈小武
赵沁平
金鑫
郭侃侃
郭宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201110287207.8A
Publication of CN102509348A
Application granted
Publication of CN102509348B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for representing a real-world object from multiple azimuths in a shared augmented reality scene. The method comprises the following steps: optimizing the placement of multiple video sequences according to the vertex distribution of the scene area, the coverage each video sequence provides, and the number of devices required; determining the observation area of each video sequence; determining the shared regions between video sequences by matching a fiducial marker and scale-invariant features across the different azimuths; computing the spatial position relationships of all video sequences; completing the virtual-real registration of each video sequence using the fiducial marker; integrating the local descriptions of the real-world object from the video sequences and estimating its three-dimensional convex hull according to computer vision principles; providing a rapid registration method for a newly joining collaborative user based on the spatial position relationships of all video sequences; and rendering the two-dimensional projection of the real-world object in the video sequence of each azimuth. The method is inexpensive to deploy and responds quickly to newly joining collaborative users.

Description

A multi-azimuth representation method for real-world objects in a shared augmented reality scene
Technical field
The present invention relates to the fields of computational geometry, image processing and augmented reality, and in particular to a method for representing real-world objects across the multi-azimuth video sequences of a shared augmented reality scene.
Background technology
In collaborative augmented reality, different users enter the augmented reality scene and, within their own perception and interaction areas, often hold different viewpoints and perform different interactions. To let these users jointly complete a predetermined task, a shared augmented reality scene must be established; the collaborative augmented reality system therefore needs to describe the real environment continuously and from multiple azimuths, and build a three-dimensional scene model based on multiple video sequences. Within this, the multi-azimuth representation of real-world objects in the scene is a problem that urgently needs to be solved. It includes arranging multiple video sequences rationally and effectively to cover the shared scene, transmitting and unifying the information observed by each video sequence, and representing the real-world objects in the scene from multiple azimuths.
To present real-environment information, Regenbrecht of the Chrysler technical research division in Germany not only equipped each user with a head-mounted display, but also set up cameras to obtain global information about the real environment. The system supports four or more users observing the same shared augmented reality scene, interacting with it and performing various collaborative tasks with the virtual objects in the scene, and it models virtual lighting effects from the real light sources. Although this system acquires video information from multiple azimuths, it does not process the information jointly, nor does it consider whether the video sequences are arranged rationally.
Regarding the problem of placing multiple video sequences, Klee posed the famous art gallery problem: given a gallery, how many cameras are needed, and where should they be placed, so that every point of the gallery can be observed. Chvátal first pointed out that for a polygon with n vertices, all points inside the polygon can always be observed by ⌊n/3⌋ cameras. Fisk proved this theorem by first decomposing the polygon into triangles and then applying a 3-coloring to all vertices so that adjacent vertices receive different colors; the vertices bearing the least-used color are the camera placement points.
Regarding the interaction problem of multiple video sequences, Pilet et al. of the Swiss Federal Institute of Technology in Lausanne proposed methods for registration-information complementation and illumination-information acquisition across multiple video sequences in multi-user collaborative augmented reality: if a marker cannot be fully observed from one of the videos, the registration information of other azimuths can be used to supplement the registration information of that azimuth, so that virtual objects are correctly registered onto the marker. However, this system does not consider how errors produced during the complementation process affect the complemented result.
Schall et al. of Graz University of Technology in Austria used fiducial markers to complete the stitching and modeling of large-scale scenes. The method divides a large-scale scene into several parts and places a series of fiducial markers between every two adjacent parts; these markers are used to determine the spatial position relationships of the video sequences. Measurement starts from the space numbered 1. After the information of all collection points in that space has been obtained and stored in the computer, the next space is measured, so that the collection-point information of every space is eventually obtained. Because the markers in the junction area of two spaces are measured twice, the position relationship of the two spaces can be obtained after a coordinate transformation. Once the whole space has been measured, the position relationships among all the spaces are known, and a three-dimensional model of the whole scene can be built from these relationships. The coordinate transformation algorithm of this work is worth drawing on.
Wang Xiyong et al. of the University of Florida in the USA used a laser 3D scanning model based on color-marker registration to incorporate real-world objects into a virtual scene. First, a 3D scanner expresses the scan lines of the real-world object as a virtual model; then the resulting noise is removed and the scan lines are arranged; finally, the gaps between the scan lines are filled. To track the real-world object in real time, the system uses color markers to obtain the object's position, computes the corresponding position of the virtual model in virtual coordinates, and renders it. The system allows users to experience events in the virtual scene well, but the instruments it uses are relatively expensive, so it is limited to a small number of users.
From an analysis of domestic and international research, although many organizations are currently studying the multi-azimuth representation of real-world objects in collaborative augmented reality systems, problems remain in three respects. First, most shared augmented reality scenes rarely consider how to obtain a larger observation range with fewer cameras; they assume that the video sequences as placed can fully obtain the required real-environment information, and they do not describe how multiple video sequences should be set up. Second, once multiple video sequences are in place, few existing works study registration-information complementation, and even those that consider this problem rarely consider the impact of error on the complemented result; the error analysis is not fed back into the compensation calculation. Finally, in representing real-world objects for collaborative tasks, many methods model the entire environment, which is computationally expensive and ignores the needs of information sharing in a shared augmented reality scene. Because each collaborating user has a different perception and interaction area, the real-world object information each requires is often different.
Summary of the invention
The object of the invention is to obtain a larger observation range with fewer cameras, reduce the impact of error on the complemented result, and realize information sharing in a shared augmented reality scene.
To this end, the invention discloses a multi-azimuth representation method for real-world objects of a shared augmented reality scene. The steps of the method are as follows:
Step 1: abstract the shared augmented reality scene onto a plane to form a polygonal structure, partition this polygonal structure, and from the partition compute the minimum number of observation points required to observe and cover every part of the shared augmented reality scene;
Step 2: in the shared augmented reality scene, place at least one fiducial marker for each observation point, determine the observation area of each observation point, compute the scale-invariant feature vectors within the observation area of each observation point, and use the fiducial markers together with feature matching to determine the shared observation areas between observation points;
Step 3: each observation point performs three-dimensional registration guided by the fiducial markers, thereby obtaining the position of each observation point;
Step 4: from the observations of the fiducial marker in the shared observation area of two observation points, compute the spatial position relationship between those two observation points;
Step 5: repeat step 4 until the spatial position relationship between each observation point and at least one other observation point has been computed;
Step 6: capture the information of the real-world object to be modeled, and build the object's silhouette map and disparity map with respect to each observation point;
Step 7: from the silhouette maps and disparity maps, rapidly create the real-world object model for any new observation point.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, in step 1 the shared augmented reality scene is abstracted onto a plane by projecting its edges onto a horizontal plane.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, in step 1 the polygonal structure is partitioned by triangulation, dividing the polygonal structure into multiple mutually non-overlapping triangles.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, the observation points are camera positions.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, the information of the real-world object to be modeled is captured by manual selection.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, in step 6 the information of the real-world object to be modeled is obtained by detecting a real-world object newly entering the shared augmented reality scene and capturing that object.
Preferably, in the multi-azimuth modeling method for real-world objects of the shared augmented reality scene, in step 6 the disparity map of the real-world object with respect to each observation point is built by taking two camera positions at each observation point and measuring the depth of the real-world object from those two camera positions to form the disparity map.
The beneficial effects of the invention are as follows:
1. Addressing the need to acquire real-environment information in a shared augmented reality scene, the invention uses fewer acquisition devices while still fully obtaining the scene-environment information, significantly reducing the computational cost of subsequent shared augmented reality applications.
2. Registration-information complementation solves the problem that a single video sequence cannot observe the whole scene, and at the same time partly avoids the possibility that a video sequence's own three-dimensional registration fails.
3. For the common problem of representing real-world objects in a shared augmented reality scene, a vision-based three-dimensional convex hull method is adopted; while expressing the surface points of the real-world object and the spatial relationships of the video sequences, it limits the growth of the amount of information, can respond quickly to newly joining collaborative users, and meets the needs of a shared augmented reality environment.
Description of the drawings
Fig. 1 is the module design diagram of the multi-azimuth representation method for real-world objects of a shared augmented reality scene according to the invention;
Fig. 2 is a schematic diagram of registration-information complementation among multiple video sequences in the method of the invention;
Fig. 3 is a schematic diagram of registration information being transmitted between adjacent video sequences in the method of the invention;
Fig. 4 is the flow chart of registration-information transmission among multiple video sequences in the method of the invention;
Fig. 5 is a schematic diagram of rapid three-dimensional registration for a newly joining collaborative user in the method of the invention.
Embodiment
The present invention is further described below with reference to the accompanying drawings, so that those of ordinary skill in the art can implement it by reference to this specification.
As shown in Fig. 1, the multi-azimuth representation method for real-world objects of a shared augmented reality scene of the present invention comprises the following steps:
Step 1: the multi-video-sequence placement computation module abstracts the shared augmented reality scene area as a planar polygon P. Using a sweep-line algorithm, the planar polygon is divided into multiple mutually non-overlapping triangles: the polygon is first divided into multiple monotone polygons, and each monotone polygon is then divided into triangles. In the process of decomposing the polygon into triangles, line segments connecting different vertices of the polygon are produced; these segments lie entirely inside the polygon and are called diagonals. When selecting placement points, first count the number of diagonals through each vertex and find the maximum; the vertex with the most diagonals is taken as a placement point. Then compute the set of triangles observable from that vertex, remove it from region P, and update the diagonals of the remaining area, repeating until the remaining area is empty. When a real-world object is present in scene P, the points inside the real-world object do not need to be observed, and the object occludes part of the scene; the real-world object is then abstracted as an inner polygon P' of P, i.e. a "hole" inside P. During initialization the region is partitioned with diagonals, P' is removed from P, and it is ensured that every point in P - P' is observed. A sketch of the greedy placement selection on a given triangulation is shown below.
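The following Python sketch illustrates the greedy placement selection described above on an already-computed triangulation. It is a minimal sketch, not the patent's implementation: the triangulation is assumed to be given as vertex-index triples, and "the triangles observable from a vertex" is approximated here as the triangles incident to that vertex.

```python
# Hedged sketch of the greedy placement-point selection of step 1.
from collections import defaultdict

def boundary_edges(n_vertices):
    """Edges of the polygon boundary, as sorted index pairs."""
    return {tuple(sorted((i, (i + 1) % n_vertices))) for i in range(n_vertices)}

def select_placement_points(n_vertices, triangles):
    remaining = [tuple(t) for t in triangles]
    boundary = boundary_edges(n_vertices)
    placements = []
    while remaining:
        # Count diagonals (non-boundary triangle edges) incident to each vertex.
        diag_count = defaultdict(int)
        for a, b, c in remaining:
            for u, v in ((a, b), (b, c), (a, c)):
                if tuple(sorted((u, v))) not in boundary:
                    diag_count[u] += 1
                    diag_count[v] += 1
        if not diag_count:                       # only boundary edges remain
            vertex = remaining[0][0]
        else:
            vertex = max(diag_count, key=diag_count.get)
        placements.append(vertex)
        # Remove the triangles observable from (here: incident to) this vertex.
        remaining = [t for t in remaining if vertex not in t]
    return placements

# Example: a square split into two triangles needs a single placement point.
print(select_placement_points(4, [(0, 1, 2), (0, 2, 3)]))
```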
Step 2: as shown in Fig. 2, in the shared augmented reality scene, place at least one fiducial marker for each observation point and determine the observation area of each observation point. Then use scale-invariant features to extract and match feature points between two video-sequence images. Scale-invariant features are local image features that are invariant to rotation, scale change and brightness change, and remain stable to some degree under viewpoint change, affine transformation and noise. First a scale space is generated and preliminary spatial extrema are detected in it; unstable extrema are then removed. To make the algorithm rotation-invariant, the gradient-direction distribution of the pixels in each remaining extremum's neighbourhood is used to assign a direction parameter to each keypoint, and finally a 128-dimensional feature descriptor is generated for each feature point. After the video images of two azimuths have been obtained, a large number of feature descriptors are generated by the scale-invariant algorithm, and matched point pairs are then computed by Euclidean-distance matching. A comparison threshold is introduced here to measure the degree of matching between feature points: if the ratio of the smallest Euclidean distance to the second-smallest is greater than the comparison threshold, the match fails; otherwise the match succeeds. The smaller the comparison threshold, the more accurate the matching result and the fewer matched point pairs are obtained. A sketch of this matching procedure is given below.
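A minimal sketch of the scale-invariant feature matching with the comparison-threshold (ratio) test follows, using OpenCV's SIFT implementation; the patent does not name a library, and the image paths and the 0.6 threshold are illustrative assumptions.

```python
# Hedged sketch of the feature matching in step 2.
import cv2

img1 = cv2.imread("azimuth1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("azimuth2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # 128-D descriptor per keypoint
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)          # two nearest neighbours per point

RATIO = 0.6   # comparison threshold: smaller -> stricter, fewer matched pairs
good = [m for m, n in knn if m.distance < RATIO * n.distance]
print(f"{len(good)} matched point pairs")
```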
Step 3: for two video sequences V(1) and V(2) that have a shared region, if some fiducial marker m can be observed and feature point extraction finds feature points in the image region corresponding to m in the video images of both V(1) and V(2), feature point matching is then performed. If matched point pairs are present in large numbers in the image region corresponding to m, that is, if the number of matched pairs exceeds a threshold, marker m can be regarded as lying in a shared region of V(1) and V(2). Once the shared regions of the different video sequences have been determined, a point p in the shared region is selected (p is given in the world coordinate system). According to projection theory, different video sequences have different camera coordinate systems, so projecting p onto the image planes of different video sequences produces different view matrices. Because p is a point in the shared region, it can be used to compute the spatial position relationship of the different video sequences. Let M1 be the view matrix of p with respect to V(1), representing the transformation from the world coordinate system at p to the camera coordinate system of V(1), and let M2 be the view matrix of p with respect to V(2), representing the transformation from the world coordinate system at p to the camera coordinate system of V(2). The transformation from the camera coordinate system of V(1) to the camera coordinate system of V(2) is then M1⁻¹·M2, and this matrix represents the position relationship of the two video sequences. A numerical sketch of this relative-transform computation is given below.
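The relative-transform computation can be sketched numerically as follows, assuming the patent's convention that a view matrix maps world coordinates into a camera's coordinate system; the poses used are illustrative values, not calibration data.

```python
# Hedged sketch of the relative transform M1^-1 * M2 in step 3.
import numpy as np

def view_matrix(rotation, translation):
    """Build a 4x4 view matrix from a 3x3 rotation and a 3-vector translation."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

# Illustrative poses of V(1) and V(2) observing the same marker point p.
M1 = view_matrix(np.eye(3), [0.0, 0.0, 2.0])
theta = np.radians(30)
R2 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
M2 = view_matrix(R2, [0.5, 0.0, 2.0])

# Transformation between the two camera coordinate systems, as stated in the patent.
M_12 = np.linalg.inv(M1) @ M2
print(M_12)
```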
Step 4: as shown in Fig. 3, at some moment a point p in space lies within the observation range of video sequence V(1), and the position relationship between p and V(1) is SP(V(1), p). For video sequence V(2), p may lie within the observation range of V(2), or occlusion between real-world objects may prevent V(2) from seeing p, so the position relationship SP(V(2), p) between p and V(2) cannot be obtained directly. From the already determined position relationship SP(V(2), V(1)) between V(1) and V(2), one derives SP(V(2), p) = SP(V(2), V(1)) · SP(V(1), p). Complementation is divided into complementation between adjacent video sequences and complementation between non-adjacent video sequences; the "distance" between video sequences is measured in hops, and for two video sequences with a shared region the distance is exactly one hop. According to the complementation algorithm, when a video sequence V(0) needs the registration information of a point p in space, V(0) sends a request to its set of neighbouring video sequences. If a video sequence V(i) receives the complementary registration request from video sequence V(i-1) and can observe the registration information of point p, it sends a reply to V(i-1) containing the calibrated registration information M_pS(i) of point p with respect to V(i); this reply is returned along the reverse of the query path to the originally querying video sequence V(0). If V(i) still cannot obtain the registration information of point p, it forwards a new request to its neighbouring video sequence V(i+1). When V(0) receives the complementary registration reply, it computes the registration information of p with respect to V(0) from the spatial position relationships between the video sequences. A sketch of this hop-by-hop propagation is given below.
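The hop-by-hop complementation can be sketched as a request that propagates through neighbouring video sequences and is answered by chaining the pairwise transforms; the graph structure and the 4x4 transforms below are illustrative assumptions, not the patent's protocol.

```python
# Hedged sketch of registration-information complementation in step 4.
import numpy as np

class VideoSequence:
    def __init__(self, name):
        self.name = name
        self.neighbours = []          # list of (other_sequence, SP(self, other))
        self.local_registration = {}  # point id -> SP(self, p), if observable

    def query(self, point_id, visited=None):
        """Return SP(self, p), using neighbours' observations if necessary."""
        visited = visited or set()
        visited.add(self.name)
        if point_id in self.local_registration:          # observed directly
            return self.local_registration[point_id]
        for other, sp_self_other in self.neighbours:
            if other.name in visited:
                continue
            sp_other_p = other.query(point_id, visited)   # forward the request
            if sp_other_p is not None:
                # SP(self, p) = SP(self, other) * SP(other, p)
                return sp_self_other @ sp_other_p
        return None                                       # no azimuth sees p

# Usage: V0 cannot see point "p", V1 can; V0 recovers the registration via V1.
V0, V1 = VideoSequence("V0"), VideoSequence("V1")
sp_01 = np.eye(4); sp_01[0, 3] = 1.0                      # illustrative transform
V0.neighbours.append((V1, sp_01))
V1.local_registration["p"] = np.eye(4)
print(V0.query("p"))
```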
Step 5: use background subtraction to compare each newly acquired frame with a background image, obtaining the projected silhouette of the real-world object in a given video sequence. Each pixel difference is thresholded to determine which positions in each frame belong to the foreground; the foreground is the real-world object the user cares about. A sketch of this silhouette extraction is shown below.
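A minimal sketch of this silhouette extraction with OpenCV follows; the file names and the threshold of 30 grey levels are illustrative assumptions.

```python
# Hedged sketch of the background-subtraction silhouette extraction in step 5.
import cv2

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # empty scene
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)            # current frame

diff = cv2.absdiff(frame, background)                             # per-pixel difference
_, silhouette = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # foreground mask
cv2.imwrite("silhouette.png", silhouette)
```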
Step 6: for a video sequence, to compute the world coordinates of the points observed from that azimuth, the corresponding disparity map must be obtained. Two cameras are first set up at the azimuth of the video sequence to acquire two images; except in occluded regions, every pixel has a corresponding match point in the other image. Along corresponding scan lines of the two images satisfying the epipolar constraint, matching pixel sequences are searched for and evaluated, and the horizontal-coordinate difference of each matched pair is computed and stored in the disparity map as the disparity value. After the object's silhouette in the image has been computed and the disparity map of this azimuth obtained, the projection formula is used to compute the three-dimensional coordinates, in the world coordinate system, of the real-world object surface points visible from this azimuth. Binocular stereo vision is used at each azimuth to obtain the extrinsic parameter matrices of the left and right images and to compute the baseline distance b between the left and right cameras, while the intrinsic parameter matrices of the cameras are stored using a computer calibration method. By traversing the foreground-object points in the silhouette map, and using the intrinsic and extrinsic matrices, b, and the disparity value of each point, the projection formula projects each two-dimensional pixel of the image plane into the camera coordinate system, yielding the corresponding three-dimensional point. Because there are cameras at multiple azimuths, the different camera coordinate systems must be unified into the world coordinate system: the projected three-dimensional points are transformed into the world coordinate system using the extrinsic parameters already obtained, so that the three-dimensional point sets computed at each azimuth are unified in one world coordinate system. A sketch of the disparity computation and back-projection is given below.
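A sketch of the disparity computation and back-projection follows, using OpenCV block matching as a stand-in for the scan-line matching described above; the focal length, baseline and principal point are illustrative assumptions, and the final extrinsic transform into the world coordinate system is only indicated in a comment.

```python
# Hedged sketch of the stereo disparity and back-projection in step 6.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
silhouette = cv2.imread("silhouette.png", cv2.IMREAD_GRAYSCALE)  # from the step-5 sketch

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # disparity in pixels

f, b = 700.0, 0.12           # illustrative focal length (px) and baseline (m)
cx, cy = left.shape[1] / 2, left.shape[0] / 2

points = []
ys, xs = np.where((silhouette > 0) & (disparity > 0))  # foreground with valid disparity
for y, x in zip(ys, xs):
    Z = f * b / disparity[y, x]          # depth from disparity
    X = (x - cx) * Z / f
    Y = (y - cy) * Z / f
    points.append((X, Y, Z))             # point in this azimuth's camera coordinates
points = np.array(points)
# A further rigid transform (this azimuth's extrinsics) would map these points
# into the common world coordinate system.
```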
Step 7: after the observable real-world object surface points of a given video sequence have been computed, what is obtained is a point set of partial surface points. The surface shape of the real-world object is then represented by the three-dimensional convex hull of the point cloud: first select four non-coplanar points and construct a tetrahedron; then add the remaining points, in some order, to the polyhedron constructed so far. If a newly added point lies inside the polyhedron, it is simply ignored and the next point is processed; if it lies outside, new edges and faces are constructed and added to the current polyhedron, and the edges and faces that are no longer visible are deleted, finally yielding the model of the real-world object. A sketch using an incremental hull implementation is given below.
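A sketch of the incremental hull follows; it uses SciPy's Qhull-based ConvexHull in incremental mode rather than the tetrahedron-seeded construction described above, and the random point cloud stands in for the fused surface points.

```python
# Hedged sketch of the incremental 3-D convex hull in step 7.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
initial = rng.random((4, 3))              # four (almost surely) non-coplanar points
hull = ConvexHull(initial, incremental=True)

new_points = rng.random((200, 3))          # remaining surface points, added later
hull.add_points(new_points)                # interior points do not change the hull
hull.close()

print(len(hull.vertices), "hull vertices,", len(hull.simplices), "triangular faces")
```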
Step 8: as shown in Fig. 5, a newly entering user first sends an initialization request to all existing users to determine its neighbouring video sequences. If a shared region exists between two video sequences, the spatial position relationship between them can be computed from a point in the shared region and this positional information is stored; if no shared region exists, the video sequences of the other azimuths return a null value over the network. Once the new user has determined the left and right video sequences closest to it, it sends requests to those two sequences and obtains the existing real-world object representation information of the two azimuths. What the new azimuth obtains is again a point set of real-world object surface points; the point data returned by the two azimuths are fused to obtain the real-world object model for the new azimuth. A sketch of this fusion step is shown below.
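A sketch of the fusion for the new azimuth follows, under the assumption that each neighbouring sequence replies with its surface-point set (or a null value) and that the fused model is again represented by a convex hull; the data here are illustrative.

```python
# Hedged sketch of the point-set fusion for a newly joining user in step 8.
import numpy as np
from scipy.spatial import ConvexHull

def fuse_neighbour_points(points_left, points_right):
    """Merge the surface-point sets returned by the left and right neighbours."""
    replies = [p for p in (points_left, points_right) if p is not None]  # drop nulls
    if not replies:
        return None                     # no neighbour could supply information
    fused = np.vstack(replies)
    return ConvexHull(fused)

rng = np.random.default_rng(1)
left_pts = rng.random((100, 3))          # surface points from the left azimuth
right_pts = rng.random((100, 3)) + 0.2   # surface points from the right azimuth
hull = fuse_neighbour_points(left_pts, right_pts)
print(hull.vertices.shape if hull is not None else "no data")
```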

Claims (6)

1. A multi-azimuth modeling method for real-world objects of a shared augmented reality scene, characterized in that it comprises the following steps:
Step 1: abstract the shared augmented reality scene onto a plane to form a polygonal structure, partition this polygonal structure, and from the partition compute the minimum number of observation points required to observe and cover every part of the shared augmented reality scene;
wherein the polygonal structure is partitioned by triangulation, dividing the polygonal structure into multiple mutually non-overlapping triangles;
in the process of decomposing the polygon into triangles, line segments connecting different vertices of the polygon are produced; these segments lie entirely inside the polygon and are called diagonals; when selecting the observation points to be placed, first count the number of diagonals through each vertex and find the maximum, taking the vertex with the most diagonals as an observation point; then compute the set of triangles observable from that vertex, remove it from the polygonal structure and update the diagonals of the remaining area, repeating until the remaining area is empty;
Step 2: in the shared augmented reality scene, place at least one fiducial marker for each observation point, determine the observation area of each observation point, compute the scale-invariant feature vectors within the observation area of each observation point, and use the fiducial markers together with feature matching to determine the shared observation areas between observation points;
Step 3: each observation point performs three-dimensional registration guided by the fiducial markers;
Step 4: from the observations of the fiducial marker in the shared observation area of two observation points, compute the spatial position relationship between those two observation points;
Step 5: repeat step 4 until the spatial position relationship between each observation point and at least one other observation point has been computed;
Step 6: capture the information of the real-world object to be modeled, and build the object's silhouette map and disparity map with respect to each observation point;
Step 7: from the silhouette maps and disparity maps, rapidly create the real-world object model for any new observation point.
2. The multi-azimuth modeling method for real-world objects of a shared augmented reality scene according to claim 1, characterized in that in step 1 the shared augmented reality scene is abstracted onto a plane by projecting its edges onto a horizontal plane.
3. The multi-azimuth modeling method for real-world objects of a shared augmented reality scene according to claim 1, characterized in that the observation points are camera positions.
4. The multi-azimuth modeling method for real-world objects of a shared augmented reality scene according to claim 1, characterized in that the information of the real-world object to be modeled is captured by manual selection.
5. The multi-azimuth modeling method for real-world objects of a shared augmented reality scene according to claim 1, characterized in that in step 6 the information of the real-world object to be modeled is obtained by detecting a real-world object newly entering the shared augmented reality scene and capturing that object.
6. The multi-azimuth modeling method for real-world objects of a shared augmented reality scene according to claim 1, characterized in that in step 6 the disparity map of the real-world object with respect to each observation point is built by taking two camera positions at each observation point and measuring the depth of the real-world object from those two camera positions to form the disparity map.
CN201110287207.8A 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way Expired - Fee Related CN102509348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110287207.8A CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110287207.8A CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Publications (2)

Publication Number Publication Date
CN102509348A CN102509348A (en) 2012-06-20
CN102509348B true CN102509348B (en) 2014-06-25

Family

ID=46221425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110287207.8A Expired - Fee Related CN102509348B (en) 2011-09-26 2011-09-26 Method for showing actual object in shared enhanced actual scene in multi-azimuth way

Country Status (1)

Country Link
CN (1) CN102509348B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3330928A4 (en) * 2015-07-28 2019-02-27 Hitachi, Ltd. Image generation device, image generation system, and image generation method

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8810598B2 (en) 2011-04-08 2014-08-19 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
CN103366610B (en) * 2013-07-03 2015-07-22 央数文化(上海)股份有限公司 Augmented-reality-based three-dimensional interactive learning system and method
US9582516B2 (en) 2013-10-17 2017-02-28 Nant Holdings Ip, Llc Wide area augmented reality location-based services
CN104657568B (en) * 2013-11-21 2017-10-03 深圳先进技术研究院 Many people's moving game system and methods based on intelligent glasses
CN103617317B (en) * 2013-11-26 2017-07-11 Tcl集团股份有限公司 The autoplacement method and system of intelligent 3D models
US10593113B2 (en) * 2014-07-08 2020-03-17 Samsung Electronics Co., Ltd. Device and method to display object with visual effect
CN104596502B (en) * 2015-01-23 2017-05-17 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN106984043B (en) * 2017-03-24 2020-08-07 武汉秀宝软件有限公司 Data synchronization method and system for multiplayer battle game
EP3435250A1 (en) * 2017-07-27 2019-01-30 Vestel Elektronik Sanayi ve Ticaret A.S. Method, apparatus and computer program for overlaying a web page on a 3d object
CN108564661B (en) * 2018-01-08 2022-06-28 佛山市超体软件科技有限公司 Recording method based on augmented reality scene
CN111882516B (en) * 2020-02-19 2023-07-07 南京信息工程大学 Image quality evaluation method based on visual saliency and deep neural network
CN113223186B (en) * 2021-07-07 2021-10-15 江西科骏实业有限公司 Processing method, equipment, product and device for realizing augmented reality
CN113920027B (en) * 2021-10-15 2023-06-13 中国科学院光电技术研究所 Sequence image rapid enhancement method based on two-way projection
CN114663624B (en) * 2022-03-14 2024-07-26 东南大学 Spatial range sensing method based on augmented reality
CN115633248B (en) * 2022-12-22 2023-03-31 浙江宇视科技有限公司 Multi-scene cooperative detection method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2044975B1 (en) * 2007-10-01 2012-02-22 BrainLAB AG Method for registering 2D image data, computer program product, navigation method for navigating a treatment device in the medical field and computer device for registering 2D image data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences; Xin Jin, et al; 2011 Sixth Annual Chinagrid Conference (ChinaGrid); 2011-08-23; pp. 234-240 *
Dieter Schmalstieg et al. Managing Complex Augmented Reality Models. Computer Graphics and Applications, IEEE, 2007, Vol. 27, No. 4, pp. 48-57.
Managing Complex Augmented Reality Models; Dieter Schmalstieg, et al; Computer Graphics and Applications, IEEE; 2007-08-31; Vol. 27, No. 4; pp. 48-57 *
Xin Jin, et al. Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences. 2011 Sixth Annual Chinagrid Conference (ChinaGrid). 2011, pp. 234-240.
Zhao Qinping. Overview of Virtual Reality. Science in China Series F: Information Sciences. 2009, Vol. 39, No. 1, pp. 2-46. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3330928A4 (en) * 2015-07-28 2019-02-27 Hitachi, Ltd. Image generation device, image generation system, and image generation method

Also Published As

Publication number Publication date
CN102509348A (en) 2012-06-20

Similar Documents

Publication Publication Date Title
CN102509348B (en) Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN110782524B (en) Indoor three-dimensional reconstruction method based on panoramic image
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
US20190311471A1 (en) Inconsistency detecting system, mixed-reality system, program, and inconsistency detecting method
CN102938844B (en) Three-dimensional imaging is utilized to generate free viewpoint video
CN103955920B (en) Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN108053476B (en) Human body parameter measuring system and method based on segmented three-dimensional reconstruction
CN110728671B (en) Dense reconstruction method of texture-free scene based on vision
CN110458897B (en) Multi-camera automatic calibration method and system and monitoring method and system
CN102509343B (en) Binocular image and object contour-based virtual and actual sheltering treatment method
WO2021203883A1 (en) Three-dimensional scanning method, three-dimensional scanning system, and computer readable storage medium
CN106709947A (en) RGBD camera-based three-dimensional human body rapid modeling system
CN111060924B (en) SLAM and target tracking method
CN103400409A (en) 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN110148217A (en) A kind of real-time three-dimensional method for reconstructing, device and equipment
CN110889873A (en) Target positioning method and device, electronic equipment and storage medium
Rothermel et al. Potential of dense matching for the generation of high quality digital elevation models
Jin et al. An Indoor Location‐Based Positioning System Using Stereo Vision with the Drone Camera
CN112184793B (en) Depth data processing method and device and readable storage medium
CN110349249A (en) Real-time dense method for reconstructing and system based on RGB-D data
CN114543787B (en) Millimeter-scale indoor map positioning method based on fringe projection profilometry
CN109085603A (en) Optical 3-dimensional imaging system and color three dimensional image imaging method
CN114140539A (en) Method and device for acquiring position of indoor object
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values
Liu et al. The applications and summary of three dimensional reconstruction based on stereo vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140625