CN106548519A - Realistic augmented reality method based on ORB-SLAM and a depth camera - Google Patents

Realistic augmented reality method based on ORB-SLAM and a depth camera

Info

Publication number
CN106548519A
CN106548519A (application CN201610963053.2A)
Authority
CN
China
Prior art keywords
camera
depth
slam
real
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610963053.2A
Other languages
Chinese (zh)
Inventor
应高选
张剑华
钱胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xuan Cai & Network Technology Co Ltd
Original Assignee
Shanghai Xuan Cai & Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xuan Cai & Network Technology Co Ltd
Priority to CN201610963053.2A
Publication of CN106548519A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Abstract

A realistic augmented reality method based on ORB-SLAM and a depth camera comprises the following steps: 1) determining an initial coordinate plane using a Marker-less technique and initializing a SLAM coordinate system with true scale; 2) importing the map and, according to the extracted feature points, combining the directly represented map with the feature map; 3) building a true-scale map with monocular SLAM and saving the map; 4) placing a virtual object in the scene and computing the distance between the virtual object and the camera in the 3D engine; 5) acquiring scene depth data with the depth camera and fusing them with the virtual-object distance data obtained from the 3D engine, thereby achieving occlusion so that real objects and virtual models interact convincingly. The invention is highly robust and realizes mutual occlusion between real objects and virtual data.

Description

Realistic augmented reality method based on ORB-SLAM and a depth camera
Technical field
The invention belongs to the technical field of AR (Augmented Reality), combining SLAM technology for perceiving environmental information with depth-camera technology for acquiring three-dimensional scene information, and relates to a realistic augmented reality method.
Background technology
SLAM (Simultaneous Localization and Mapping) refers to a robot building a map of its environment while its own position is uncertain, and simultaneously using that map for autonomous localization and navigation. ORB-SLAM (a real-time, accurate monocular SLAM system based on ORB features) is built entirely on ORB features, a rotation-invariant feature descriptor; a recent advance of ORB-SLAM is semi-dense scene reconstruction based on its keyframes. Augmented reality applies computer-generated information to the real world, superimposing computer-generated virtual objects, scenes or system prompts onto real scenes so as to enhance reality. AR technology can be applied to numerous fields such as medicine, the military, industrial maintenance, and entertainment.
Mainstream products currently developed on SLAM technology include Microsoft's HoloLens and Google's Project Tango. Microsoft's HoloLens uses AR technology to superimpose a layer of virtual imagery on the real world. The recent worldwide popularity of Pokemon Go shows the broad market for AR technology in games, and illustrates, from another angle, the importance of SLAM technology in future games. As the AR concept spreads and related demand becomes clearer, SLAM algorithms are regarded as one of the best choices for understanding the real world in AR, and more and more AR applications use the performance of SLAM-class algorithms as an important evaluation criterion.
However, current AR systems still have defects in virtual-real integration, chiefly that virtual objects cannot be convincingly fused into the real scene. Because AR systems usually represent the real scene seen by the camera as a two-dimensional video, the three-dimensional virtual object is always drawn in front of the video. Even if the user intends the virtual object to sit between two real objects in the video, when another object lies between that real object and the virtual object, the user still perceives the virtual object as floating outside the real scene. The main cause of this phenomenon is that, when a real object is located in front of the virtual model, occlusion of the virtual model cannot be realized.
The content of the invention
To overcome the shortcomings of existing AR technologies, which cannot accurately compute the three-dimensional information of objects, cannot correctly understand and reconstruct occluding objects, and have poor robustness, the present invention provides a more robust, realistic augmented reality method based on ORB-SLAM and a depth camera. SLAM perceives the three-dimensional environment of objects, the depth camera acquires scene distance information, and real objects and virtual models are fused in depth to achieve a parallax-like effect similar to human vision, making gesture recognition, viewpoint tracking and scene reconstruction more accurate, so that real objects can occlude virtual models.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A realistic augmented reality method based on ORB-SLAM and a depth camera, the method comprising the following steps:
1) determining the initial coordinate plane using a Marker-less technique (augmented reality based on a small number of markers) and initializing a SLAM coordinate system with true scale: the Marker-less step first extracts ORB feature points, matches them with knnMatch (the k-nearest-neighbour matching algorithm), computes the homography matrix, obtains the coordinates of the four vertices via the perspective transform, and then solves for the rotation-translation matrix;
2) importing the map and, according to the extracted feature points, combining the directly represented map with the feature map; performing SLAM localization and navigation, and passing the real camera position and attitude to the position and attitude of the virtual camera in the 3D engine;
3) building a true-scale map with monocular SLAM and saving the map;
4) placing the virtual object in the scene and computing the distance between the virtual object and the camera in the 3D engine;
5) acquiring scene depth data with the depth camera, and performing depth fusion with the virtual-object distance data obtained in the 3D engine, thereby achieving occlusion and giving real objects and virtual models a strong sense of mutual interaction.
Further, in step 1), the Marker-less target is a quadrilateral, and the polygonal approximation result satisfies the following conditions:
1.1) it has exactly four vertices;
1.2) it must be a convex polygon;
1.3) the length of each side must not be too small;
1.4) because the scale of the marker picture can be learnt in advance, its real size is used so that the coordinate system built by SLAM has true scale.
With the above conditions, most contours are excluded and the parts most likely to be the Marker-less target are found. After such candidate contours are found, the four vertices of each polygon are saved and adjusted, and the real Marker-less target is then screened out from these candidate regions; a perspective transform is applied to the candidate regions to obtain the front view of the Marker-less target.
After the coordinates of the four vertices are obtained, the rotation-translation matrix is solved from the formula x = K[R|T]X, where X is the coordinate of a point in space relative to the world coordinate system, [R|T] is the camera extrinsic matrix, which transforms the world coordinates of the point into camera coordinates, K is the camera intrinsic matrix, which projects the point in camera coordinates onto the image plane, and x is the pixel coordinate after projection.
Further, in step 3), the environment is perceived through the camera, the acquired information is analysed, the features in the environment are extracted and saved, and an environmental map is built; that is, localization is achieved by matching multiple features using probabilistic and statistical methods.
Further, in step 4), the real-time depth information between the three-dimensional virtual object and the camera is computed by the Unity3D engine.
In step 5), the data fusion process is as follows:
5.1) the real depth data of the current frame are acquired with the depth camera;
5.2) because SLAM computes a coordinate system with true scale, once it is applied to the 3D engine the virtual coordinate system of the engine coincides well with the real-world coordinate system, so the depth data of the virtual camera also has true scale;
5.3) the real depth data and the virtual depth data are compared: points of the colour image whose depth lies in front of the model are rendered in front of the model, and points of the colour image whose depth lies behind the model are rendered behind the model.
The beneficial effects of the present invention are mainly: strong robustness, with good handling of images captured under vigorous camera motion, generous loop closing, relocalization, and even fully automatic pose initialization; and, by combining a depth camera and fusing the real depth data it acquires with the virtual depth data obtained from the 3D engine, mutual occlusion between real objects and virtual data is realized, giving AR scenes a more realistic experience and wider applicability.
Specific embodiment
The invention will be further described below.
A realistic augmented reality method based on ORB-SLAM and a depth camera comprises the following steps:
1) The initial coordinate plane is determined using the Marker-less technique: ORB feature points are extracted, the rotation-translation matrix is solved, and a SLAM coordinate system with true scale is initialized.
Since the Marker-less target is a quadrilateral, its polygonal approximation result should satisfy the following conditions:
1.1) it has exactly four vertices;
1.2) it must be a convex polygon;
1.3) the length of each side must not be too small;
1.4) because the scale of the marker picture can be learnt in advance, its real size can be used so that the coordinate system built by SLAM has true scale.
With the above conditions, most contours can be excluded, so the parts most likely to be the Marker-less target are found. After such candidate contours are found, the four vertices of each polygon are saved and adjusted appropriately, and the real Marker-less target is further screened out from these candidate regions. To make it easier to extract the information in the Marker-less target, a perspective transform is applied to the candidate regions to obtain its front view, as sketched below.
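The following Python/OpenCV sketch illustrates one way the candidate screening and front-view rectification above could be implemented; it is an illustrative assumption, not the patent's own code, and MIN_SIDE_LEN and MARKER_SIZE are made-up parameters.

```python
import cv2
import numpy as np

MIN_SIDE_LEN = 20    # pixels; condition 1.3: reject quads with very short sides
MARKER_SIZE = 256    # side length of the rectified front view

def find_marker_candidates(gray):
    """Return 4-vertex convex contours that may be the Marker-less target."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
    candidates = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or not cv2.isContourConvex(approx):
            continue                                   # conditions 1.1 and 1.2
        pts = approx.reshape(4, 2).astype(np.float32)
        sides = np.linalg.norm(pts - np.roll(pts, 1, axis=0), axis=1)
        if sides.min() < MIN_SIDE_LEN:
            continue                                   # condition 1.3
        candidates.append(pts)
    return candidates

def rectify_front_view(gray, quad):
    """Warp one candidate quad (vertices ordered clockwise from top-left)
    to a square front view for further verification."""
    dst = np.float32([[0, 0], [MARKER_SIZE, 0],
                      [MARKER_SIZE, MARKER_SIZE], [0, MARKER_SIZE]])
    H = cv2.getPerspectiveTransform(quad, dst)
    return cv2.warpPerspective(gray, H, (MARKER_SIZE, MARKER_SIZE))
```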
Knowing the coordinates of the four vertices, the rotation-translation matrix is solved from the formula x = K[R|T]X, where X is the coordinate of a point in space (relative to the world coordinate system), [R|T] is the camera extrinsic matrix, which transforms the world coordinates of the point into camera coordinates, K is the camera intrinsic matrix, which projects the point in camera coordinates onto the image plane, and x is the pixel coordinate after projection.
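A minimal sketch of recovering [R|T] from the four vertex coordinates: it assumes the physical marker dimensions (marker_w x marker_h, in metres) are known in advance, which is what gives the SLAM coordinate system its true scale, and uses OpenCV's solvePnP as a stand-in for the homography-based derivation described in the text.

```python
import cv2
import numpy as np

def marker_pose(corners_px, marker_w, marker_h, K, dist_coeffs=None):
    """corners_px: 4x2 image coordinates of the marker corners, ordered
    top-left, top-right, bottom-right, bottom-left. Returns (R, t)."""
    # 3D corner coordinates in the world frame, in real units (true scale).
    object_pts = np.float32([[0, 0, 0],
                             [marker_w, 0, 0],
                             [marker_w, marker_h, 0],
                             [0, marker_h, 0]])
    ok, rvec, tvec = cv2.solvePnP(object_pts, np.float32(corners_px), K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix
    return R, tvec                   # [R|T]: world -> camera transform
```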
2) SLAM localization and navigation.
According to the extracted feature points, the directly represented map is combined with the feature map being built; the camera trajectory is estimated while the environment is being constructed, and the real camera position and attitude are passed to the position and attitude of the virtual camera in the 3D engine, as in the sketch below.
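A hedged sketch of handing the tracked pose to the virtual camera: since [R|T] maps world coordinates to camera coordinates, the camera's position in the world frame is -R^T t and its orientation is R^T; engine-specific axis conventions (for example Unity's left-handed coordinate system) are deliberately omitted.

```python
import numpy as np

def camera_pose_for_engine(R, t):
    """Invert the world->camera transform to obtain the camera's world pose,
    which is then assigned to the virtual camera in the 3D engine."""
    orientation = R.T                                          # camera orientation in world frame
    position = (-R.T @ np.asarray(t).reshape(3, 1)).ravel()    # camera centre in world frame
    return position, orientation
```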
3) Monocular SLAM builds a true-scale map.
The environment is perceived through vision (the camera), the acquired information is analysed, and the features in the environment are extracted and saved to build an environmental map. The basic principle of SLAM is to match multiple features by probabilistic and statistical methods so as to achieve localization and reduce localization error. The visual SLAM method completes accurate inter-frame motion parameter estimation through the observation model, easing the limitation imposed by high algorithmic complexity.
4) According to the design requirements, the three-dimensional virtual object is placed at the appropriate position, usually on the plane determined in step 3). The real-time depth information between the three-dimensional virtual object and the camera is computed by the Unity3D engine. Depth information can be obtained through the function UNITY_SAMPLE_DEPTH, but it is non-linear and cannot be fused directly with the real scene depth; the linear depth of the virtual object must first be obtained by further processing. The process pseudocode is as follows (a sketch of an exact conversion follows this list):
4.1) set, through _DepthPower, the parameter for converting non-linear depth into linear depth;
4.2) compute the non-linear depth d of the virtual object using the function UNITY_SAMPLE_DEPTH;
4.3) convert d into linear depth using the function pow and the parameter from 4.1).
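The pseudocode above uses a pow()-based approximation controlled by _DepthPower. For reference, the following sketch shows a standard exact conversion from a conventional (non-reversed) perspective depth-buffer value d in [0,1] to linear eye-space depth, given the camera's near and far planes; the depth-buffer convention is an assumption here, not the patent's own formula.

```python
import numpy as np

def linear_eye_depth(d, near, far):
    """Map non-linear depth-buffer values d in [0,1] (d=0 at the near plane,
    d=1 at the far plane) to metric distance from the camera."""
    d = np.asarray(d, dtype=np.float32)
    return near * far / (far - d * (far - near))
```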
5) The depth camera is used to map depth data into the real environment, achieving mutual occlusion between virtual objects and real objects.
The depth camera can quickly identify and track targets, and its range information yields richer positional relationships between objects, i.e. foreground is distinguished from background and scene depth data are acquired in real time. The virtual-object depth data obtained in the 3D engine are then fused with the scene depth data obtained by the camera, realizing mutual occlusion between virtual objects and real objects, so that the AR scene has a stronger sense of reality.
Data fusion steps (see the sketch after this list):
5.1) the real depth data of the current frame are acquired with the depth camera;
5.2) because SLAM computes a coordinate system with true scale, once it is applied to the 3D engine the virtual coordinate system of the engine coincides well with the real-world coordinate system, so the depth data of the virtual camera also has true scale;
5.3) the real depth data and the virtual depth data are compared: points of the colour image whose depth lies in front of the model are rendered in front of the model, and points of the colour image whose depth lies behind the model are rendered behind the model.
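A minimal numpy sketch of step 5.3, assuming the real depth map from the depth camera and the virtual depth map rendered by the engine are already metric (true scale) and aligned to the colour image; the array names are illustrative.

```python
import numpy as np

def compose_with_occlusion(color_real, color_virtual, depth_real, depth_virtual,
                           virtual_mask):
    """Per-pixel depth comparison: wherever the real surface is closer than the
    virtual model, the real colour pixel is rendered in front of the model;
    otherwise the virtual model is rendered in front."""
    # Pixels covered by the virtual model where the real depth is valid (> 0)
    # and nearer than the virtual depth: the real object occludes the model.
    real_in_front = virtual_mask & (depth_real > 0) & (depth_real < depth_virtual)
    out = color_virtual.copy()
    out[~virtual_mask] = color_real[~virtual_mask]    # background: the camera image
    out[real_in_front] = color_real[real_in_front]    # real object occludes the model
    return out
```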

Claims (5)

1. A realistic augmented reality method based on ORB-SLAM and a depth camera, characterised in that the method comprises the following steps:
1) determining the initial coordinate plane using a Marker-less technique and initializing a SLAM coordinate system with true scale: first extracting ORB feature points, matching them with knnMatch, computing the homography matrix, obtaining the coordinates of the four vertices via the perspective transform, and then solving for the rotation-translation matrix;
2) importing the map and, according to the extracted feature points, combining the directly represented map with the feature map; performing SLAM localization and navigation, and passing the real camera position and attitude to the position and attitude of the virtual camera in the 3D engine;
3) building a true-scale map with monocular SLAM and saving the map;
4) placing the virtual object in the scene and computing the distance between the virtual object and the camera in the 3D engine;
5) acquiring scene depth data with the depth camera, and performing depth fusion with the virtual-object distance data obtained in the 3D engine, thereby achieving occlusion and giving real objects and virtual models a strong sense of mutual interaction.
2. The realistic augmented reality method based on ORB-SLAM and a depth camera according to claim 1, characterised in that in step 1) the Marker-less target is a quadrilateral and the polygonal approximation result satisfies the following conditions:
1.1) it has exactly four vertices;
1.2) it must be a convex polygon;
1.3) the length of each side must not be too small;
1.4) because the scale of the marker picture can be learnt in advance, its real size is used so that the coordinate system built by SLAM has true scale;
with the above conditions, most contours are excluded and the parts most likely to be the Marker-less target are found; after such candidate contours are found, the four vertices of each polygon are saved and adjusted, and the real Marker-less target is then screened out from these candidate regions; a perspective transform is applied to the candidate regions to obtain the front view of the Marker-less target;
after the coordinates of the four vertices are obtained, the rotation-translation matrix is solved from the formula x = K[R|T]X, where X is the coordinate of a point in space relative to the world coordinate system, [R|T] is the camera extrinsic matrix, which transforms the world coordinates of the point into camera coordinates, K is the camera intrinsic matrix, which projects the point in camera coordinates onto the image plane, and x is the pixel coordinate after projection.
3. The realistic augmented reality method based on ORB-SLAM and a depth camera according to claim 1 or 2, characterised in that in step 3) the environment is perceived through the camera, the acquired information is analysed, the features in the environment are extracted and saved, and an environmental map is built, i.e. localization is achieved by matching multiple features using probabilistic and statistical methods.
4. The realistic augmented reality method based on ORB-SLAM and a depth camera according to claim 1 or 2, characterised in that in step 4) the real-time depth information between the three-dimensional virtual object and the camera is computed by the Unity3D engine.
5. The realistic augmented reality method based on ORB-SLAM and a depth camera according to claim 1 or 2, characterised in that in step 5) the data fusion process is as follows:
5.1) the real depth data of the current frame are acquired with the depth camera;
5.2) because SLAM computes a coordinate system with true scale, once it is applied to the 3D engine the virtual coordinate system of the engine coincides well with the real-world coordinate system, so the depth data of the virtual camera also has true scale;
5.3) the real depth data and the virtual depth data are compared: points of the colour image whose depth lies in front of the model are rendered in front of the model, and points of the colour image whose depth lies behind the model are rendered behind the model.
CN201610963053.2A 2016-11-04 2016-11-04 Realistic augmented reality method based on ORB-SLAM and a depth camera Pending CN106548519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610963053.2A CN106548519A (en) 2016-11-04 2016-11-04 Realistic augmented reality method based on ORB-SLAM and a depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610963053.2A CN106548519A (en) 2016-11-04 2016-11-04 Realistic augmented reality method based on ORB-SLAM and a depth camera

Publications (1)

Publication Number Publication Date
CN106548519A true CN106548519A (en) 2017-03-29

Family

ID=58394247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610963053.2A Pending CN106548519A (en) Realistic augmented reality method based on ORB-SLAM and a depth camera

Country Status (1)

Country Link
CN (1) CN106548519A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAOMIN LIU ET AL: "Robust Keyframe-based Monocular SLAM for Augmented Reality", 2016 IEEE International Symposium on Mixed and Augmented Reality *
LIU HAOMIN et al.: "A Survey of Monocular Vision-Based Simultaneous Localization and Mapping Methods", Journal of Computer-Aided Design & Computer Graphics *
LI HONGBO et al.: "Virtual-Real Occlusion Handling Method with Dynamically Changing Background Frames", Computer Engineering and Design *
RAO LINGSHAN et al.: "Scene Reconstruction and Moving Object Tracking Techniques for Augmented Reality Games", Computer Engineering and Applications *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204015A (en) * 2017-05-27 2017-09-26 中山大学 Instant positioning based on color image and infrared image fusion is with building drawing system
CN107204015B (en) * 2017-05-27 2021-06-08 中山大学 Instant positioning and mapping system based on color image and infrared image fusion
CN107185245A (en) * 2017-05-31 2017-09-22 武汉秀宝软件有限公司 A kind of actual situation synchronous display method and system based on SLAM technologies
CN107564107A (en) * 2017-07-19 2018-01-09 中国农业大学 A kind of design method and equipment of augmented reality implementation tool
CN107564089B (en) * 2017-08-10 2022-03-01 腾讯科技(深圳)有限公司 Three-dimensional image processing method, device, storage medium and computer equipment
CN107564089A (en) * 2017-08-10 2018-01-09 腾讯科技(深圳)有限公司 Three dimensional image processing method, device, storage medium and computer equipment
CN107766855A (en) * 2017-10-25 2018-03-06 南京阿凡达机器人科技有限公司 Chess piece localization method, system, storage medium and robot based on machine vision
CN107766855B (en) * 2017-10-25 2021-09-07 南京阿凡达机器人科技有限公司 Chessman positioning method and system based on machine vision, storage medium and robot
CN108109208A (en) * 2017-12-01 2018-06-01 同济大学 A kind of marine wind electric field augmented reality method
CN107958466B (en) * 2017-12-01 2022-03-29 大唐国信滨海海上风力发电有限公司 Slam algorithm optimization model-based tracking method
CN107958466A (en) * 2017-12-01 2018-04-24 大唐国信滨海海上风力发电有限公司 A kind of tracking of the Slam algorithm optimizations based on model
CN108109208B (en) * 2017-12-01 2022-02-08 同济大学 Augmented reality method for offshore wind farm
CN110120098B (en) * 2018-02-05 2023-10-13 浙江商汤科技开发有限公司 Scene scale estimation and augmented reality control method and device and electronic equipment
CN110120098A (en) * 2018-02-05 2019-08-13 浙江商汤科技开发有限公司 Scene size estimation and augmented reality control method, device and electronic equipment
CN108364344A (en) * 2018-02-08 2018-08-03 重庆邮电大学 A kind of monocular real-time three-dimensional method for reconstructing based on loopback test
CN108537844A (en) * 2018-03-16 2018-09-14 上海交通大学 A kind of vision SLAM winding detection methods of fusion geological information
CN108537844B (en) * 2018-03-16 2021-11-26 上海交通大学 Visual SLAM loop detection method fusing geometric information
CN108805985B (en) * 2018-03-23 2022-02-15 福建数博讯信息科技有限公司 Virtual space method and device
CN108805985A (en) * 2018-03-23 2018-11-13 福建数博讯信息科技有限公司 Virtual Space method and apparatus
WO2019184889A1 (en) * 2018-03-26 2019-10-03 Oppo广东移动通信有限公司 Method and apparatus for adjusting augmented reality model, storage medium, and electronic device
CN108537889A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the electronic equipment of augmented reality model
CN108596964A (en) * 2018-05-02 2018-09-28 厦门美图之家科技有限公司 Depth data acquisition methods, device and readable storage medium storing program for executing
CN108735052A (en) * 2018-05-09 2018-11-02 北京航空航天大学青岛研究院 A kind of augmented reality experiment with falling objects method based on SLAM
CN108876900A (en) * 2018-05-11 2018-11-23 重庆爱奇艺智能科技有限公司 A kind of virtual target projective techniques merged with reality scene and system
CN110599432A (en) * 2018-06-12 2019-12-20 光宝电子(广州)有限公司 Image processing system and image processing method
CN110599432B (en) * 2018-06-12 2023-02-24 光宝电子(广州)有限公司 Image processing system and image processing method
CN108905208A (en) * 2018-06-21 2018-11-30 珠海金山网络游戏科技有限公司 A kind of electronic gaming method and device based on augmented reality
CN109636916A (en) * 2018-07-17 2019-04-16 北京理工大学 A kind of a wide range of virtual reality roaming system and method for dynamic calibration
CN109636916B (en) * 2018-07-17 2022-12-02 北京理工大学 Dynamic calibration large-range virtual reality roaming system and method
CN110827376A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Augmented reality multi-plane model animation interaction method, device, equipment and storage medium
CN110827411A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Self-adaptive environment augmented reality model display method, device, equipment and storage medium
US10853649B2 (en) * 2018-08-27 2020-12-01 Dell Products, L.P. Context-aware hazard detection using world-facing cameras in virtual, augmented, and mixed reality (xR) applications
CN109471521A (en) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Virtual and real shielding interaction method and system in AR environment
CN109636852A (en) * 2018-11-23 2019-04-16 浙江工业大学 A kind of monocular SLAM initial method
CN111815696A (en) * 2019-04-11 2020-10-23 曜科智能科技(上海)有限公司 Depth map optimization method, device, equipment and medium based on semantic instance segmentation
CN110009732A (en) * 2019-04-11 2019-07-12 司岚光电科技(苏州)有限公司 Based on GMS characteristic matching towards complicated large scale scene three-dimensional reconstruction method
CN111815696B (en) * 2019-04-11 2023-08-22 曜科智能科技(上海)有限公司 Depth map optimization method, device, equipment and medium based on semantic instance segmentation
CN110009732B (en) * 2019-04-11 2023-10-03 司岚光电科技(苏州)有限公司 GMS feature matching-based three-dimensional reconstruction method for complex large-scale scene
CN110910484A (en) * 2019-12-03 2020-03-24 上海世长信息科技有限公司 SLAM-based object mapping method from two-dimensional image to three-dimensional real scene
CN112258633A (en) * 2020-10-23 2021-01-22 华中科技大学鄂州工业技术研究院 High-precision scene reconstruction method and device based on SLAM technology
CN112258633B (en) * 2020-10-23 2023-02-28 华中科技大学鄂州工业技术研究院 SLAM technology-based scene high-precision reconstruction method and device
CN112308980A (en) * 2020-10-30 2021-02-02 脸萌有限公司 Augmented reality interactive display method and equipment

Similar Documents

Publication Publication Date Title
CN106548519A (en) Realistic augmented reality method based on ORB-SLAM and a depth camera
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN105631861B (en) Restore the method for 3 D human body posture from unmarked monocular image in conjunction with height map
CN106228538B (en) Binocular vision indoor orientation method based on logo
Hagbi et al. Shape recognition and pose estimation for mobile augmented reality
CN109671120A (en) A kind of monocular SLAM initial method and system based on wheel type encoder
CN101681423B (en) Method of capturing, processing, and rendering images
US20170193710A1 (en) System and method for generating a mixed reality environment
CN103177269B (en) For estimating the apparatus and method of object gesture
US8675972B2 (en) Method and device for determining the pose of a three-dimensional object in an image and method and device for creating at least one key image for object tracking
CN107990899A (en) A kind of localization method and system based on SLAM
CN105528082A (en) Three-dimensional space and hand gesture recognition tracing interactive method, device and system
KR20090110357A (en) Augmented reality method and devices using a real time automatic tracking of marker-free textured planar geometrical objects in a video stream
CN103994765B (en) Positioning method of inertial sensor
CN108955718A (en) A kind of visual odometry and its localization method, robot and storage medium
Tulyakov et al. Robust real-time extreme head pose estimation
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
WO2021002025A1 (en) Skeleton recognition method, skeleton recognition program, skeleton recognition system, learning method, learning program, and learning device
CN109978919A (en) A kind of vehicle positioning method and system based on monocular camera
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
CN111784775A (en) Identification-assisted visual inertia augmented reality registration method
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
CN109584347A (en) A kind of augmented reality mutual occlusion processing method based on active apparent model
CN107101632A (en) Space positioning apparatus and method based on multi-cam and many markers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170329