CN112116631A - Industrial augmented reality combined positioning system - Google Patents

Industrial augmented reality combined positioning system

Info

Publication number
CN112116631A
Authority
CN
China
Prior art keywords
helmet
pose
base station
camera
positioning system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010926641.5A
Other languages
Chinese (zh)
Inventor
兰卫旗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Ruike Technology Co ltd
Original Assignee
Jiangsu Ruike Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Ruike Technology Co ltd filed Critical Jiangsu Ruike Technology Co ltd
Priority to CN202010926641.5A priority Critical patent/CN112116631A/en
Priority to PCT/CN2020/115089 priority patent/WO2022047828A1/en
Publication of CN112116631A publication Critical patent/CN112116631A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An industrial augmented reality combined positioning system comprises an intelligent sensing module, a base station positioning module and a service decision module. The base station positioning module is based on the existing HTC VIVE positioning system and comprises a base station, a helmet and a server: the base station emits laser, a sensor on the helmet receives the laser signal, and the data are transmitted to the server for analysis to calculate the pose of the helmet relative to the base station. The intelligent sensing module comprises a portable microprocessor and a camera integrated on the helmet: the camera transmits image information to the microprocessor, which identifies objects and calculates the pose of the scene object through a built-in positioning and recognition algorithm. The service decision module receives the pose from the intelligent sensing module and the helmet information detected by the base station, obtains the final pose through a coordinate conversion relation, and, depending on the application scene, fuses a three-dimensional model into the video stream in real time according to the pose, thereby achieving the effect of augmented reality.

Description

Industrial augmented reality combined positioning system
[ technical field ]
The invention relates to the technical field of augmented reality positioning, in particular to an industrial augmented reality combined positioning system.
[ background of the invention ]
Augmented reality (AR) is a technology that calculates the position and orientation of the camera image in real time and overlays corresponding images; the goal of this technology is to superimpose the virtual world on the real world on a screen and allow interaction between the two.
The HTC VIVE base station is, at the present stage, the basis of the Lighthouse tracking system, which comprises several positioning base stations, a head-mounted display, interactive handles and the like. Alternately scanned horizontal and vertical lasers sweep across the HTC VIVE headset, and the small sensors on the headset detect the passing lasers. The system then combines all of the data to identify the rotation of the device and its location in 3D space. However, this positioning mode can only obtain the pose of the user relative to the base station, which is difficult to satisfy for AR scenes that require interaction and recognition.
Therefore, providing an industrial augmented reality combined positioning system is an urgent problem to be solved in the art.
[ summary of the invention ]
Aiming at the above problems, the industrial augmented reality combined positioning system comprises an intelligent sensing module, a base station positioning module and a service decision module. The base station positioning module is based on the existing HTC VIVE positioning system and comprises a base station, a helmet and a server: the base station emits laser, a sensor on the helmet receives the laser signal, and the data are transmitted to the server for analysis to calculate the pose of the helmet relative to the base station. The intelligent sensing module comprises a portable microprocessor and a camera integrated on the helmet: the camera transmits image information to the microprocessor, which identifies objects and calculates the pose of the scene object through a built-in positioning and recognition algorithm. The service decision module receives the pose from the intelligent sensing module and the helmet information detected by the base station, obtains the final pose through a coordinate conversion relation, and, depending on the application scene, fuses a three-dimensional model into the video stream in real time according to the pose, achieving the effect of augmented reality.
Further, the workflow of the positioning system is as follows:
Step 1: constructing a perception space;
Step 2: acquiring the pose of the camera;
Step 3: acquiring the pose of the helmet;
Step 4: computing the HTC VIVE positioning system pose;
Step 5: constructing a fused AR scene;
Step 6: rendering and displaying.
Further, the process and method of the coordinate transformation are as follows:
Step one: the camera uses the front end of the VINS-Fusion algorithm to process the collected images; SIFT feature points are first extracted from the images using OpenCV, and the motion relation between two frames of images is calculated according to the feature matching relation between them.
Step two: the camera coordinate system takes the left eye as its reference; the pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
Step three: in the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
Further, step 1 comprises starting the server, the helmet, the camera and the portable microprocessor and unifying the coordinate systems; connecting the HTC VIVE and the portable microprocessor; fixing the camera directly in front of the helmet; and measuring the camera coordinate system and the helmet coordinate system to calculate the R and T between them.
Further, step 2 includes: the camera acquires images of the scene; the images are preprocessed; optical flow features are extracted from the images and tracked across consecutive frames; and the stably and continuously tracked optical flow points are matched between images to obtain the Rotation and Position between two frames.
Further, in step 3 the position and attitude of the helmet are determined through the two base stations: the base stations emit laser, and after the sensors on the helmet detect the laser signal, the pose of the helmet relative to the base stations, namely the Position and the Rotation, can be calculated and sent to the server.
The industrial augmented reality combined positioning system has the following beneficial effects:
1. The system realizes the augmented reality effect on the basis of the HTC VIVE positioning system, extending the original capability of the HTC VIVE and enabling a larger range of motion.
2. The intelligent sensing module carried by the system is highly extensible beyond its scene sensing function: modules such as object recognition and pedestrian detection can be added, multiple users can be linked, and multi-user entertainment and interaction in the real scene can be realized.
[ description of the drawings ]
FIG. 1 is a system architecture diagram of the present invention.
FIG. 2 is a diagram of coordinate transformation relationships in the present invention.
FIG. 3 is a diagram of a camera matching model in the present invention.
Fig. 4 is a schematic diagram of a base station coordinate system, a geodetic coordinate system and a helmet coordinate system in accordance with the present invention.
Fig. 5 is a schematic diagram of a camera coordinate system and a helmet coordinate system according to the present invention.
[ detailed description of the embodiments ]
The directional terms of the present invention, such as "up", "down", "front", "back", "left", "right", "inner", "outer", "side", etc., are only directions in the drawings, and are only used to explain and illustrate the present invention, but not to limit the scope of the present invention.
Referring to fig. 1, the industrial augmented reality combined positioning system of the present invention includes an intelligent sensing module, a base station positioning module and a service decision module;
the base station positioning module is based on the existing HTC VIVE positioning system and comprises a base station 1, a helmet 2 and a server 3, wherein the base station emits laser, a sensor on the helmet receives a laser signal, data are transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated.
The intelligent sensing module comprises a portable microprocessor 4 and a camera 5 integrated on the helmet 2, the camera transmits image information to the microprocessor, and the object is identified and the pose of the scene object is calculated through a built-in positioning and identifying algorithm.
The service decision module receives the pose from the intelligent sensing module and the helmet information detected by the base station, obtains the final pose through a coordinate conversion relation, and, depending on the application scene, fuses the three-dimensional model into the video stream in real time according to the pose, thereby achieving the effect of augmented reality.
Referring to fig. 2, the process and method of the coordinate transformation are as follows:
Step one: the camera uses the front end of the VINS-Fusion algorithm to process the collected images; SIFT feature points are first extracted from the images using OpenCV, and the motion relation between two frames of images is calculated according to the feature matching relation between them.
Step two: the camera coordinate system takes the left eye as its reference; the pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
Step three: in the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
The calculation of R1 and T1 (where R represents a rotation matrix and T represents a displacement matrix) of the camera relative to the object in FIG. 2 is as follows. As shown in FIG. 3, the camera 5 is in continuous motion; consider two frames I1 and I2 taken during the motion, and let the image motion between the two frames be R, t. The optical centers of the camera at the two frames are O1 and O2. Now consider a feature point p1 in I1 and its corresponding feature point p2 in I2; the two feature points are obtained by feature matching.
In the coordinate system of the first frame, let the spatial position of the point P that projects to p1 and p2 be:
P = [X, Y, Z]^T    (1)
According to the pinhole model of the camera, the pixel positions of the two points p1 and p2 are:
s1·p1 = K·P,  s2·p2 = K·(R·P + t)    (2)
Taking:
x1 = K^(-1)·p1,  x2 = K^(-1)·p2    (3)
where x1, x2 are the coordinates of the two pixels on the normalized plane, and substituting into (2) gives:
x2 = R·x1 + t    (4)
Left-multiplying both sides by t^ (the antisymmetric matrix of t) gives:
t^·x2 = t^·R·x1    (5)
Left-multiplying both sides again by x2^T makes the left-hand side zero, so that x2^T·t^·R·x1 = 0. Re-substituting p1 and p2 yields:
p2^T·K^(-T)·t^·R·K^(-1)·p1 = 0    (6)
The above equation is called the epipolar constraint; it contains both the translation and the rotation. The two matrices in its middle part are the fundamental matrix F and the essential matrix E, where:
E = t^·R    (7)
The motion R, t of the camera is then recovered from the essential matrix E; R, t can be obtained by the singular value decomposition (SVD) method.
The R, t calculated in this process are R1 and T1 in FIG. 2.
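As a concrete illustration of this derivation, the following Python sketch matches SIFT features between two frames with OpenCV, estimates the essential matrix under the epipolar constraint, and recovers R, t (the R1, T1 above) through the SVD-based decomposition inside cv2.recoverPose. It is a minimal sketch of one possible implementation rather than this system's own code; the intrinsic matrix K shown at the end is a hypothetical placeholder that would come from camera calibration.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the motion R, t between two frames from SIFT feature matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep the better correspondences (Lowe's ratio test)
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Epipolar constraint p2^T K^(-T) t^ R K^(-1) p1 = 0  ->  essential matrix E = t^ R
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)

    # SVD-based decomposition of E recovers R and t (t is known only up to scale)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t

# Hypothetical intrinsics; real values come from calibrating the helmet camera
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```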
The calculation of R3 and T3 (where R represents a rotation matrix and T represents a displacement matrix) of the helmet relative to the object in FIG. 2 is as follows. As shown in FIG. 4, three relevant coordinate systems are defined: a geodetic coordinate system (world coordinate system) G, a Lighthouse optical coordinate system C, and a helmet (body) coordinate system b. After the HTC VIVE system is calibrated, the HTC VIVE helmet can be used normally. The calibration process mainly involves two relations:
the transformation from the geodetic coordinate system to the body coordinate system (formula 8); and
the transformation from the Lighthouse coordinate system to the body coordinate system (formula 9).
Five points are selected on the helmet rigid body, giving 15 unknowns and 15 equations; the calibration parameters, including the laser scanning centers of the Lighthouse coordinate system (x01, y01) and (x02, y02), are calculated from formulas (8) and (9) together with (10) and (11).
After all calibration parameters are obtained, the coordinates and attitude in the geodetic coordinate system can be derived from the known rigid-body coordinate system. The forward calculation of Gp involves 9 unknowns, which are obtained from 9 equations, yielding Cp1, Cp2 and Cp3; substituting these into the transformation to the geodetic frame yields Gp1, Gp2 and Gp3, and R3 and T3 are obtained simultaneously.
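As a hedged illustration of the final alignment, the sketch below shows a standard SVD-based (Kabsch) solution that recovers a rotation R3 and a translation T3 from corresponding points expressed in the Lighthouse frame (Cp1, Cp2, Cp3, ...) and in the geodetic frame (Gp1, Gp2, Gp3, ...). This is generic point-set registration, assumed here to capture the spirit of the calibration described above rather than reproduce formulas (8) to (11) exactly.

```python
import numpy as np

def rigid_transform(Cp, Gp):
    """Find R3, T3 such that Gp_i ≈ R3 @ Cp_i + T3 for N >= 3 corresponding points.

    Cp, Gp: (N, 3) arrays holding the same physical points, expressed in the
    Lighthouse frame and in the geodetic frame respectively.
    """
    c_mean, g_mean = Cp.mean(axis=0), Gp.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (Cp - c_mean).T @ (Gp - g_mean)
    U, _, Vt = np.linalg.svd(H)
    R3 = Vt.T @ U.T
    # Guard against a reflection (determinant -1)
    if np.linalg.det(R3) < 0:
        Vt[-1, :] *= -1
        R3 = Vt.T @ U.T
    T3 = g_mean - R3 @ c_mean
    return R3, T3
```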
Referring to fig. 5, the camera coordinate system is O and the helmet coordinate system is O1. The motion of the camera can be estimated from the matching relation of the feature points between consecutive frames, and the pose of the helmet can be calculated in real time by the HTC VIVE base station positioning system. Using the transformation between the optical coordinate system and the local coordinate system in the base station positioning principle, the pose relation between the camera and the base station can be determined once the pose change between the camera and the helmet is calculated, realizing the transformation from the image coordinate system to the camera coordinate system and then to the HTC VIVE coordinate system.
Taking a point P1(x, y) in the normalized camera coordinate system as an example, its mapping to the point P2(x, y) in the helmet coordinate system satisfies formula (12):
P2 = R·P1 + t    (12)
R = I    (13)
t = [tx, ty, tz]^T    (14)
In formula (13), R represents the rotation between the two coordinate systems; observation shows that there is no rotation between the two, so R is the identity matrix. In formula (14), t represents the displacement between the two coordinate systems, with the specific values taken from actual measurement.
Through the above formulas, the pixels of the video frame are converted into the coordinate system of the HTC VIVE, realizing the conversion from the camera coordinate system to the base station coordinate system.
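A minimal sketch of formulas (12) to (14), assuming, as stated above, that there is no rotation between the camera frame and the helmet frame and that t is a measured offset; the numeric components of t below are purely illustrative.

```python
import numpy as np

R = np.eye(3)                       # formula (13): no rotation observed between the frames
t = np.array([0.00, 0.05, 0.03])    # formula (14): illustrative offset, taken from measurement

def camera_to_helmet(P1):
    """Map a point from the normalized camera frame to the helmet frame, formula (12)."""
    return R @ P1 + t
```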
The work flow of the industrial augmented reality combined positioning system is as follows:
step 1: constructing a perceptual space
Start the server, the helmet, the camera and the portable microprocessor and unify the coordinate systems; connect the HTC VIVE and the portable microprocessor; fix the camera directly in front of the helmet; measure the camera coordinate system and the helmet coordinate system, and calculate the R and T between them.
Step 2: obtaining pose of camera
The camera acquires an image of the scene and then preprocesses it (distortion correction, adaptive local histogram equalization); optical flow features are then extracted from the image and tracked across consecutive frames; the stably and continuously tracked optical flow points are matched between images to obtain the Rotation (x, y, z, w) and the Position (x, y, z) between the two frames.
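One possible realization of this preprocessing and tracking step, sketched with OpenCV; the intrinsic matrix K and the distortion coefficients are assumed inputs from calibration, and the parameter values are illustrative rather than prescribed here.

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def preprocess(frame, K, dist):
    """Distortion correction followed by adaptive local histogram equalization."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    undistorted = cv2.undistort(gray, K, dist)
    return clahe.apply(undistorted)

def track(prev_img, cur_img):
    """Detect corners in the previous frame and track them into the current one."""
    prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, None)
    good = status.ravel() == 1
    # The stably tracked point pairs feed the pose estimation described above
    return prev_pts[good], cur_pts[good]
```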
And step 3: obtaining the pose of the helmet
The Position and pose of the helmet are positioned through the two base stations, the base stations emit laser, after the sensor on the helmet detects a laser signal, the Position and pose of the helmet relative to the base stations, namely Position (x, y, z) and Rotation (x, y, z, w), can be calculated, and the Position and pose are sent to the server.
And 4, step 4: computing HTC VIVE positioning system pose
Denote the homogeneous transformation formed by R1, T1 as T1:
T1 = [ R1  T1 ; 0  1 ]
Denote the homogeneous transformation formed by R2, T2 as T2:
T2 = [ R2  T2 ; 0  1 ]
Denote the homogeneous transformation formed by R3, T3 as T3:
T3 = [ R3  T3 ; 0  1 ]
The final R, T are then given by:
T = T1·T2·T3 = [ R  T ; 0  1 ]
and R and T are finally obtained through singular value decomposition (SVD).
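A small sketch of this composition using 4x4 homogeneous matrices; the SVD at the end re-orthonormalizes the composed rotation, which is one reading of the final SVD step mentioned above.

```python
import numpy as np

def homogeneous(R, T):
    """Pack a rotation matrix R (3x3) and a translation T (3,) into a 4x4 transform."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

def compose(T1, T2, T3):
    """T = T1 · T2 · T3; split the product back into R and T."""
    T_total = T1 @ T2 @ T3
    R = T_total[:3, :3]
    T = T_total[:3, 3]
    # Re-orthonormalize R through SVD so it remains a valid rotation
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    return R, T
```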
And 5: constructing a fused AR scene
Design the scene corresponding to the pose and the model; when the pose of the system meets the requirements set by the program, the corresponding virtual model is loaded and rendered into the real world.
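As an illustration of fusing a model into the video stream once the pose is available, the following hedged sketch projects a set of 3D model points into the current frame using the computed pose; the model points, intrinsics and drawing style are placeholders, and no particular rendering engine is implied.

```python
import cv2

def overlay_model(frame, model_pts, R, T, K, dist=None):
    """Draw the projected 3D model points onto the video frame at the current pose."""
    rvec, _ = cv2.Rodrigues(R)                       # rotation matrix -> rotation vector
    img_pts, _ = cv2.projectPoints(model_pts, rvec, T.reshape(3, 1), K, dist)
    for p in img_pts.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 3, (0, 255, 0), -1)
    return frame
```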
Step 6: rendering displays
And outputting the rendered AR scene to a display unit of the helmet for display.
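Putting steps 1 to 6 together, a per-frame loop might be organized as below. Every name here (acquire_frame, camera_pose, helmet_pose_from_server, and so on) is a hypothetical placeholder standing in for the corresponding module described above, not an API defined by this system.

```python
def run(system):
    """Hypothetical orchestration of steps 1-6; `system` bundles the real modules."""
    T2 = system.calibrate_camera_to_helmet()    # step 1: R, T between camera and helmet
    while system.running():
        frame = system.acquire_frame()          # image from the helmet camera
        T1 = system.camera_pose(frame)          # step 2: pose from feature tracking
        T3 = system.helmet_pose_from_server()   # step 3: pose from the base stations
        pose = system.compose(T1, T2, T3)       # step 4: T = T1 · T2 · T3
        scene = system.fuse(frame, pose)        # step 5: place virtual models by pose
        system.display(scene)                   # step 6: render to the helmet display
```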
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted for clarity only. Those skilled in the art should take the description as a whole, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (6)

1. An industrial augmented reality combined positioning system is characterized by comprising an intelligent perception module, a base station positioning module and a service decision module;
the base station positioning module is based on the existing HTC VIVE positioning system and comprises a base station (1), a helmet (2) and a server (3), wherein the base station emits laser, a sensor on the helmet receives a laser signal, data is transmitted to the server for analysis, and the pose of the helmet relative to the base station is calculated;
the intelligent sensing module comprises a portable microprocessor (4) and a camera (5) integrated on the helmet (2), the camera transmits image information to the microprocessor, and the object is identified and the pose of the scene object is calculated through a built-in positioning and identifying algorithm;
the service decision module receives the pose from the intelligent sensing module and the helmet information detected by the base station, obtains the final pose through a coordinate conversion relation, and, depending on the application scene, fuses the three-dimensional model into the video stream in real time according to the pose, so that the effect of augmented reality is achieved.
2. The combined industrial augmented reality positioning system of claim 1, wherein the workflow of the positioning system is as follows:
Step 1: constructing a perception space;
Step 2: acquiring the pose of the camera;
Step 3: acquiring the pose of the helmet;
Step 4: computing the HTC VIVE positioning system pose;
Step 5: constructing a fused AR scene;
Step 6: rendering and displaying.
3. The combined positioning system of industrial augmented reality as claimed in claim 1, wherein the coordinate transformation process and method are as follows:
Step one: the camera uses the front end of the VINS-Fusion algorithm to process the collected images; SIFT feature points are first extracted from the images using OpenCV, and the motion relation between two frames of images is calculated according to the feature matching relation between them.
Step two: the camera coordinate system takes the left eye as its reference; the pose of the helmet relative to the scene is first calculated by coordinate transformation and then sent to the HTC VIVE positioning system through the server.
Step three: in the HTC VIVE positioning system, the coordinate transformation of the HTC VIVE virtual scene coordinate system relative to the real world is calculated through the transformations among the helmet camera coordinate system, the base station coordinate system and the world coordinate system.
4. The combined positioning system of industrial augmented reality as claimed in claim 2, wherein step 1 comprises starting the server, the helmet, the camera and the portable microprocessor and unifying the coordinate systems, connecting the HTC VIVE and the portable microprocessor, fixing the camera directly in front of the helmet, measuring the camera coordinate system and the helmet coordinate system, and calculating the R and T between the two.
5. The combined positioning system of industrial augmented reality as claimed in claim 2, wherein the step 2 includes capturing images of a scene by a camera, preprocessing the images, performing optical flow feature extraction on the images, performing optical flow tracking on continuous frames of images, and performing image matching on stable and continuously tracked optical flow points to obtain a Rotation and a Position between two frames of images.
6. The combined positioning system of industrial augmented reality as claimed in claim 2, wherein in step 3 the pose of the helmet is determined by the two base stations: the base stations emit laser, and after the sensor on the helmet detects the laser signal, the pose of the helmet relative to the base stations, i.e. Position and Rotation, can be calculated and sent to the server.
CN202010926641.5A 2020-09-07 2020-09-07 Industrial augmented reality combined positioning system Pending CN112116631A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010926641.5A CN112116631A (en) 2020-09-07 2020-09-07 Industrial augmented reality combined positioning system
PCT/CN2020/115089 WO2022047828A1 (en) 2020-09-07 2020-09-14 Industrial augmented reality combined positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010926641.5A CN112116631A (en) 2020-09-07 2020-09-07 Industrial augmented reality combined positioning system

Publications (1)

Publication Number Publication Date
CN112116631A true CN112116631A (en) 2020-12-22

Family

ID=73802291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010926641.5A Pending CN112116631A (en) 2020-09-07 2020-09-07 Industrial augmented reality combined positioning system

Country Status (2)

Country Link
CN (1) CN112116631A (en)
WO (1) WO2022047828A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967341A (en) * 2021-02-23 2021-06-15 湖北枫丹白露智慧标识科技有限公司 Indoor visual positioning method, system, equipment and storage medium based on live-action image
CN115514885A (en) * 2022-08-26 2022-12-23 燕山大学 Monocular and binocular fusion-based remote augmented reality follow-up perception system and method
CN116372954A (en) * 2023-05-26 2023-07-04 苏州融萃特种机器人有限公司 AR immersed teleoperation explosive-handling robot system, control method and storage medium
TWI812369B (en) * 2021-07-28 2023-08-11 宏達國際電子股份有限公司 Control method, tracking system and non-transitory computer-readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529808B (en) * 2022-04-21 2022-07-19 南京北控工程检测咨询有限公司 Pipeline detection panoramic shooting processing system and method
CN116664681B (en) * 2023-07-26 2023-10-10 长春工程学院 Semantic perception-based intelligent collaborative augmented reality system and method for electric power operation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105739704A (en) * 2016-02-02 2016-07-06 上海尚镜信息科技有限公司 Remote guidance method and system based on augmented reality
CN108416846A (en) * 2018-03-16 2018-08-17 北京邮电大学 It is a kind of without the three-dimensional registration algorithm of mark
CN109032329B (en) * 2018-05-31 2021-06-29 中国人民解放军军事科学院国防科技创新研究院 Space consistency keeping method for multi-person augmented reality interaction
US10551623B1 (en) * 2018-07-20 2020-02-04 Facense Ltd. Safe head-mounted display for vehicles
CN110858414A (en) * 2018-08-13 2020-03-03 北京嘀嘀无限科技发展有限公司 Image processing method and device, readable storage medium and augmented reality system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967341A (en) * 2021-02-23 2021-06-15 湖北枫丹白露智慧标识科技有限公司 Indoor visual positioning method, system, equipment and storage medium based on live-action image
TWI812369B (en) * 2021-07-28 2023-08-11 宏達國際電子股份有限公司 Control method, tracking system and non-transitory computer-readable storage medium
CN115514885A (en) * 2022-08-26 2022-12-23 燕山大学 Monocular and binocular fusion-based remote augmented reality follow-up perception system and method
CN115514885B (en) * 2022-08-26 2024-03-01 燕山大学 Remote augmented reality follow-up sensing system and method based on monocular and binocular fusion
CN116372954A (en) * 2023-05-26 2023-07-04 苏州融萃特种机器人有限公司 AR immersed teleoperation explosive-handling robot system, control method and storage medium

Also Published As

Publication number Publication date
WO2022047828A1 (en) 2022-03-10

Similar Documents

Publication Publication Date Title
CN112116631A (en) Industrial augmented reality combined positioning system
US10757373B2 (en) Method and system for providing at least one image captured by a scene camera of a vehicle
CN108307675B (en) Multi-baseline camera array system architecture for depth enhancement in VR/AR applications
TWI574223B (en) Navigation system using augmented reality technology
US9129435B2 (en) Method for creating 3-D models by stitching multiple partial 3-D models
US8928736B2 (en) Three-dimensional modeling apparatus, three-dimensional modeling method and computer-readable recording medium storing three-dimensional modeling program
KR20180101496A (en) Head-mounted display for virtual and mixed reality with inside-out location, user body and environment tracking
TWI496108B (en) AR image processing apparatus and method
JP4284664B2 (en) Three-dimensional shape estimation system and image generation system
KR101506610B1 (en) Apparatus for providing augmented reality and method thereof
JP6491517B2 (en) Image recognition AR device, posture estimation device, and posture tracking device
US10706584B1 (en) Hand tracking using a passive camera system
JPWO2019035155A1 (en) Image processing system, image processing method, and program
CN112102389A (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
WO2018075053A1 (en) Object pose based on matching 2.5d depth information to 3d information
JP7379065B2 (en) Information processing device, information processing method, and program
JP6762913B2 (en) Information processing device, information processing method
CN112184793B (en) Depth data processing method and device and readable storage medium
GB2588441A (en) Method and system for estimating the geometry of a scene
KR20050061115A (en) Apparatus and method for separating object motion from camera motion
CN111354088B (en) Environment map building method and system
CN112150609A (en) VR system based on indoor real-time dense three-dimensional reconstruction technology
Swadzba et al. Tracking objects in 6D for reconstructing static scenes
JP2005141655A (en) Three-dimensional modeling apparatus and three-dimensional modeling method
US11847784B2 (en) Image processing apparatus, head-mounted display, and method for acquiring space information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201222