WO2022014805A1 - Method for constructing a virtual space movement platform through cross-covariance 3D coordinate estimation (교차 공분산 3D 좌표 추정을 통한 가상공간 이동플랫폼 구축 방법) - Google Patents
- Publication number
- WO2022014805A1 (PCT/KR2020/019158)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual space
- marker
- covariance
- dimensional
- rigid body
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Definitions
- The present invention relates to a method for constructing a movement platform in a virtual space. More particularly, it relates to a method of constructing a virtual space movement platform through cross-covariance 3D coordinate estimation, in which the coordinates of a marker projected on two-dimensional image planes photographed through multiple cameras are estimated as three-dimensional coordinates using cross covariance, and two or more sets of three-dimensional coordinates estimated in different spaces are integrated into one virtual space.
- Virtual space movement platforms are widely used in fields such as simulation-based training in virtual reality for situations that are difficult to experience in practice, experiential education unrestricted by location, and game content, and they provide interactive feedback to users.
- In such platforms, equipment such as a head-mounted display (HMD) is used to give the user a realistic sense of immersion.
- In general, motion capture is performed through an optical position tracking method using multiple cameras: a multi-camera position tracking system estimates the 3D coordinates of active-tracker markers, and such an optical position tracking system requires a calibration process.
- As a camera calibration method for a single camera, there is a method using a chessboard with a two-dimensional grid pattern; for multiple cameras, there are methods using a chessboard, a three-axis calibration frame, a calibration bar, and the like.
- In such a system, the position of an active-tracker marker in 3D space viewed from multiple cameras is estimated back into 3D coordinates from the coordinates of the marker projected on each 2D image, and through this the absolute position of the user in the virtual space is tracked.
- Since the markers projected onto the multiple cameras in this process appear as unordered coordinate data, the marker coordinates must be matched between cameras.
- This matching process requires complex computation, and with low-resolution cameras, errors occur due to problems such as marker overlap.
- In addition, since a conventional optical positioning system builds one virtual space for one real space, systems installed in separate spaces each create their own virtual space. Accordingly, when a user equipped with an active-tracker marker moves from space 1 to space 2, the user is tracked in independent virtual spaces rather than moving within a single virtual space, which reduces the sense of realism.
- The present invention has been proposed to solve the problems that occur in estimating the three-dimensional coordinates of two-dimensional markers in conventional optical position tracking systems. An object of the present invention is to provide a method of constructing a virtual space movement platform that tracks a user's location in a virtual space by estimating three-dimensional coordinates through cross covariance calculated from the coordinates of markers projected on two-dimensional images, and that integrates independent virtual spaces into one virtual space.
- To achieve the above object, the virtual space movement platform construction method through cross-covariance 3D coordinate estimation according to the present invention estimates the three-dimensional coordinates of a marker by analyzing images captured through multiple cameras and places them in a virtual space, and comprises: (a) identifying, in an optical position tracking unit, a marker attached to an active tracker photographed through two or more cameras, and obtaining a two-dimensional covariance for the marker; (b) obtaining the cross covariance in which the two-dimensional covariances obtained for one marker overlap, and estimating the three-dimensional coordinates of the marker through the obtained cross covariance to track the position of a rigid body; and (c) receiving, in integrated control middleware, the three-dimensional rigid-body coordinates from each of a plurality of optical position tracking units arranged in separate spaces, and integrating the received three-dimensional coordinate information into one virtual space to construct the virtual space movement platform.
- Here, the integrated control middleware corrects the three-dimensional coordinates of each rigid body received from the plurality of optical position tracking units by translating and rotating the origin of the rigid-body coordinates for each individual space, so that the coordinates do not overlap in the one virtual space.
- The three-dimensional coordinates of a rigid body transmitted from an optical position tracking unit are rotated by θ about the yaw axis and then translated by x_offset, y_offset, and z_offset through the following equation, to be placed in the virtual space.
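- The equation itself appears only as an image in the original publication; a plausible reconstruction from the surrounding description (a yaw rotation by θ followed by a translation, shown here with y as the vertical axis, a convention the original does not specify) is:

```latex
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}
=
\begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
+
\begin{bmatrix} x_{\mathrm{offset}} \\ y_{\mathrm{offset}} \\ z_{\mathrm{offset}} \end{bmatrix}
```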
- Here, the covariance matrix of the marker for one camera is obtained through a covariance of the form given later in Equation 1.
- The cross covariance P_CI for finding the center point of one marker viewed from two cameras (C_1, C_2) is calculated as a convex combination of the individual camera covariances using a weight ω, as in the following equation.
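- The equation appears as an image in the original publication; the standard covariance-intersection form matching this description would be:

```latex
P_{CI} = \left( \omega P_1^{-1} + (1-\omega)\, P_2^{-1} \right)^{-1}
```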
- In addition, the position of the marker in three-dimensional space is estimated by calculating the cross covariance for one marker through three or more cameras.
- According to the present invention, the marker of the active tracker is estimated as three-dimensional coordinates using cross covariance from the coordinates of the marker projected on two-dimensional images, and independent virtual spaces are then integrated into one virtual space. This improves computation speed, since the conventional complicated matching process is unnecessary, and enables high-precision three-dimensional position calculation even at low camera resolutions.
- FIG. 1 is an overall conceptual diagram of a virtual space movement platform construction system according to the present invention.
- FIG. 2 is a conceptual diagram of the installation of an optical position tracking unit in one separate space according to the present invention.
- FIG. 3 is a block diagram of an optical position tracking unit installed in a separate space according to the present invention.
- FIG. 4 is a conceptual diagram illustrating a three-dimensional coordinate estimation method using cross covariance according to the present invention.
- FIG. 5 is a conceptual diagram showing the covariance for a point viewed from a camera according to the present invention.
- FIG. 6 is a conceptual diagram showing a method for finding the cross-covariance point of two cameras according to the present invention.
- FIG. 10 is an example of a multi-camera arrangement for testing cross covariance in three-dimensional space according to the present invention.
- FIG. 11 is an example of a screen in which the position of a marker in three-dimensional space is estimated by calculating the cross covariance according to the present invention.
- FIG. 13 shows the position of a marker estimated in three-dimensional space according to the present invention.
- FIG. 15 is an example in which the two centers of data sets A and B according to the present invention are moved to the origin.
- FIG. 16 is an example of a screen in which the rigid body transformation is implemented in a monitoring program according to the present invention.
- FIG. 18 is a monitoring screen tracking one pass of movement by a tester wearing a hard hat according to the present invention.
- FIG. 19 is a monitoring screen tracking a tester wearing a hard hat walking, sitting down, getting up, and walking again according to the present invention.
- FIG. 20 is a monitoring screen tracking a tester wearing a hard hat walking, running in place, and then walking again according to the present invention.
- FIG. 21 is an example of data transmitted from one optical position tracking unit to the integrated control middleware according to the present invention.
- FIG. 22 is an example of a situation in which rigid bodies located in separate spaces according to the present invention overlap each other when integrated into one virtual space by the integrated control middleware.
- FIG. 1 shows an overall conceptual diagram of a virtual space movement platform construction system according to an embodiment of the present invention.
- As shown, the virtual space movement platform construction system includes a plurality of optical position tracking units 100, each estimating the three-dimensional coordinates of markers in its own separate space, and an integrated control middleware 200 that receives the three-dimensional coordinate information of the markers from the plurality of optical position tracking units 100, converts it into coordinates in a single virtual space, and arranges it there.
- The location information of each rigid body arranged in one virtual space by the integrated control middleware 200 can be simulated as a single virtual space through a control unit and can be used for content production in environments such as virtual/augmented/mixed reality.
- FIG. 2 is a conceptual diagram illustrating the installation of an optical position tracking unit in a separate space according to an embodiment of the present invention, and FIG. 3 is a block diagram of the optical position tracking unit.
- The optical position tracking unit 100 captures, with a plurality of cameras, a marker attached to an active tracker located in one separate space, analyzes the captured camera images, and estimates the marker's three-dimensional coordinates by identifying its position and movement.
- An active tracker is a device worn by a moving person or object (hereinafter collectively referred to as the "user") and may take the form of a hard hat, gloves, a vest, or shoes, to which markers are attached.
- The marker images taken through the plurality of cameras are analyzed by the optical position tracking unit 100, which is a computing device equipped with a computation function.
- To this end, the optical position tracking unit 100 includes a camera calibration module 110 for calibrating each camera, and a three-dimensional marker coordinate estimation module 120 for estimating three-dimensional marker coordinates by analyzing the images of markers photographed through the plurality of calibrated cameras.
- The camera calibration module 110 estimates the intrinsic and extrinsic parameters of each camera using various methods, such as a chessboard with a two-dimensional grid pattern, a three-axis calibration frame, or a calibration rod, and calibrates the cameras accordingly.
- In the embodiment of the present invention, the camera calibration module 110 performs multi-camera calibration by estimating the intrinsic and extrinsic parameters of each camera using a chessboard and a calibration rod; this calibration method is presented in the applicant's registered Patent No. 10-2188480.
- The three-dimensional marker coordinate estimation module 120 identifies the marker in the images photographed through the plurality of calibrated cameras and estimates its three-dimensional coordinates using cross covariance from the two-dimensional marker coordinates captured by the multiple cameras.
- The 3D coordinate information of the marker estimated by the 3D marker coordinate estimation module 120 represents 3D coordinates within the individual space to which the marker belongs.
- The 3D coordinate information estimated in this way is transmitted to the integrated control middleware 200, which integrates the 3D coordinate information of the markers estimated in each individual space and arranges it in one virtual space.
- FIG. 4 is a conceptual diagram illustrating a three-dimensional coordinate estimation method using cross covariance according to an embodiment of the present invention.
- In the present invention, three-dimensional coordinates are estimated from the coordinates of the two-dimensional markers in the images viewed from multiple cameras.
- Here, the coordinates of the marker viewed from each individual camera are expressed probabilistically in the form of a covariance, and the point where the covariances of the marker viewed from all cameras overlap is estimated as the probabilistic position of the 3D marker.
- FIG. 5 is a conceptual diagram illustrating covariance of a point viewed from a camera according to an embodiment of the present invention.
- The angle θ_m refers to the angle subtended from the camera center to a point.
- The two-dimensional covariance matrix C_{x,y} can be expressed as in Equation 1 below.
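- Equation 1 appears as an image in the original publication; a standard form consistent with the description of σ below is:

```latex
C_{x,y} = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{yx} & \sigma_y^2 \end{bmatrix} \qquad \text{(Equation 1)}
```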
- Here, σ denotes covariance. The covariance of two different random variables expresses their correlation and is the expected value of the product of the deviations of the two random variables.
- The covariance matrix rotated by the angle θ subtended from the camera center is denoted P, and P and the covariance center can be expressed as in Equations 2 and 3 below.
- The lengths λ_1 and λ_2 of the two axes are given by Equations 4 and 5 below, where the λ_1 coefficient uses the maximum distance D at which the camera can detect the marker, and λ_2 is computed from the angle between pixels based on the camera coordinate system.
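- Equations 2 through 5 also appear as images in the original. The following is a minimal sketch of the construction they describe, assuming the rotated covariance takes the standard form P = R(θ) diag(λ_1², λ_2²) R(θ)ᵀ; the exact coefficient formulas for λ_1 and λ_2 are not reproduced in the text, so they are approximated here:

```python
import numpy as np

def marker_covariance_2d(theta, D, pixel_angle):
    """Covariance ellipse for a marker ray seen from one camera.

    theta       -- angle from the camera center to the marker (radians)
    D           -- maximum distance at which the marker is detectable
    pixel_angle -- assumed angular width of one pixel (radians)
    """
    lam1 = D                         # major axis, along the viewing ray
    lam2 = D * np.tan(pixel_angle)   # minor axis, lateral pixel uncertainty (assumed)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Rotated-ellipse covariance: P = R diag(lam1^2, lam2^2) R^T
    return R @ np.diag([lam1**2, lam2**2]) @ R.T
```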
- FIG. 6 is a conceptual diagram showing how the intersection point of two camera covariances is found; it shows the shape of the cross covariance P_CI used to find the center point of a marker viewed from two cameras C_0 and C_1.
- Equation 6 calculates the cross covariance P_CI from the covariances P_1 and P_2 of the individual cameras, and Equation 7 calculates the center position x_CI of the cross covariance from P_CI and the individual covariance centers of Equation 3.
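- Equations 6 and 7 appear as images in the original. A minimal sketch, assuming they take the standard covariance-intersection form indicated by the description (a convex combination of the inverse covariances with weight ω):

```python
import numpy as np

def covariance_intersection(P1, x1, P2, x2, w=0.5):
    """Fuse two camera covariances into a cross covariance and its center.

    P1, P2 -- covariance matrices from the two cameras
    x1, x2 -- covariance centers from the two cameras
    w      -- weight omega in [0, 1] of the convex combination
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    # Eq. 6 (assumed form): P_CI = (w P1^-1 + (1-w) P2^-1)^-1
    P_CI = np.linalg.inv(w * P1_inv + (1 - w) * P2_inv)
    # Eq. 7 (assumed form): x_CI = P_CI (w P1^-1 x1 + (1-w) P2^-1 x2)
    x_CI = P_CI @ (w * P1_inv @ x1 + (1 - w) * P2_inv @ x2)
    return P_CI, x_CI
```

With more than two cameras, the same convex combination extends over all camera covariances, which is how the marker position is estimated from three or more views.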
- FIG. 7 is an example of setting, through a CAD program, the camera positions and the angles θ from the camera centers (c1, c2) to the markers p1, p2, p3 projected on the cameras, in order to carry out a simulation experiment; FIG. 8 shows the result of implementing cross covariance in MATLAB with the simulation settings of FIG. 7.
- For extension to three dimensions, the covariance superposition calculation uses a 3×3 matrix, as shown in Equation 8 below, instead of the 2×2 matrix of Equation 1.
- Equation 9 shows an example of the form of a three-dimensional covariance matrix. Since the three-dimensional covariance describes an ellipsoid with three axes, unlike the two-dimensional case, the axis length λ_1 is obtained through Equation 4, while λ_2 and λ_3 are obtained through Equation 5.
- Here, λ_2 is calculated from the angle θ_l between pixels in the x-axis direction based on the camera coordinate system, and λ_3 is calculated likewise in the y-axis direction.
- In FIG. 10, assuming that a marker is at the center of each camera's view, the covariance passing through the marker from the camera origin is shown.
- FIG. 11 is an example of a screen in which the position of a marker is estimated in three-dimensional space by calculating the cross covariance; it can be seen that the center of the marker viewed from all cameras is estimated.
- FIG. 12 is an example of a hard hat with markers worn by the user, and FIG. 13 shows the position of the markers estimated in three-dimensional space through cross covariance.
- To track the user in the virtual space movement platform, markers are attached to the hard hat worn by the user. At least three markers are required to obtain the position information of the hard hat; the position information of the markers is combined to create a rigid body with position and orientation information.
- The collection of selected markers is set as a single rigid body, the corresponding markers among all the markers found in the 3D space are aligned with it, and the optimal rotation and translation are estimated by comparing the two sets.
- Equations 9, 10, and 11 below represent rigid body transformation equations.
- Here P denotes a point: P_i^A and P_i^B are the points of data sets A and B, respectively, and their center points are obtained for use in the next step.
- The two data sets are then translated so that their two centers come to the origin; FIG. 15 shows an example in which the centers of data sets A and B have been moved to the origin.
- Equation 12 creates a 3×3 matrix H from the 3×N matrix of data set A and the N×3 matrix of data set B, after the center position information has been removed from both data sets.
- Equation 13 obtains the rotation matrix R of Equation 14 through singular value decomposition (SVD) of the H matrix.
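- Equations 12 through 14 appear as images in the original. A minimal sketch of the alignment they describe, assuming the standard SVD-based (Kabsch) formulation, where A and B are 3×N arrays of corresponding marker points:

```python
import numpy as np

def rigid_transform(A, B):
    """Estimate rotation R and translation t such that R @ A + t ≈ B.

    A, B -- 3xN arrays of corresponding 3D marker points
    """
    cA = A.mean(axis=1, keepdims=True)   # center of data set A
    cB = B.mean(axis=1, keepdims=True)   # center of data set B
    # Move both sets so their centers come to the origin, then
    # build the 3x3 matrix H (Eq. 12, assumed form)
    H = (A - cA) @ (B - cB).T
    U, S, Vt = np.linalg.svd(H)          # SVD of H (Eq. 13)
    R = Vt.T @ U.T                       # rotation matrix (Eq. 14, assumed form)
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA                      # translation aligning the centers
    return R, t
```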
- FIG. 16 is an example of a screen in which the rigid body transformation is implemented in a monitoring program in an embodiment of the present invention; the position and orientation in three-dimensional space are expressed by tracking the rigid body of the hard hat.
- A position tracking experiment on a user wearing a hard hat was carried out in a virtual training scenario using the virtual space movement platform.
- For the experiment, a 4 m × 6 m structure was installed on the ceiling, eight cameras were oriented toward a point on the ground, the intrinsic and extrinsic parameters of each camera were estimated through multi-camera calibration, and an optical position tracking monitoring program was developed, displaying the result as in FIG. 17.
- The optical position tracking unit 100, which estimates the three-dimensional coordinates of the markers for an individual space, transmits the estimated three-dimensional coordinate information of the markers to the integrated control middleware 200.
- Since the space of each optical position tracking unit 100 has its own individual origin, the integrated control middleware 200 must correct the three-dimensional marker position information transmitted from the optical position tracking unit 100 in each separate space and relocate it within one virtual space.
- FIG. 21 shows an example of data transmitted from one optical position tracking unit to the integrated control middleware.
- The optical position tracking unit 100 transmits the 3D information of the marker to the integrated control middleware 200 as rigid body information.
- This rigid body information may be the 3D coordinate information of a single marker, or the integrated 3D coordinate information of multiple markers attached to the active tracker.
- Hereinafter, the three-dimensional marker information for each individual space transmitted from the optical position tracking unit 100 to the integrated control middleware 200 is referred to as rigid body information.
- If the rigid bodies of all optical position tracking units 100 are placed relative to the single origin of the virtual space, a situation may occur in which the rigid bodies overlap. That is, even though the rigid bodies tracked by each optical position tracking unit 100 are located in separate real spaces, simply integrating them into one virtual space may cause the spaces to overlap each other. FIG. 22 shows an example of such a situation, in which rigid bodies located in separate spaces overlap when integrated into one virtual space by the integrated control middleware.
- To prevent this, the integrated control middleware 200 needs to correct the rigid body coordinates through an origin translation and rotation for each individual space of the optical position tracking unit 100.
- Equation 16 shows how the rigid-body three-dimensional coordinates of an optical position tracking unit 100 are converted to the coordinates of the integrated control middleware 200 and integrated: the coordinate system x, y, z of each optical position tracking unit 100 is rotated by θ about the yaw axis and then translated by x_offset, y_offset, z_offset. The integrated control middleware 200 corrects the spatial information of the rigid body transmitted from each optical position tracking unit 100 through Equation 16.
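- A minimal sketch of this correction, assuming Equation 16 is the yaw-rotation-plus-translation form reconstructed earlier (with y taken as the vertical axis, a convention the original does not specify):

```python
import numpy as np

def to_global_space(p, theta, offset):
    """Place a rigid body coordinate from one tracking unit into the
    shared virtual space (Eq. 16, assumed form).

    p      -- (x, y, z) rigid body coordinate from a tracking unit
    theta  -- yaw angle (radians) assigned to that unit's space
    offset -- (x_offset, y_offset, z_offset) assigned to that unit's space
    """
    # Yaw rotation about the vertical (here: y) axis
    R_yaw = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
    return R_yaw @ np.asarray(p) + np.asarray(offset)

# e.g., place space 2 beside space 1, shifted 6 m along x and rotated 90 degrees:
# p_global = to_global_space((1.0, 1.7, 0.5), np.pi / 2, (6.0, 0.0, 0.0))
```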
- As described above, the position of the rigid body to which markers are attached can be determined by tracking the three-dimensional coordinates of the markers located in each individual space with the optical position tracking unit 100; by correcting, in the integrated control middleware 200, the origin of the rigid body position transmitted from each optical position tracking unit 100 and arranging it in one virtual space, the movement of the rigid body can be tracked within a single space.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Claims (7)
- A method of constructing a virtual space movement platform in which the three-dimensional coordinates of a marker are estimated by analyzing images captured through multiple cameras and placed in a virtual space, the method comprising: (a) identifying, in an optical position tracking unit (100), a marker attached to an active tracker photographed through two or more cameras, and obtaining a two-dimensional covariance for the marker; (b) obtaining a cross covariance in which the two-dimensional covariances obtained for one marker overlap, and estimating the three-dimensional coordinates of the marker through the obtained cross covariance to track the position of a rigid body; and (c) receiving, in an integrated control middleware (200), the three-dimensional coordinates of the rigid bodies from each of a plurality of optical position tracking units (100) arranged in separate spaces, and integrating the received three-dimensional coordinate information of the rigid bodies into one virtual space to construct the virtual space movement platform.
- The method of claim 1, wherein the integrated control middleware (200) corrects the three-dimensional coordinates of each rigid body received from the plurality of optical position tracking units (100) by translating and rotating the origin of the rigid body's three-dimensional coordinates for each individual space, so that the coordinates do not overlap in the one virtual space.
- The method of claim 1, wherein in step (b) of estimating the three-dimensional coordinates through the cross covariance, the position of the marker in three-dimensional space is estimated by calculating the cross covariance for one marker through three or more cameras.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0089030 | 2020-07-17 | ||
KR1020200089030A KR102478415B1 (ko) | 2020-07-17 | 2020-07-17 | Method for estimating three-dimensional coordinates using cross covariance (교차 공분산을 이용한 3차원 좌표 추정 방법)
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022014805A1 (ko) | 2022-01-20 |
Family
ID=79554357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/019158 WO2022014805A1 (ko) | 2020-07-17 | 2020-12-24 | 교차 공분산 3d 좌표 추정을 통한 가상공간 이동플랫폼 구축 방법 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102478415B1 (ko) |
WO (1) | WO2022014805A1 (ko) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102453561B1 (ko) * | 2022-07-08 | 2022-10-14 | 이엑스 주식회사 | Method of operating a complex-sensor-based multi-tracking camera system for a virtual studio |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101768958B1 (ko) | 2016-10-31 | 2017-08-17 | (주)코어센스 | Hybrid motion capture system for high-quality content production |
EP3711021A4 (en) * | 2017-11-13 | 2021-07-21 | Carmel-Haifa University Economic Corporation Ltd. | MOTION TRACKING WITH MULTIPLE 3D CAMERAS |
2020
- 2020-07-17 KR KR1020200089030A patent/KR102478415B1/ko active IP Right Grant
- 2020-12-24 WO PCT/KR2020/019158 patent/WO2022014805A1/ko active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100002803A (ko) * | 2008-06-30 | 2010-01-07 | 삼성전자주식회사 | Motion capture apparatus and motion capture method |
KR20180062137A (ko) * | 2016-11-30 | 2018-06-08 | (주)코어센스 | Position estimation method for a hybrid motion capture system |
KR20200064947A (ko) * | 2018-11-29 | 2020-06-08 | (주)코어센스 | Position tracking apparatus based on an optical position tracking system, and method thereof |
Non-Patent Citations (2)
Title |
---|
HA-HYUNG JEONG: "Cross-Covariance 3D Coordinate Estimation Method for Virtual Space Movement Platform", Journal of the Korea Industrial Information Systems Research, vol. 25, no. 5, 30 September 2020 (2020-09-30), Korea, pages 41-48, XP009533518, ISSN: 1229-3741, DOI: 10.9723/jksiis.2020.25.5.041 *
ZILI DENG; PENG ZHANG; WENJUAN QI; JINFANG LIU; YUAN GAO;: "Sequential covariance intersection fusion Kalman filter", INFORMATION SCIENCES, ELSEVIER, AMSTERDAM, NL, vol. 189, 27 November 2011 (2011-11-27), AMSTERDAM, NL, pages 293 - 309, XP028441109, ISSN: 0020-0255, DOI: 10.1016/j.ins.2011.11.038 * |
Also Published As
Publication number | Publication date |
---|---|
KR102478415B1 (ko) | 2022-12-19 |
KR20220010304A (ko) | 2022-01-25 |
Similar Documents
Publication | Title |
---|---|
- WO2018199701A1 (en) | Method for providing content and apparatus therefor |
- WO2019156518A1 (en) | Method for tracking hand pose and electronic device thereof |
- WO2016043560A1 (ko) | Optical tracking system and coordinate-system registration method for the optical tracking system |
- WO2022014805A1 (ko) | Method for constructing a virtual space movement platform through cross-covariance 3D coordinate estimation |
- WO2014104574A1 (ko) | Method for correcting absolute misalignment between a linear-array image sensor and an attitude control sensor |
- WO2015199502A1 (ko) | Apparatus and method for providing an augmented reality interaction service |
- RU2123718C1 (ru) | Method for inputting information into a computer |
- WO2022039404A1 (ko) | Wide-viewing-angle stereo camera apparatus and depth image processing method using the same |
- WO2015183049A1 (ko) | Optical tracking system and method for calculating the posture of a marker part of the optical tracking system |
- WO2014109546A1 (ko) | Sensing device and sensing method for a moving ball |
- WO2020054954A1 (en) | Method and system for providing real-time virtual feedback |
- WO2021034006A1 (en) | Method and apparatus for rigging 3D scanned human models |
- WO2022065763A1 (en) | Display apparatus and method for controlling thereof |
- WO2022075691A1 (ko) | Apparatus and method for sensing the motion of a sphere moving on a plane using a camera, and apparatus and method for sensing the motion of a golf ball moving on a putting mat |
- WO2022080549A1 (ko) | Motion tracking apparatus with a dual LiDAR sensor structure |
- JP2559939B2 (ja) | Three-dimensional information input device |
- WO2019135462A1 (ko) | Calibration method and apparatus between Leap Motion and an HMD using a bundle adjustment algorithm |
- WO2023239035A1 (ko) | Electronic device for acquiring image data on hand motion and operating method thereof |
- WO2013081322A1 (ko) | Method for estimating flight information of a spherical object using a circular marker |
- KR20220092053A (ko) | Method for constructing a virtual space movement platform through cross-covariance 3D coordinate estimation |
- WO2020116836A1 (ko) | Motion capture apparatus using movement of the human center of gravity, and method thereof |
- WO2021206209A1 (ko) | Markerless AR implementation method and system for building a smart factory |
- WO2021221333A1 (ko) | Method for predicting a robot's position in real time through map information and image matching, and robot |
- WO2013176525A1 (ko) | Camera registration method for augmented reality in a surgical navigation system |
- WO2020197109A1 (ko) | Dental image registration apparatus and method |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20945331; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20945331; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.07.2023) |