CN113673283A - Smooth tracking method based on augmented reality - Google Patents

Smooth tracking method based on augmented reality

Info

Publication number
CN113673283A
CN113673283A (application CN202010408715.6A)
Authority
CN
China
Prior art keywords
augmented reality
calculating
tracking method
frame
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010408715.6A
Other languages
Chinese (zh)
Inventor
陈广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weiya Shanghai Digital Technology Co ltd
Original Assignee
Weiya Shanghai Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weiya Shanghai Digital Technology Co ltd filed Critical Weiya Shanghai Digital Technology Co ltd
Priority to CN202010408715.6A
Publication of CN113673283A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Abstract

The invention discloses a smooth tracking method based on augmented reality, which comprises the following steps: S01: extracting feature descriptors of the picture to be identified; S02: processing the captured video frame image to find the recognition picture; S03: calculating the transformation matrix from the marker coordinate system to the camera coordinate system; S04: post-processing the matrix with the smooth tracking method; S05: calculating the coordinates of the virtual object in the camera coordinate system and drawing the three-dimensional graphics to generate a virtual graphics frame; S06: obtaining a composite video frame of the augmented reality environment and outputting it to a display screen. The method derives the corresponding smoothed transformation matrix from the descriptors of the feature points on the picture and tracks the picture's features with a smooth tracking technique, so that even weak texture features on the picture can be recognized; jitter and instability in virtual-object tracking are thereby markedly reduced, improving the augmented reality experience.

Description

Smooth tracking method based on augmented reality
Technical Field
The invention relates to the technical field of augmented reality, in particular to a smooth tracking method based on augmented reality.
Background
Augmented reality is a technology that computes the position and orientation of the camera image in real time and overlays corresponding virtual imagery. As a new technology that seamlessly integrates real-world and virtual-world information, it aims to overlay the virtual world onto the real world on a screen so that the two can interact, and it plays an important role in picture and video processing.
At present, marker-less augmented reality methods based on image recognition and tracking usually extract the feature points of the recognition image as the reference for virtual-object registration. The recognition image therefore needs strong texture characteristics; when its texture is weak, the virtual object exhibits unstable tracking and jitter, and the user cannot obtain a good augmented reality experience.
Disclosure of Invention
The invention aims to overcome the above-mentioned defects in the prior art by providing a smooth tracking method based on augmented reality.
In order to achieve the purpose, the invention adopts the following technical scheme: a smooth tracking method based on augmented reality comprises the following steps:
s01: extracting a feature descriptor of a picture to be identified;
s02: processing the captured video frame image to find an identification picture;
s03: calculating a transformation matrix from the marker coordinate system to the camera coordinate system;
s04: post-processing the matrix by a smooth tracking method;
s05: calculating the coordinates of the virtual object under a camera coordinate system, and drawing a three-dimensional graph to generate a virtual graph frame;
s06: a composite video frame of the augmented reality environment is obtained and output to a display screen.
As a further description of the above technical solution:
in step S01, feature point detection is performed on each image captured by the camera using an ORB feature point detector, and each feature point is described with an invariant ORB descriptor.
As a further description of the above technical solution:
in step S02, the processing of the captured video frame image includes the following steps:
s02.1: carrying out image gray level processing and binarization processing on the video frame image;
s02.2: carrying out image marking on the video frame image;
s02.3: and carrying out contour extraction on the video frame image to obtain an identification picture.
As a further description of the above technical solution:
in step S03, the transformation matrix M, comprising a rotation transformation R and a translation transformation T, is calculated for each frame of the video data stream.
As a further description of the above technical solution:
in step S04, the post-processing of the matrix by the smooth tracking method further includes the following steps:
s04.1: calculating the average value of n transformation matrices: the transformation matrices M obtained for successive frames are stored in an array holding the n most recent matrices, the most recent being M_t; the average M_ave, obtained by element-wise addition of the n stored matrices, is calculated by the formula:

$$M_{ave} = \frac{1}{n}\sum_{k=t-n+1}^{t} M_k$$

s04.2: taking the element-wise absolute difference between the average and the transformation matrix M_{t+1} obtained for the next frame. The variables i, j, delta_times, Δ and t are initialized, and every matrix element (i, j) is traversed:

if |M_ave[i][j] − M_{t+1}[i][j]| > Δ, then delta_times is incremented,

where Δ is the threshold and |M_ave[i][j] − M_{t+1}[i][j]| is the absolute difference of the matrix elements;

s04.3: counting the number of elements whose absolute difference exceeds the threshold Δ and recording the count as delta_times, with t denoting the set number of allowed exceedances;

s04.4: comparing delta_times with the set count t: when delta_times > t, the transformation matrix of the next frame is replaced by the calculated average, i.e. M_{t+1} = M_ave;

if delta_times is not greater than the set t, the change of the transformation matrix under this frame is smooth, and the transformation matrix of the next frame remains the original M_{t+1}.
As a further description of the above technical solution:
in step S05, drawing the three-dimensional graphics to generate the virtual graphics frame includes the following steps:
s05.1: acquiring the corresponding marker vertex coordinates in the marker coordinate system and the image coordinate system, and obtaining the coded value from the two-dimensional visual code;
s05.2: retrieving the three-dimensional model corresponding to the code to obtain the vertex array of the three-dimensional model;
s05.3: multiplying each vertex in the vertex array by the transformation matrix to obtain the coordinate array in the camera coordinate system;
s05.4: storing the three-dimensional image in a frame buffer to generate a virtual graphics frame.
As a further description of the above technical solution:
in step S06, the composite video frame is obtained by combining the generated virtual graphics frame with the captured video frame containing the two-dimensional visual code through a virtual-real synthesis module.
Advantageous effects
The invention provides a smooth tracking method based on augmented reality. The method has the following beneficial effects:
(1): The smooth tracking method derives the corresponding smoothed transformation matrix from the descriptors of the feature points on the picture and uses it as the mapping matrix of the virtual object in the next frame. By tracking the picture's features with the smooth tracking technique, even weak texture features on the picture can be recognized and the smooth tracking of the virtual object is clearly visible; jitter and instability in virtual-object tracking are significantly reduced, and the augmented reality experience is improved.
Drawings
FIG. 1 is a schematic flow chart of a smooth tracking method based on augmented reality according to the present invention;
FIG. 2 is a flow chart of the post-processing of the smooth tracking method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
As shown in fig. 1-2, a smooth tracking method based on augmented reality comprises the following steps:
s01: extracting a feature descriptor of a picture to be identified;
s02: processing the captured video frame image to find an identification picture;
s03: calculating a transformation matrix from the marker coordinate system to the camera coordinate system;
s04: post-processing the matrix by a smooth tracking method;
s05: calculating the coordinates of the virtual object under a camera coordinate system, and drawing a three-dimensional graph to generate a virtual graph frame;
s06: a composite video frame of the augmented reality environment is obtained and output to a display screen.
In step S01, feature point detection is performed on each image captured by the camera using an ORB feature point detector, and each feature point is described with an invariant ORB descriptor.
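By way of illustration only, the following minimal Python sketch shows how step S01 could be realized with OpenCV's ORB implementation; the file path, the nfeatures value and the helper-function name are assumptions, not part of the disclosure.

```python
# Illustrative sketch of step S01 (not from the original disclosure).
import cv2

orb = cv2.ORB_create(nfeatures=500)  # example parameter value

# Reference picture whose feature descriptors are extracted once, up front.
reference = cv2.imread("recognition_picture.png", cv2.IMREAD_GRAYSCALE)  # assumed path
ref_kp, ref_desc = orb.detectAndCompute(reference, None)

# The same detector/descriptor pair is applied to each camera image.
def describe(image_gray):
    return orb.detectAndCompute(image_gray, None)
```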
In step S02, the processing of the captured video frame image includes the following steps:
s02.1: carrying out image gray level processing and binarization processing on the video frame image;
s02.2: carrying out image marking on the video frame image;
s02.3: and carrying out contour extraction on the video frame image to obtain an identification picture.
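As an illustrative aside (not part of the original disclosure), sub-steps S02.1 to S02.3 above could be sketched with OpenCV as follows; Otsu thresholding for the binarization and external-contour retrieval are assumed details.

```python
# Illustrative OpenCV sketch of sub-steps S02.1-S02.3.
import cv2

def process_frame(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)               # S02.1: grayscale
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)   # S02.1: binarization
    num_labels, labels = cv2.connectedComponents(binary)             # S02.2: image labeling
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)          # S02.3: contour extraction
    return contours  # candidate regions in which the recognition picture is sought
```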
In step S03, the transformation matrix M, comprising a rotation transformation R and a translation transformation T, is calculated for each frame of the video data stream.
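Purely as an illustration, one common way to obtain such a matrix is a perspective-n-point solve; the sketch below assumes OpenCV's solvePnP and example argument names, and is not taken from the disclosure.

```python
# Illustrative sketch of step S03: recover R and T, then assemble M = [R | T].
import cv2
import numpy as np

def transform_matrix(marker_pts_3d, image_pts_2d, camera_matrix, dist_coeffs):
    # marker_pts_3d: Nx3 marker coordinates; image_pts_2d: Nx2 pixel coordinates (assumed inputs)
    ok, rvec, tvec = cv2.solvePnP(marker_pts_3d, image_pts_2d,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)              # rotation vector -> 3x3 rotation matrix R
    M = np.hstack([R, tvec.reshape(3, 1)])  # 3x4 transformation matrix M = [R | T]
    return M
```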
In step S04, the post-processing of the matrix by the smooth tracking method further includes the following steps:
s04.1: calculating the average value of n transformation matrices: the transformation matrices M obtained for successive frames are stored in an array holding the n most recent matrices, the most recent being M_t; the average M_ave, obtained by element-wise addition of the n stored matrices, is calculated by the formula:

$$M_{ave} = \frac{1}{n}\sum_{k=t-n+1}^{t} M_k$$

s04.2: taking the element-wise absolute difference between the average and the transformation matrix M_{t+1} obtained for the next frame. The variables i, j, delta_times, Δ and t are initialized, and every matrix element (i, j) is traversed:

if |M_ave[i][j] − M_{t+1}[i][j]| > Δ, then delta_times is incremented,

where Δ is the threshold and |M_ave[i][j] − M_{t+1}[i][j]| is the absolute difference of the matrix elements;

s04.3: counting the number of elements whose absolute difference exceeds the threshold Δ and recording the count as delta_times, with t denoting the set number of allowed exceedances;

s04.4: comparing delta_times with the set count t: when delta_times > t, the transformation matrix of the next frame is replaced by the calculated average, i.e. M_{t+1} = M_ave;

if delta_times is not greater than the set t, the change of the transformation matrix under this frame is smooth, and the transformation matrix of the next frame remains the original M_{t+1} (an illustrative code sketch of these sub-steps follows).
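A minimal NumPy sketch of sub-steps S04.1 to S04.4; the class name, the default values of n, Δ and t, and the warm-up behaviour before n matrices have been collected are assumptions (the defaults sit inside the ranges suggested in the worked example further below).

```python
# Illustrative sketch of the smooth tracking post-processing (S04.1-S04.4).
from collections import deque
import numpy as np

class SmoothTracker:
    def __init__(self, n=5, delta=0.005, t=4):
        self.history = deque(maxlen=n)  # the n most recent transformation matrices
        self.delta = delta              # element-wise threshold Δ
        self.t = t                      # set number of allowed exceedances

    def smooth(self, M_next):
        if len(self.history) < self.history.maxlen:
            self.history.append(M_next)
            return M_next                                   # not enough history yet (assumed warm-up)
        M_ave = np.mean(np.stack(self.history), axis=0)     # S04.1: average of n matrices
        exceed = np.abs(M_ave - M_next) > self.delta        # S04.2: |M_ave - M_{t+1}| vs Δ
        delta_times = int(exceed.sum())                     # S04.3: count exceedances
        result = M_ave if delta_times > self.t else M_next  # S04.4: choose matrix
        self.history.append(result)
        return result
```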
In step S05, drawing the three-dimensional graphics to generate the virtual graphics frame includes the following steps:
s05.1: acquiring the corresponding marker vertex coordinates in the marker coordinate system and the image coordinate system, and obtaining the coded value from the two-dimensional visual code;
s05.2: retrieving the three-dimensional model corresponding to the code to obtain the vertex array of the three-dimensional model;
s05.3: multiplying each vertex in the vertex array by the transformation matrix to obtain the coordinate array in the camera coordinate system;
s05.4: storing the three-dimensional image in a frame buffer to generate a virtual graphics frame.
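Illustratively, sub-step S05.3 amounts to a homogeneous-coordinate multiplication; the sketch below assumes the vertex array is an N×3 NumPy array and is not part of the disclosure.

```python
# Illustrative sketch of sub-step S05.3: model vertices -> camera coordinate system.
import numpy as np

def to_camera_coords(vertices, M):
    # vertices: Nx3 model-space vertex array (assumed layout); M: 3x4 matrix from S03/S04
    ones = np.ones((vertices.shape[0], 1))
    homogeneous = np.hstack([vertices, ones])  # Nx4 homogeneous model coordinates
    return homogeneous @ M.T                   # Nx3 coordinates in the camera system
```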
In step S06, the composite video frame is obtained by combining the generated virtual graphics frame with the captured video frame containing the two-dimensional visual code through a virtual-real synthesis module.
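As a sketch of the virtual-real synthesis (assuming, beyond what the disclosure states, that unrendered pixels of the virtual graphics frame are black and can therefore serve as a transparency mask):

```python
# Illustrative sketch of step S06: overlay the virtual graphics frame on the video frame.
import numpy as np

def compose(video_frame, virtual_frame):
    # Pixels where anything was drawn take the virtual frame; the rest keep the video.
    mask = np.any(virtual_frame > 0, axis=2, keepdims=True)
    return np.where(mask, virtual_frame, video_frame)
```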
As an example of the above embodiment:
assuming that the obtained transformation matrix M is a 3 × 4 matrix, it has the form

$$M = [\,R \mid T\,] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix},$$

so the information contained in the transformation matrix comprises the rotation transformation R and the translation transformation T.

By the formula

$$M_{ave} = \frac{1}{n}\sum_{k=t-n+1}^{t} M_k$$

the average of the n stored transformation matrices M is calculated, where n is the number of matrices to be stored: the larger n is, the more the average reflects the overall change, and n can generally be 3 to 7. After traversing the two loops over the matrix elements, the absolute difference |M_ave[i][j] − M_{t+1}[i][j]| between the average and the transformation matrix obtained for the next frame is compared with the threshold Δ, which can usually be 0.001 to 0.01. If the difference stays below the threshold, the change in the next frame's transformation matrix M_{t+1} is small; each time the absolute difference exceeds the threshold Δ, the counter delta_times is incremented, up to a maximum of 12 (the number of elements of a 3 × 4 matrix);

if (delta_times > t) M_{t+1} = M_ave; in this expression, t represents the set number of times the threshold Δ may be exceeded, and can generally be 2 to 12;

the whole statement means that when delta_times is greater than the set t, the transformation matrix of the next frame is replaced by the calculated average M_ave; if it is not greater than the set t, the change of the transformation matrix under this frame is smooth, and the transformation matrix of the next frame remains the original M_{t+1}. By using this smooth tracking method, the resulting smoothed transformation matrix can be used as the mapping matrix of the virtual object in the next frame; the smooth tracking effect on the virtual object is clearly visible, which improves the user experience.
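As a usage note for the SmoothTracker sketch given earlier, parameter values can be chosen from the ranges just stated:

```python
# Example values chosen from the ranges above: n = 5, Δ = 0.005, t = 4.
tracker = SmoothTracker(n=5, delta=0.005, t=4)

# M is the 3x4 matrix computed for the current frame (step S03);
# the returned matrix is used to map the virtual object in the next frame.
M_smoothed = tracker.smooth(M)
```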
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above description covers only preferred embodiments of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or modification of the technical solutions and inventive concept of the present invention that a person skilled in the art could readily conceive within the technical scope disclosed herein shall fall within the scope of the present invention.

Claims (7)

1. A smooth tracking method based on augmented reality is characterized in that: the method comprises the following steps:
s01: extracting a feature descriptor of a picture to be identified;
s02: processing the captured video frame image to find an identification picture;
s03: calculating a transformation matrix from the marker coordinate system to the camera coordinate system;
s04: post-processing the matrix by a smooth tracking method;
s05: calculating the coordinates of the virtual object under a camera coordinate system, and drawing a three-dimensional graph to generate a virtual graph frame;
s06: a composite video frame of the augmented reality environment is obtained and output to a display screen.
2. The augmented reality-based smooth tracking method according to claim 1, wherein in step S01, an ORB feature point detector is used to perform feature point detection on each image captured by the camera, and each feature point is described with an invariant ORB descriptor.
3. The augmented reality-based smooth tracking method according to claim 1, wherein the step S02 of processing the captured video frame image comprises the following steps:
s02.1: carrying out image gray level processing and binarization processing on the video frame image;
s02.2: carrying out image marking on the video frame image;
s02.3: and carrying out contour extraction on the video frame image to obtain an identification picture.
4. The augmented reality-based smooth tracking method according to claim 1, wherein in step S03, the transformation matrix M, comprising a rotation transformation R and a translation transformation T, is calculated for each frame of the video data stream.
5. The augmented reality-based smooth tracking method according to claim 1, wherein in step S04, post-processing the matrix by the smooth tracking method further comprises the following steps:
s04.1: calculating the average value of n transformation matrices: the transformation matrices M obtained for successive frames are stored in an array holding the n most recent matrices, the most recent being M_t; the average M_ave, obtained by element-wise addition of the n stored matrices, is calculated by the formula:

$$M_{ave} = \frac{1}{n}\sum_{k=t-n+1}^{t} M_k$$

s04.2: taking the element-wise absolute difference between the average and the transformation matrix M_{t+1} obtained for the next frame; the variables i, j, delta_times, Δ and t are initialized, and every matrix element (i, j) is traversed:

if |M_ave[i][j] − M_{t+1}[i][j]| > Δ, then delta_times is incremented,

where Δ is the threshold and |M_ave[i][j] − M_{t+1}[i][j]| is the absolute difference of the matrix elements;

s04.3: counting the number of elements whose absolute difference exceeds the threshold Δ and recording the count as delta_times, with t denoting the set number of allowed exceedances;

s04.4: comparing delta_times with the set count t: when delta_times > t, the transformation matrix of the next frame is replaced by the calculated average M_ave;

if delta_times is not greater than the set t, the change of the transformation matrix under this frame is smooth, and the transformation matrix of the next frame remains the original M_{t+1}.
6. The augmented reality-based smooth tracking method according to claim 1, wherein in step S05, drawing the three-dimensional graphics to generate the virtual graphics frame includes the following steps:
s05.1: acquiring the corresponding marker vertex coordinates in the marker coordinate system and the image coordinate system, and obtaining the coded value from the two-dimensional visual code;
s05.2: retrieving the three-dimensional model corresponding to the code to obtain the vertex array of the three-dimensional model;
s05.3: multiplying each vertex in the vertex array by the transformation matrix to obtain the coordinate array in the camera coordinate system;
s05.4: storing the three-dimensional image in a frame buffer to generate a virtual graphics frame.
7. The augmented reality-based smooth tracking method according to claim 1, wherein in step S06, the composite video frame is obtained by combining the generated virtual graphics frame with the captured video frame containing the two-dimensional visual code through a virtual-real synthesis module.
CN202010408715.6A 2020-05-14 2020-05-14 Smooth tracking method based on augmented reality Pending CN113673283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010408715.6A CN113673283A (en) 2020-05-14 2020-05-14 Smooth tracking method based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010408715.6A CN113673283A (en) 2020-05-14 2020-05-14 Smooth tracking method based on augmented reality

Publications (1)

Publication Number Publication Date
CN113673283A 2021-11-19

Family

ID=78537333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010408715.6A Pending CN113673283A (en) 2020-05-14 2020-05-14 Smooth tracking method based on augmented reality

Country Status (1)

Country Link
CN (1) CN113673283A (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101505A (en) * 2006-07-07 2008-01-09 华为技术有限公司 Method and system for implementing three-dimensional enhanced reality
CN101404086A (en) * 2008-04-30 2009-04-08 浙江大学 Target tracking method and device based on video
CN101520849A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification
CN102855649A (en) * 2012-08-23 2013-01-02 山东电力集团公司电力科学研究院 Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
CN103700069A (en) * 2013-12-11 2014-04-02 武汉工程大学 ORB (object request broker) operator-based reference-free video smoothness evaluation method
WO2014114118A1 (en) * 2013-01-28 2014-07-31 Tencent Technology (Shenzhen) Company Limited Realization method and device for two-dimensional code augmented reality
CN105898594A (en) * 2016-04-22 2016-08-24 北京奇艺世纪科技有限公司 Virtual reality video playing control method and apparatus
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN106339087A (en) * 2016-08-29 2017-01-18 上海青研科技有限公司 Eyeball tracking method based on multidimensional coordinate and device thereof
CN106952312A (en) * 2017-03-10 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 A marker-less augmented reality registration method based on line feature description
CN107016704A (en) * 2017-03-09 2017-08-04 杭州电子科技大学 A kind of virtual reality implementation method based on augmented reality
CN109685913A (en) * 2018-12-21 2019-04-26 西安电子科技大学 Augmented reality implementation method based on computer vision positioning
CN109859309A (en) * 2019-01-14 2019-06-07 仲恺农业工程学院 An Internet-of-Things teaching information processing system and method for realizing simulated teaching
CN109919971A (en) * 2017-12-13 2019-06-21 北京金山云网络技术有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN110120101A (en) * 2019-04-30 2019-08-13 中国科学院自动化研究所 Cylindrical body augmented reality method, system, device based on 3D vision
CN110753181A (en) * 2019-09-29 2020-02-04 湖北工业大学 Video image stabilization method based on feature tracking and grid path motion
CN111062966A (en) * 2019-11-05 2020-04-24 东北大学 Method for optimizing camera tracking based on L-M algorithm and polynomial interpolation


Similar Documents

Publication Title
KR102431117B1 (en) point cloud mapping
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN110799991B (en) Method and system for performing simultaneous localization and mapping using convolution image transformations
CN107330439B (en) Method for determining posture of object in image, client and server
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
CN101520904B (en) Reality augmenting method with real environment estimation and reality augmenting system
CN108629843B (en) Method and equipment for realizing augmented reality
CN101520849B (en) Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification
CN106875437B (en) RGBD three-dimensional reconstruction-oriented key frame extraction method
GB2520338A (en) Automatic scene parsing
CN107689050B (en) Depth image up-sampling method based on color image edge guide
CN101551732A (en) Method for strengthening reality having interactive function and a system thereof
CN109711246B (en) Dynamic object recognition method, computer device and readable storage medium
CN109325444B (en) Monocular texture-free three-dimensional object posture tracking method based on three-dimensional geometric model
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
CN109829925B (en) Method for extracting clean foreground in matting task and model training method
CN114387346A (en) Image recognition and prediction model processing method, three-dimensional modeling method and device
CN112365516B (en) Virtual and real occlusion processing method in augmented reality
CN116721419A (en) Auxiliary labeling method combined with SAM (self-contained imaging) of visual large model
CN113673283A (en) Smooth tracking method based on augmented reality
CN201374082Y (en) Augmented reality system based on image unique point extraction and random tree classification
Shibata et al. Unified image fusion framework with learning-based application-adaptive importance measure
CN115273080A (en) Lightweight visual semantic odometer method for dynamic scene
Zhang et al. A multiple camera system with real-time volume reconstruction for articulated skeleton pose tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination