CN116664895B - Image and model matching method based on AR/AI/3DGIS technology


Info

Publication number: CN116664895B
Application number: CN202310936663.3A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: coordinates, 3DGIS, object model, perspective, top view
Other versions: CN116664895A (application publication)
Other languages: Chinese (zh)
Inventors: 张志伟 (Zhang Zhiwei), 王帅 (Wang Shuai), 刘永瑄 (Liu Yongxuan), 田立业 (Tian Liye), 李瑞先 (Li Ruixian)
Current and original assignee: Huaxia Tianxin IoT Technology Co., Ltd.
Priority and filing date: 2023-07-28


Classifications

    • G06V10/751: Image or video pattern matching; comparing pixel values or feature values having positional relevance, e.g. template matching
    • G06T19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06V20/40: Scenes; scene-specific elements in video content
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
    • Y02T10/40: Engine management systems (internal combustion engine based road transport)

Abstract

The invention discloses a method for matching images with models based on AR/AI/3DGIS technology. The method comprises: acquiring video image data and identifying an object model in the video image data; binding the identified object model with an object whose management information is known; converting the image coordinates of the object model into 3DGIS top-view coordinates; determining coordinate conversion parameters from the 3DGIS top-view coordinates of the object model and the position coordinates of the real object; converting the real object position coordinates into 3DGIS top-view coordinates according to the coordinate conversion parameters; converting the 3DGIS top-view coordinates of the real object position into perspective transformation coordinates by a perspective coordinate transformation method; and labeling attribute information of the real object in the video image according to the perspective transformation coordinates of the real object position. The invention associates and matches object model information with management information in the AR scene, enabling attribute labeling and detailed-information query for any object in the AR scene.

Description

Image and model matching method based on AR/AI/3DGIS technology
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method for matching images with models based on AR/AI/3DGIS technology.
Background
Augmented Reality (AR) technology merges virtual information with the real world and can present many kinds of information within a scene. However, technical problems remain: for example, the type of an object model can be identified, but the management information of the object cannot be obtained.
Disclosure of Invention
In view of the above deficiencies in the prior art, the invention provides a method for matching images with models based on AR/AI/3DGIS technology.
To achieve the object of the invention, the following technical scheme is adopted:
a method for matching images with models based on AR/AI/3DGIS technology, comprising the following steps:
s1, acquiring video image data, and identifying an object model in the video image data;
s2, binding the identified object model with an object with known management information;
s3, converting the image coordinates of the object model into 3DGIS overlook coordinates;
s4, determining coordinate conversion parameters according to the 3DGIS overlook coordinates of the object model and the real object position coordinates;
s5, converting the position coordinates of the real object into 3DGIS overlook coordinates according to the coordinate conversion parameters;
s6, converting the 3DGIS overlook coordinates of the real object position into perspective transformation coordinates by adopting a perspective coordinate transformation method;
s7, marking attribute information of the real object in the video image according to perspective transformation coordinates of the position of the real object.
Further, step S3 specifically comprises:
performing coordinate transformation relative to the camera that captures the video images, taking the camera position as the origin and orienting the camera toward the positive Z axis; projecting the vertex coordinates of the object model in the view onto the view plane through a perspective transformation matrix; and converting the resulting perspective coordinates into homogeneous coordinates to obtain the 3DGIS top-view coordinates of the object model vertices.
Further, the perspective transformation matrix is specifically:

$$T_{per}=\begin{bmatrix} d & 0 & 0 & 0 \\ 0 & d\cdot aspect\_ratio & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

where $T_{per}$ is the perspective transformation matrix, $d$ is the viewing distance, and $aspect\_ratio$ is the aspect ratio of the view plane.
Further, the perspective coordinates are specifically:

$$L=(x,\;y,\;z,\;1)\,T_{per}=(d\,x,\;\;d\cdot aspect\_ratio\cdot y,\;\;z,\;\;z)$$

where $L$ is the perspective coordinate of an object model vertex, $(x, y, z)$ are the vertex coordinates of the object model, $d$ is the viewing distance, and $aspect\_ratio$ is the aspect ratio of the view plane.
Further, the 3DGIS top-view coordinates of the object model vertices are specifically:

$$P=\left(\frac{d\,x}{z},\;\frac{d\cdot aspect\_ratio\cdot y}{z}\right)$$

where $P$ is the 3DGIS top-view coordinate of an object model vertex, $(x, y, z)$ are the vertex coordinates of the object model, $d$ is the viewing distance, and $aspect\_ratio$ is the aspect ratio of the view plane.
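The patent gives no reference implementation; the following minimal Python sketch, assuming the row-vector matrix form above (function names and test values are illustrative), shows the projection from a vertex to 3DGIS top-view coordinates:

    import numpy as np

    def perspective_matrix(d: float, aspect_ratio: float) -> np.ndarray:
        # Row-vector perspective matrix T_per for a camera at the origin
        # looking along +Z, with the view plane at z = d (as defined above).
        return np.array([[d,   0.0,              0.0, 0.0],
                         [0.0, d * aspect_ratio, 0.0, 0.0],
                         [0.0, 0.0,              1.0, 1.0],
                         [0.0, 0.0,              0.0, 0.0]])

    def topview_coords(vertex, d=1.0, aspect_ratio=1.0):
        # Project an object-model vertex (x, y, z): L = (x, y, z, 1) @ T_per,
        # then divide by the homogeneous component w = z to obtain P.
        x, y, z = vertex
        L = np.array([x, y, z, 1.0]) @ perspective_matrix(d, aspect_ratio)
        return L[:2] / L[3]

    print(topview_coords((2.0, 1.0, 4.0), aspect_ratio=1.5))  # [0.5   0.375]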
Further, step S4 specifically comprises:
acquiring multiple groups of 3DGIS top-view coordinates of object models and the corresponding real object position coordinates;
constructing a coordinate conversion equation;
solving the coordinate conversion equation using the multiple groups of 3DGIS top-view coordinates and real object position coordinates to obtain the coordinate conversion parameters.
Further, the coordinate conversion equation is specifically:

$$\begin{cases} x' = X + p\,(x\cos A - y\sin A) \\ y' = Y + p\,(x\sin A + y\cos A) \end{cases}$$

where $(x', y')$ are the 3DGIS top-view coordinates of the object model, $(x, y)$ are the real object position coordinates, $X$ and $Y$ are the offsets, $A$ is the rotation factor, and $p$ is the scale factor.
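Assuming the four-parameter similarity form above (offsets X and Y, rotation factor A, scale factor p), a least-squares sketch in Python for recovering the parameters from point pairs (all names illustrative) could be:

    import numpy as np

    def solve_conversion_params(topview_pts, real_pts):
        # Fit x' = X + a*x - b*y, y' = Y + b*x + a*y with a = p*cos(A), b = p*sin(A).
        # Each point pair contributes two linear equations, so two pairs suffice.
        M, v = [], []
        for (x, y), (xp, yp) in zip(real_pts, topview_pts):
            M.append([x, -y, 1.0, 0.0]); v.append(xp)
            M.append([y,  x, 0.0, 1.0]); v.append(yp)
        a, b, X, Y = np.linalg.lstsq(np.array(M), np.array(v), rcond=None)[0]
        p = float(np.hypot(a, b))    # scale factor
        A = float(np.arctan2(b, a))  # rotation factor, in radians
        return X, Y, A, p

With more than two point pairs the system is overdetermined and the least-squares fit averages out measurement noise.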
The invention has the following beneficial effects:
according to the invention, the object model in the AR video image is identified, and the object model and the real object coordinates in the 3DGIS scene are mutually converted, so that the attribute information of the real object can be marked on the AR video image, the association matching between the object model information in the AR scene and the management information is realized, and further the attribute marking and detailed information query of any object in the AR scene can be realized.
Drawings
FIG. 1 is a flow chart of a method for matching an image with a model based on the AR/AI/3DGIS technique;
FIG. 2 is a top view of the 3D system in the left-hand coordinate system of the present invention;
FIG. 3 is a side view of a 3D system of the present invention;
fig. 4 is a side view of a view plane of the present invention.
Detailed Description
The following description of embodiments of the present invention is provided to help those skilled in the art understand the invention. It should be understood, however, that the invention is not limited to the scope of these embodiments; for those skilled in the art, any invention that makes use of the inventive concept falls within the spirit and scope of the invention as defined in the appended claims.
As shown in fig. 1, an embodiment of the present invention provides a method for matching images with models based on AR/AI/3DGIS technology, comprising the following steps S1 to S7:
s1, acquiring video image data, and identifying an object model in the video image data;
in an alternative embodiment of the invention, the invention firstly obtains video image data through the camera, and accesses the video stream data of the camera into the server according to a video protocol.
And then labeling the data set through a labeling platform to obtain a labeled image data set. After the image labeling is completed, the data set is manufactured into a VOC data set format, and the distribution proportion of the training set and the verification set is divided into: 5:1. The VOC-formatted data set structure is as follows:
the VOC 2007-JPEGImages store the picture files to be labeled-the Annogens store the labeled label files-the predefined_classification.txt defines all the categories to be labeled by themselves.
Finally, a YOLOv5s target detection network is built to identify object models in the video image data. The YOLOv5s network is generally divided into three parts: a Backbone, a Neck and a Head. The Backbone, composed mainly of Focus, CBS, BottleneckCSP (C3) and SPP modules, extracts features; the Neck adopts the FPN+PAN design, which mixes and combines features and passes them to the prediction layer; the Head is the output layer, which produces the final predictions. The YOLOv5s network is built with the PyTorch deep learning framework and then trained. During training, parameters such as the learning rate, batch_size, mixup, random augmentation and mosaic need to be adjusted appropriately to the actual situation, and different training strategies are used to speed up the decline of the model loss and the rise of accuracy; common strategies include warmup, autoanchor, freeze training and multi-scale training.
The above training produces a model file that stores the features learned from the data set. Feature points of the object are extracted from the acquired image and matched against the trained features to obtain the object type.
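The patent does not give the training or inference commands; a minimal sketch using the public ultralytics/yolov5 interfaces (the stream URL, weight paths and hyperparameters are illustrative assumptions) might look like this:

    # Training (shell), after converting the VOC labels to YOLO format via a data.yaml:
    #   python train.py --img 640 --batch 16 --epochs 100 --data voc_data.yaml --weights yolov5s.pt

    import cv2
    import torch

    # Load the trained weights and detect object models in one video frame.
    model = torch.hub.load('ultralytics/yolov5', 'custom',
                           path='runs/train/exp/weights/best.pt')
    cap = cv2.VideoCapture('rtsp://camera-address/stream')  # illustrative stream URL
    ok, frame = cap.read()
    if ok:
        results = model(frame[..., ::-1])  # BGR -> RGB
        for *box, conf, cls in results.xyxy[0].tolist():
            print(model.names[int(cls)], round(conf, 2), box)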
S2, binding the identified object model with an object whose management information is known;
In an optional embodiment of the present invention, the object model identified in step S1 is bound to the object whose management information is known, establishing a one-to-one correspondence between the object model and that object; the object model information in the AR scene is associated with the management information by matching the AR image with the 3DGIS scene.
S3, converting the image coordinates of the object model into 3DGIS top-view coordinates;
in an alternative embodiment of the present invention, step S3 specifically includes:
and carrying out coordinate transformation on a camera for collecting video images, taking the position of the camera as an origin, directing the camera towards a positive Z axis, projecting the vertex coordinates of the object model in the view onto a view plane through a perspective transformation matrix, and converting the converted perspective coordinates into homogeneous coordinates to obtain the 3DGIS overlook coordinates of the vertex of the object model.
Specifically, this embodiment performs the coordinate transformation relative to the camera that captures the video image, with the camera position as the origin and the camera oriented toward the positive Z axis, as shown in fig. 2.
Fig. 2 is a top view of the 3D system in the left-handed coordinate system. The field of view of the camera is 90 degrees. To complete the perspective transformation, the vertices of the objects in the view must be projected onto the view plane; once the viewing distance d is fixed, the coordinates of a vertex's projection on the view plane can be calculated, as shown in fig. 3.
Fig. 3 is a side view of the 3D system. In the YOZ plane, by the similar-triangle theorem:

$$\frac{y'}{d}=\frac{y}{z}\quad\Rightarrow\quad y'=\frac{d\,y}{z}$$

and by the same reasoning in the XOZ plane:

$$x'=\frac{d\,x}{z}$$

In summary, with the viewpoint at (0, 0, 0) and the view plane at z = d, the projection of an object vertex (x, y, z) transforms to:

$$(x',\;y',\;z')=\left(\frac{d\,x}{z},\;\frac{d\,y}{z},\;d\right)$$

This transformation can be carried out by a matrix operation:

$$(x,\;y,\;z,\;1)\begin{bmatrix} d & 0 & 0 & 0 \\ 0 & d & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}=(d\,x,\;d\,y,\;z,\;z)$$

followed by dividing all components by z. Since only x and y are needed, the value of z is not considered, and the above matrix is taken as the perspective transformation matrix $T_{per}$:

$$T_{per}=\begin{bmatrix} d & 0 & 0 & 0 \\ 0 & d & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Since the width of the view plane is generally taken to be 2 (coordinate range (-1, 1)) and the camera field of view is 90 degrees, d = 1, and the transformation matrix becomes:

$$T_{per}=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
the x range of the perspective coordinate obtained after matrix operation is (-1, 1), and the y range is (-1, 1).
This covers the case where the camera field of view is 90 degrees and both the view plane and the screen/viewport are square. In the general case, when the screen/viewport is not square, an aspect ratio must be introduced: if the aspect ratio is not accounted for in the perspective transformation, it must be accounted for in the subsequent screen coordinate transformation; otherwise the resulting graphics will be distorted by scaling.
Taking a 600 x 400 screen/viewport as an example, the aspect ratio aspect_ratio is 3:2. With a camera field of view of fov, the view plane is taken as 2 x 2/aspect_ratio, which keeps the aspect ratio of the view plane consistent with that of the screen, as shown in fig. 4.
Since the view-plane width w = 2, the viewing distance is obtained as:

$$d=\frac{w/2}{\tan(fov/2)}=\frac{1}{\tan(fov/2)}$$
Perspective coordinate transformation can then be performed with the viewing distance d. The x coordinate range of a projected vertex is (-1, 1) and the y range is (-1/aspect_ratio, 1/aspect_ratio). Since the aspect ratio is not yet reflected in the perspective coordinate operation, the y component is multiplied by aspect_ratio, which normalizes the y range to (-1, 1) so that the aspect ratio need not be considered in the subsequent screen coordinate transformation. The transformation formula is thus:

$$x'=\frac{d\,x}{z},\qquad y'=\frac{d\cdot aspect\_ratio\cdot y}{z}$$

The perspective transformation matrix $T_{per}$ becomes:

$$T_{per}=\begin{bmatrix} d & 0 & 0 & 0 \\ 0 & d\cdot aspect\_ratio & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Applying the perspective matrix to a vertex (x, y, z):

$$L=(x,\;y,\;z,\;1)\,T_{per}=(d\,x,\;\;d\cdot aspect\_ratio\cdot y,\;\;z,\;\;z)$$

Finally, the result is converted into homogeneous coordinates by dividing all components by z, giving the 3DGIS top-view coordinates of the object model vertex:

$$P=\left(\frac{d\,x}{z},\;\frac{d\cdot aspect\_ratio\cdot y}{z}\right)$$
s4, determining coordinate conversion parameters according to the 3DGIS overlook coordinates of the object model and the real object position coordinates;
in an alternative embodiment of the present invention, step S4 specifically includes:
acquiring 3DGIS overlooking coordinates and real object position coordinates of a plurality of groups of object models;
constructing a coordinate conversion equation;
and solving a coordinate conversion equation by utilizing the 3DGIS overlook coordinates of the multiple groups of object models and the real object position coordinates to obtain coordinate conversion parameters.
Specifically, in this embodiment, the 3DGIS top-view coordinates of at least two groups of object models and the corresponding real object position coordinates are first obtained through step S3.
A coordinate conversion equation is then constructed as follows:

$$\begin{cases} x' = X + p\,(x\cos A - y\sin A) \\ y' = Y + p\,(x\sin A + y\cos A) \end{cases}$$

where $(x', y')$ are the 3DGIS top-view coordinates of the object model, $(x, y)$ are the real object position coordinates, $X$ and $Y$ are the offsets, $A$ is the rotation factor, and $p$ is the scale factor.
Finally, the constructed coordinate conversion equation is solved using the 3DGIS top-view coordinates and real object position coordinates of the at least two groups, yielding the coordinate conversion parameters, namely the offsets X and Y, the rotation factor A and the scale factor p; a least-squares solve such as the sketch given after the coordinate conversion equation above can be used.
S5, converting the real object position coordinates into 3DGIS top-view coordinates according to the coordinate conversion parameters;
In an alternative embodiment of the present invention, after the coordinate conversion parameters are obtained in step S4, the real object position coordinates are substituted into the coordinate conversion equation, converting them into the corresponding 3DGIS top-view coordinates.
S6, converting the 3DGIS top-view coordinates of the real object position into perspective transformation coordinates by a perspective coordinate transformation method;
In an alternative embodiment of the present invention, taking the 3DGIS top-view coordinates of the real object position obtained in step S5, this embodiment performs the inverse perspective coordinate transformation using the perspective transformation matrix to obtain the perspective transformation coordinates of the real object position, thereby realizing the mutual conversion between the object model coordinates and the real object coordinates in the 3DGIS scene.
S7, labeling attribute information of the real object in the video image according to the perspective transformation coordinates of the real object position.
In an optional embodiment of the present invention, using the perspective transformation coordinates of the real object position obtained in step S6, the attribute information of the real object can be drawn at those coordinates on the video image, realizing attribute labeling and detailed-information query for any object in the AR scene.
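Drawing the label itself can be done with standard OpenCV calls; a minimal sketch (marker style and placement are illustrative):

    import cv2

    def annotate(frame, screen_xy, text):
        # Draw the real object's attribute text at its perspective-transformed
        # screen position (S7).
        u, v = int(screen_xy[0]), int(screen_xy[1])
        cv2.circle(frame, (u, v), 4, (0, 255, 0), -1)
        cv2.putText(frame, text, (u + 6, v - 6), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 255, 0), 1, cv2.LINE_AA)
        return frame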
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present invention have been described in detail with reference to specific examples, which are provided only to help in understanding the method and its core ideas. Those skilled in the art may vary the specific embodiments and the scope of application in accordance with the ideas of the invention, and this description should therefore not be construed as limiting the invention.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art may make various other specific modifications and combinations based on the teachings of this disclosure without departing from its spirit; such modifications and combinations remain within the scope of the disclosure.

Claims (1)

1. An image and model matching method based on AR/AI/3DGIS technology, characterized by comprising the following steps:
S1, acquiring video image data, and identifying an object model in the video image data;
S2, binding the identified object model with an object whose management information is known;
S3, converting the image coordinates of the object model into 3DGIS top-view coordinates, specifically comprising:
performing coordinate transformation relative to the camera that captures the video images, taking the camera position as the origin and orienting the camera toward the positive Z axis; projecting the vertex coordinates of the object model in the view onto the view plane through a perspective transformation matrix; and converting the resulting perspective coordinates into homogeneous coordinates to obtain the 3DGIS top-view coordinates of the object model vertices;
the perspective transformation matrix is specifically:

$$T_{per}=\begin{bmatrix} d & 0 & 0 & 0 \\ 0 & d\cdot aspect\_ratio & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

wherein $T_{per}$ is the perspective transformation matrix, $d$ is the viewing distance, and $aspect\_ratio$ is the aspect ratio of the view plane;

the perspective coordinates are specifically:

$$L=(x,\;y,\;z,\;1)\,T_{per}=(d\,x,\;\;d\cdot aspect\_ratio\cdot y,\;\;z,\;\;z)$$

wherein $L$ is the perspective coordinate of an object model vertex and $(x, y, z)$ are the vertex coordinates of the object model;

the 3DGIS top-view coordinates of the object model vertices are specifically:

$$P=\left(\frac{d\,x}{z},\;\frac{d\cdot aspect\_ratio\cdot y}{z}\right)$$

wherein $P$ is the 3DGIS top-view coordinate of an object model vertex;
s4, determining coordinate conversion parameters according to the 3DGIS overlook coordinates of the object model and the real object position coordinates; the method specifically comprises the following steps:
acquiring 3DGIS overlooking coordinates and real object position coordinates of a plurality of groups of object models;
constructing a coordinate conversion equation;
solving a coordinate conversion equation by utilizing the 3DGIS overlook coordinates of the multiple groups of object models and the real object position coordinates to obtain coordinate conversion parameters;
the coordinate conversion equation is specifically:

$$\begin{cases} x' = X + p\,(x\cos A - y\sin A) \\ y' = Y + p\,(x\sin A + y\cos A) \end{cases}$$

wherein $(x', y')$ are the 3DGIS top-view coordinates of the object model, $(x, y)$ are the real object position coordinates, $X$ and $Y$ are the offsets, $A$ is the rotation factor, and $p$ is the scale factor;
s5, converting the position coordinates of the real object into 3DGIS overlook coordinates according to the coordinate conversion parameters;
s6, converting the 3DGIS overlook coordinates of the real object position into perspective transformation coordinates by adopting a perspective coordinate transformation method;
s7, marking attribute information of the real object in the video image according to perspective transformation coordinates of the position of the real object.
Priority Applications (1)

Application number CN202310936663.3A; priority date 2023-07-28; filing date 2023-07-28; title: Image and model matching method based on AR/AI/3DGIS technology (Active)

Publications (2)

Publication number    Publication date
CN116664895A          2023-08-29
CN116664895B          2023-10-03 (grant)

Family ID: 87724578 (one family application: CN202310936663.3A, Active)

Country Status (1)

CN: CN116664895B

Citations (4) (* cited by examiner, † cited by third party)

Publication number    Priority date    Publication date    Assignee    Title
CN103795976A *        2013-12-30       2014-05-14          北京正安融翰技术有限公司    Full space-time three-dimensional visualization method
CN106791784A *        2016-12-26       2017-05-31          深圳增强现实技术有限公司    Augmented reality display method and device with virtual-real overlay
CN112085804A *        2020-08-21       2020-12-15          东南大学 (Southeast University)    Object pose recognition method based on a neural network
JP2022055100A         2020-09-28       2022-04-07          セイコーエプソン株式会社 (Seiko Epson Corporation)    Control method, control device and robot system

Non-Patent Citations (1) (* cited by examiner, † cited by third party)

Title
张志伟 (Zhang Zhiwei). 三维场景点云理解与重建技术 (Understanding and reconstruction of 3D scene point clouds). 中国图像图形学报 (Journal of Image and Graphics), Vol. 28, No. 6, pp. 1742-1764. *

Also Published As

Publication number Publication date
CN116664895A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
JP6789402B2 Method, apparatus, device and storage medium for determining the appearance of an object in an image
CN108509848B Real-time detection method and system for three-dimensional objects
WO2018119889A1 (en) Three-dimensional scene positioning method and device
JP2019057248A (en) Image processing system, image processing device, image processing method and program
CN104715479A (en) Scene reproduction detection method based on augmented virtuality
CN109711472B (en) Training data generation method and device
CN109887030A (en) Texture-free metal parts image position and posture detection method based on the sparse template of CAD
US20170301110A1 (en) Producing three-dimensional representation based on images of an object
CN110648274B (en) Method and device for generating fisheye image
CN108492017B (en) Product quality information transmission method based on augmented reality
CN106023307B (en) Quick reconstruction model method based on site environment and system
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN109934873B (en) Method, device and equipment for acquiring marked image
CN112581632B (en) House source data processing method and device
WO2022021782A1 (en) Method and system for automatically generating six-dimensional posture data set, and terminal and storage medium
CN112712487A (en) Scene video fusion method and system, electronic equipment and storage medium
WO2019012632A1 (en) Recognition processing device, recognition processing method, and program
CN112669436A (en) Deep learning sample generation method based on 3D point cloud
CN112802208B (en) Three-dimensional visualization method and device in terminal building
CN104732560A (en) Virtual camera shooting method based on motion capture system
WO2024088071A1 (en) Three-dimensional scene reconstruction method and apparatus, device and storage medium
Lee et al. Real time 3D avatar for interactive mixed reality
CN116664895B (en) Image and model matching method based on AR/AI/3DGIS technology
CN115063485B (en) Three-dimensional reconstruction method, device and computer-readable storage medium

Legal Events

Code    Description
PB01    Publication
SE01    Entry into force of request for substantive examination
GR01    Patent grant