CN115082648B - Marker model binding-based AR scene arrangement method and system - Google Patents

Marker model binding-based AR scene arrangement method and system

Info

Publication number
CN115082648B
Authority
CN
China
Prior art keywords
scene
model
marker
target
target scene
Prior art date
Legal status
Active
Application number
CN202211009755.9A
Other languages
Chinese (zh)
Other versions
CN115082648A (en)
Inventor
宋广华
冯恩泽
王朋
张晓刚
许强
隆龙
王光永
Current Assignee
Haikan Network Technology Shandong Co ltd
Original Assignee
Haikan Network Technology Shandong Co ltd
Priority date
Filing date
Publication date
Application filed by Haikan Network Technology Shandong Co ltd
Priority to CN202211009755.9A
Publication of CN115082648A
Application granted
Publication of CN115082648B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of image data processing, and particularly relates to an AR scene arrangement method and system based on marker model binding. When a scene is matched, each model copy stays anchored at a fixed position in the real scene, so development is decoupled from scene arrangement: environment arrangement personnel can lay out a scene with a few simple operations and need no programming knowledge.

Description

Marker model binding-based AR scene arrangement method and system
Technical Field
The invention relates to the technical field of image data processing, and in particular to an AR scene arrangement method and system based on marker model binding in augmented reality technology.
Background
AR is short for Augmented Reality, a technology that fuses the real world with a virtual one: the image captured by a handheld mobile device's camera is merged with imagery collected from a 3D virtual world, and the fused image is shown on the device's screen. AR toolkits provide a series of general-purpose base components for this purpose, such as a spatial anchor component, a marker model binding component, a motion perception component, a physics component, and an illumination component.
Scene arrangement means that a developer places pre-made models at specific positions in a scene so that, when the scene is recognized, the models are displayed and a user can view them from different angles by moving the handheld device. This way of placing models has a serious limitation: because of multiple constraints, the developer cannot use the real scene as a positional reference, so pre-made models are placed at distances estimated from experience, and after every placement the program must be run to verify whether the positions are correct. Models therefore cannot be arranged in a recognized scene in an intuitive way. To explain why, the following concepts are needed: component, world coordinate, local coordinate, and camera.
Components are nodes in the AR world. A node may be a model, with its own coordinates, volume, rotation angle, and so on; it may be a volumeless point that provides some function; or it may be an empty node that exists only to express a parent-child relationship.
World coordinates and local coordinates are relative concepts. In the AR virtual world, all nodes are related through parent-child relationships in a tree structure: if a parent node moves, rotates, or scales, its child nodes undergo the same transformation, so each child node's coordinate relative to its parent is a local coordinate. A parent node may itself be the child of another node, and so on; the coordinate of the top-level parent is the world coordinate. As shown in fig. 1, node 1 and node 2 are primary parent nodes; node 1 has child node 1 and child node 2 beneath it, and child node 1 is in turn a secondary parent node with child node 5 and child node 6 beneath it.
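The parent-child coordinate relationship described above can be sketched in a few lines. This is purely illustrative and not part of the patent: the `Node` class, the node names, and the translation-only transform are all assumptions made to keep the example short.

```python
class Node:
    def __init__(self, name, local_pos, parent=None):
        self.name = name
        self.local_pos = local_pos      # position relative to the parent node
        self.parent = parent

    def world_pos(self):
        # Walk up the tree: a node's world position is its parent's world
        # position plus its own local offset (rotation and scale are
        # omitted to keep the sketch short).
        if self.parent is None:
            return self.local_pos       # a top-level parent uses world coordinates
        px, py, pz = self.parent.world_pos()
        lx, ly, lz = self.local_pos
        return (px + lx, py + ly, pz + lz)

root = Node("parent node 1", (0.0, 0.0, 0.0))
child1 = Node("child node 1", (1.0, 0.0, 0.0), parent=root)    # secondary parent
child5 = Node("child node 5", (0.0, 2.0, 0.0), parent=child1)

print(child5.world_pos())   # -> (1.0, 2.0, 0.0)
```

Moving `root` would shift the world position of every node beneath it, while each child's local coordinate stays unchanged, which is exactly the parent-child behaviour the description relies on.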
The virtual camera is a node that works like a real camera: it maps the virtual-world scene onto the user's interface. When the user moves, the motion perception function provided by the AR tool maps that movement onto the camera node in the virtual world, so the virtual camera moves accordingly. In general, the virtual camera represents the user's position, rotation angle, and so on in the virtual world, as shown in fig. 2.
Scene sampling is the process in which a user continuously samples and stores a target scene from multiple angles with the camera of a handheld mobile device. During sampling, the spatial anchor component (an AR base component) abstracts the data collected by the camera into points according to a built-in algorithm; as sampling continues, more points are collected, and normally the more complex the scene and the more sampling points there are, the better the sampling quality. Each such point is called an anchor, and because there are so many of them, the set of all anchors is called a cloud. During sampling, a virtual 3D space is generated in software from the real environment; the zero coordinate of this 3D space is the coordinate at which sampling started, a 3-dimensional vector (0, 0, 0). The sampled anchor information is riveted into this 3D space, each anchor having its own coordinate, which is a local coordinate relative to the space's zero coordinate. The AR base component stores the set of cloud anchor data as a scene and returns a unique scene ID to the application.
In the usual workflow, a developer configures a stored scene in the scene loader provided by the spatial anchor component to obtain its scene ID. The component opens the mobile device's camera and continuously scans the captured real-environment images, generating cloud anchor information to match against the stored scene. When the handheld device moves into the region to be matched and the corresponding scene is matched successfully, the spatial anchor component establishes a 3D virtual space in the AR application. This 3D virtual space has a world coordinate and is riveted to the real space, the rivet relation being determined by the cloud anchors generated during scene sampling.
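The matching behaviour just described is internal to the spatial anchor component, and the patent does not specify its algorithm. Purely as an illustration of the idea, matching freshly sampled anchors against stored clouds could look like the following sketch; the function names, the distance threshold, and the coverage-ratio criterion are all assumptions, not the component's actual logic.

```python
import math

def match_scene(live_anchors, stored_scenes, threshold=0.05, min_ratio=0.6):
    """Return the ID of the stored scene whose cloud anchors best cover the
    anchors sampled from the live camera feed (illustrative only)."""
    def close(a, b):
        return math.dist(a, b) <= threshold

    best_id, best_ratio = None, 0.0
    for scene_id, cloud in stored_scenes.items():
        # Count live anchors that land near some stored cloud anchor.
        hits = sum(1 for a in live_anchors if any(close(a, c) for c in cloud))
        ratio = hits / len(live_anchors) if live_anchors else 0.0
        if ratio >= min_ratio and ratio > best_ratio:
            best_id, best_ratio = scene_id, ratio
    return best_id

stored = {"scene-42": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]}
live = [(0.01, 0.0, 0.0), (1.02, 0.0, 0.0)]
print(match_scene(live, stored))   # -> scene-42
```

A real spatial anchor component matches visual feature descriptors rather than bare 3D points, but the outcome is the same: a scene ID for the matched cloud, or no match at all.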
Scene arrangement is the developer's work of placing models in the 3D virtual space. All added models are child nodes of the 3D virtual space, and the developer only needs to define each model's local coordinate within it for the model to appear in the matched scene. For example, suppose a real scene contains a wax figure, the scene has been sampled, and a top-hat model should appear on the wax figure's head when a user matches the scene and views it on the handheld device's screen. The developer would need to know the coordinate of the wax figure's head in the matched 3D virtual scene, but in practice the spatial anchor component does not expose the individual anchors of the cloud, so the developer cannot tell which anchors represent the head and cannot intuitively place the hat model there. Instead, the developer assigns the hat a temporary local coordinate from experience, runs the AR application to check the hat's position, modifies the program to adjust the coordinate, checks again, and repeats until the hat sits on the wax figure's head in the virtual space as desired. This process is complex and time-consuming, since the developer must repeatedly try new positions, and it becomes extremely unfriendly when the number of models is large.
Disclosure of Invention
The invention aims to provide an AR scene arrangement method based on marker model binding, in which a scene arrangement module is added between scene sampling and scene matching. The scheme uses the marker model binding function provided by the AR tool to help the user determine where a model should be placed, so developers no longer need to take part in model placement: development is decoupled from scene arrangement, and environment arrangement personnel can lay out a scene with simple operations and no programming knowledge.
The technical scheme adopted by the invention for solving the technical problems is as follows: a method of marker model binding based AR scene placement, comprising:
s1, binding a marker and a model to be placed, and setting a collision detector for the model to be placed;
s2, sampling a target scene, and recording and storing an ID of the target scene;
s3, acquiring a target scene in scene arrangement and a root coordinate of a virtual world of the target scene;
s4, determining a position where a model is required to be placed in a target scene, identifying a marker to obtain a target model, and copying the target model to obtain a model copy, wherein the specific steps are that the marker is placed at a position corresponding to the position where the model is required to be placed in the target scene in a real scene, a mobile device identifies the marker, the positions of the models bound with the marker are displayed, the models have own world coordinates, own rotation parameters and scaling parameters, the target model is selected, the model copy is copied at the same position, and the model copy keeps the same position, rotation parameters and scaling parameters as the target model;
s5, binding the model copy with a target scene, performing coordinate conversion to change the model copy from a primary node to a single child node under the target scene, and storing the model copy;
s6, repeating the steps S4 and S5, placing different models at different positions in the target scene, and storing corresponding model copies;
s7, in the matching mode, the matched target scene is loaded, and simultaneously, the child node model copy bound with the target scene is also loaded;
and S8, the mobile device is held by an operator to move to a position in a real scene corresponding to the target scene, the mobile device detects that the scene matching is successful, the virtual space represented by the target scene is loaded and displayed, the plurality of model copies bound with the virtual space can be synchronously displayed, and the model copies are selected as required to carry out scene arrangement.
The AR scene arrangement system based on marker model binding comprises a mobile device and an AR application. The AR application comprises an augmented reality base component and a scene arrangement module. The augmented reality base component includes a marker identification and model binding component, an input component, an illumination component, a physics component, a virtual-reality fusion component, a spatial anchor component, a motion perception component, a rendering component, and the like. The scene arrangement module comprises a scene management component, used for scene acquisition and scene deletion; a marker model binding management component, used for adding and deleting binding relations; and a scene arrangement component, which performs scene matching, marker identification and model display, model position storage, and model display.
The invention has the following beneficial effects: the method uses functions such as the image and model binding component provided by the augmented reality base component. Because each model has its own world coordinates together with its own rotation and scaling parameters, it suffices to copy the model in place, keeping the same position, rotation, and scaling parameters as the target model, and then bind the copied model copy to the target scene, so that the copy changes from a primary node into a single child node under the target scene.
Drawings
Fig. 1 illustrates a corresponding relationship between a virtual scene and a real scene of a camera in the prior art.
Fig. 2 is a schematic diagram illustrating a corresponding relationship between a virtual world and a display world in the prior art.
Fig. 3 is a scene diagram of a virtual world in the prior art.
FIG. 4 is a flow chart of the present invention.
FIG. 5 is a system framework of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
According to the AR scene arrangement method based on marker model binding shown in fig. 4, the method includes:
s1, binding a marker and a model to be placed, setting a collision detector for the model to be placed, adopting augmented reality application, namely AR application for short, calling a marker model binding component by the AR application, binding the marker and the model to be placed, and setting the collision detector for the model to be placed, wherein the collision detector is a function of a general basic component in an AR application module, can detect collision time of the model configured with the collision detector and inform a developer, and usually uses a picture with higher identification as the marker, and in the step, the developer can bind different models and marker pictures as much as possible;
s2, sampling a target scene, recording and storing an ID of the target scene, enabling the AR application to enter a sampling mode, calling a space anchor point component by the AR application, performing continuous multi-angle sampling on the target scene by using a camera of handheld mobile equipment, calling a storage function of the space anchor point component after acquiring enough sampling points, recording the ID of the stored scene, and forming the target scene;
s3, acquiring a target scene in the scene arrangement and the root coordinates of the virtual world of the target scene, wherein the specific steps are as follows:
the AR application enters a scene arrangement mode, when the AR application enters the scene arrangement mode, the AR application creates a virtual camera node for mapping the position of the AR application in a virtual world, the coordinate of the virtual camera is the root coordinate (0, 0) of the virtual world, and the movement of the mobile equipment is mapped into the virtual camera of the virtual world;
the AR application calls a scene loading function of the space anchor point component, configures the ID of the target scene in the step S2, the space anchor point component loads cloud anchor point data related to the space anchor point component, calls a camera of the mobile device to continuously sample the surrounding environment, generates a cloud anchor point to be matched, and tries to match the cloud anchor point of the target scene;
an operator holds a mobile device running an AR application to move to a target scene position in a real scene, a space anchor point component detects that the scene matching is successful and sends a message of successful scene matching to a developer, wherein the message carries specific attribute information of the successfully matched scene, and the scene is marked as a target scene I;
s4, determining a position of a model to be placed in a target scene, identifying a marker to obtain a target model, copying the target model to obtain a model copy, wherein the specific steps are that the marker is placed at a position corresponding to the model to be placed in the target scene in a real scene, a mobile device identifies the marker, the position of each model bound with the marker is displayed, the model has the own world coordinates, the own rotation parameters and the own scaling parameters, the target model is selected and the model copy is copied at the same position, the model copy keeps the same position, the rotation parameters and the scaling parameters as the target model, and the position I is taken as an example for further explanation, and the specific operation is as follows:
the method comprises the steps that an operator needs to determine a position where a model is to be placed, the position is recorded as position one, the operator holds mobile equipment running AR application by hands to move a position near the position one, so that the position one appears in the camera view of the mobile equipment and is kept clear as much as possible;
an operator places the picture of the marker in the step S1 at a position of a first position, at this time, the AR application detects that marker information appears in the field of view of the camera, and a model related to the marker is displayed on a screen of the mobile device, is continuously bound with the marker and moves or rotates along with the marker;
an operator finely adjusts the position and the rotation angle of the marker picture to enable the position and the rotation angle of the model bound by the marker in the virtual world to be as close to a real scene as possible, and the model is recorded as a first model to be placed;
an operator clicks a first model to be placed displayed in a screen of a mobile device running an AR application to trigger an event notification function of a collision detector of the mobile device to inform the AR application that the model is selected, and the first model to be placed is in a selected state;
an operator clicks a storage button to trigger a storage event, the AR application copies a model with the same attribute as that of the model I to be placed, the model I is marked as a placed model I, the position of the placed model I is the same as that of the model I to be placed, and the selected state of the model I to be placed is cancelled, wherein the placed model I is the model copy;
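The tap-to-select interaction in step S4 can be illustrated with a minimal hit test. This is a stand-in sketch only: the patent's collision detector is a 3D component of the AR toolkit, while the function below, its names, and the screen-space bounding boxes are assumptions made for illustration.

```python
def pick_model(tap_xy, models):
    """Return the name of the first model whose screen-space bounding box
    contains the tap point, i.e. the model the collision detector would
    report as selected (illustrative only)."""
    tx, ty = tap_xy
    for m in models:
        (x0, y0), (x1, y1) = m["bbox"]     # top-left and bottom-right corners
        if x0 <= tx <= x1 and y0 <= ty <= y1:
            return m["name"]
    return None                             # tap hit no model

models = [{"name": "model one to be placed", "bbox": ((100, 200), (180, 320))}]
print(pick_model((140, 260), models))   # -> model one to be placed
print(pick_model((10, 10), models))     # -> None
```

A production collision detector would instead cast a ray from the tap point into the 3D scene and test it against each model's collider, but the selection event it raises plays the same role as the return value here.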
s5, binding the model copy with a target scene, performing coordinate conversion to change the model copy from a primary node to a single child node under the target scene, and storing coordinates, rotation parameters and scaling parameters of the converted model copy, wherein taking the model copy with a model I placed therein as an example, the specific steps are as follows:
modifying the parent-child relationship attribute of the placed model I to modify the parent-child relationship attribute into a child node of the target scene I, and modifying the coordinate parameter of the placed model I;
because the father-son relationship is modified in the operation, the coordinate of the father-son relationship is changed into a local coordinate from a world coordinate, and coordinate conversion is needed to be carried out to ensure that the position of the father-son relationship is unchanged, and the coordinate conversion function of the augmented reality basic component is called to obtain the value of the local coordinate at the same position and assign the value to the placed model I;
storing the first target scene and the first set of child relationship data and coordinate data, the rotation parameters, the scaling parameters and the like of the placed model to proper positions, and storing the data to a local storage or a server for storage;
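The coordinate conversion in step S5, keeping a model visually fixed while it changes from a top-level node to a child of the target scene, amounts to expressing the model's world position in the scene node's local frame. A minimal sketch follows; it handles positions only (no rotation or scale), and the function and variable names are assumptions rather than the AR base component's actual API.

```python
def world_to_local(world_pos, parent_world_pos):
    # With translation-only parents, the local coordinate is simply the
    # offset from the new parent's world position; a full implementation
    # would invert the parent's complete transform matrix instead.
    return tuple(w - p for w, p in zip(world_pos, parent_world_pos))

# "Placed model one" sits at this world position while it is a primary node.
model_world = (2.0, 1.5, -3.0)
# World position of the matched target scene's root node.
scene_world = (2.0, 0.0, -4.0)

# Re-parent: assign the converted local coordinate so the model's
# on-screen position does not change.
model_local = world_to_local(model_world, scene_world)
print(model_local)   # -> (0.0, 1.5, 1.0)
```

After this assignment the model moves, rotates, and scales together with the target scene node, which is exactly why the copy stays locked to its real-world position when the scene is later matched.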
s6, repeating the steps S4 and S5 to place different models at different positions of the first target scene, and marking the models as a placed model II, a placed model III, a placed model IV and the like, wherein the placed model II and the placed model III are different model copies at different positions of the first target scene;
and S7, entering a matching mode, loading a target scene, and simultaneously loading a child node model copy bound with the target scene, namely in the matching mode, the AR application calls a scene loading function provided by the spatial anchor component, for example, configuring a scene ID to load target scene-cloud anchor data, and simultaneously loading stored model data including the parent-child relationship, the position, the rotation angle and the like among nodes, and binding the stored model I with the target scene I.
And S8, an operator holds the mobile device to move to a position in a real scene corresponding to the target scene, the mobile device detects that the scene matching is successful, the virtual space represented by the target scene is loaded and displayed, a plurality of model copies bound with the virtual space can be synchronously displayed, the model copies are selected as required to carry out scene arrangement, namely, the loaded target scene, the model copies and images collected by a camera of the handheld mobile device are fused and displayed on a user screen, and each model copy can ensure that the position in the virtual world can be locked with the position in the real scene, and scene arrangement is carried out.
The AR scene arrangement system based on marker model binding shown in fig. 5 includes a mobile device and an AR application. The AR application includes an augmented reality base component and a scene arrangement module. The augmented reality base component includes a marker recognition and model binding component, an input component, an illumination component, a physics component, a virtual-reality fusion component, a spatial anchor component, a motion perception component, a rendering component, and the like. The scene arrangement module includes a scene management component, used for scene collection and scene deletion; a marker model binding management component, used for adding and deleting binding relations; and a scene arrangement component, which performs scene matching, marker recognition and model display, model position storage, and model display.
Specifically, the spatial anchor component mainly provides a scene sampling function and a scene matching function. Scene sampling is the process of continuously sampling and storing a target scene from multiple angles through the handheld mobile device's camera; scene matching continuously compares images collected by that camera against the scene features gathered during sampling, to judge whether the device is within the target scene's range.
Specifically, the marker identification and model binding component binds markers to models, usually using pictures or photos as markers. The handheld mobile device continuously captures images of the surrounding scene through its camera and displays them on screen; when a marker picture appears on screen, the model bound to it is displayed above the picture on the device's screen. The component keeps tracking the marker picture, so when the picture moves or rotates, the bound model moves or rotates with it.
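The tracking behaviour above, where the bound model reuses the marker's pose every frame, can be sketched as follows. This is illustrative only: the function name, the yaw-angle representation, and the fixed placement offset are assumptions, not the component's interface.

```python
def model_pose(marker_pos, marker_yaw_deg, offset=(0.0, 0.1, 0.0)):
    """Place the bound model at a fixed offset above the tracked marker;
    when the marker moves or rotates, the model follows (sketch only)."""
    x, y, z = marker_pos
    ox, oy, oz = offset
    # The model inherits the marker's rotation directly.
    return (x + ox, y + oy, z + oz), marker_yaw_deg

# Frame 1: the marker is detected at one position.
print(model_pose((0.0, 0.0, -1.0), 0.0))    # -> ((0.0, 0.1, -1.0), 0.0)
# Frame 2: the operator nudges and rotates the marker; the model tracks it.
print(model_pose((0.2, 0.0, -1.0), 15.0))   # -> ((0.2, 0.1, -1.0), 15.0)
```

Recomputing this pose on every camera frame is what makes fine-tuning in step S4 possible: sliding or turning the physical marker picture immediately moves the virtual model shown on screen.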
Specifically, the motion perception component detects changes in the mobile device's gyroscope readings and camera images and analyzes the device's motion trajectory; movement or rotation of the device is mapped to movement or rotation in the virtual world, so that models and scenes in the virtual world can be observed from different angles by moving the device.
Specifically, the physics component simulates real-world physical effects in the virtual world, including collision simulation, motion simulation, speed calculation, and other functions.
Specifically, the illumination component provides simulated illumination in the virtual world, including the color, reflection, refraction, transmission of light, and other functions.
The present invention is not limited to the above embodiments; any structural change made under the teaching of the present invention that is identical or similar to the technical solution of the present invention falls within its protection scope.
The techniques, shapes, and configurations not described in detail herein are all known techniques.

Claims (3)

1. An AR scene arrangement method based on marker model binding, characterized by comprising the steps of:
s1, binding a marker and a model to be placed, and setting a collision detector for the model to be placed;
s2, sampling a target scene, and recording and storing an ID of the target scene;
s3, acquiring a target scene in scene arrangement and a root coordinate of a virtual world of the target scene;
s4, determining a position where a model is to be placed in a target scene, identifying a marker to obtain a target model, and copying the target model to obtain a model copy, wherein the specific steps are that the marker is placed at a position in a real scene corresponding to the position where the model is to be placed in the target scene, a mobile device identifies the marker, the position of each model bound with the marker is displayed and is continuously bound with the marker, the model has the world coordinates of the model and self rotation parameters and scaling parameters along with the movement or rotation of the marker, the position and the rotation angle of a marker picture are finely adjusted, so that the position and the rotation angle of the model bound with the marker in a virtual world are as close to the real scene as possible, the target model is selected and the model copy is copied at the same position, and the model copy keeps the same position, rotation parameters and scaling parameters as the target model;
s5, binding the model copy with a target scene, carrying out coordinate conversion to change the model copy from a primary node to a single child node under the target scene, and storing the model copy, wherein the coordinate conversion is realized by modifying a parent-child relationship attribute of the model copy, modifying the model copy to a child node of the target scene, modifying a coordinate parameter of the model copy, calling a coordinate conversion function of the augmented reality basic component, obtaining a value of a local coordinate at the same position, assigning the value to the model copy, and storing parent-child relationship data, coordinate data, a rotation parameter and a scaling parameter of the target scene and the model copy;
s6, repeating the steps S4 and S5, placing different models at different positions in the target scene, and storing corresponding model copies;
s7, in the matching mode, loading the matched target scene, and meanwhile, binding a child node model copy with the loaded target scene;
and S8, the handheld mobile equipment moves to a position in a real scene corresponding to the target scene, the mobile equipment detects that the scene matching is successful, loads and displays a virtual space represented by the target scene, synchronously displays a plurality of model copies bound with the virtual space, and selects the model copies to arrange the scene according to the requirement.
2. The method of claim 1, characterized in that in step S3, the target scene in the scene arrangement and the root coordinate of its virtual world are obtained by the following specific steps:
entering scene arrangement mode and creating a virtual camera node to map the application's position into the virtual world, the virtual camera's coordinate being the root coordinate (0, 0, 0) of the virtual world, with movement of the mobile device mapped onto the virtual camera;
calling the scene loading function of the spatial anchor component and configuring the target scene's ID; the spatial anchor component loads the related cloud anchor data, calls the mobile device's camera to continuously sample the surrounding environment, generates cloud anchors to be matched, and tries to match them against the target scene's cloud anchors;
moving the handheld mobile device running the AR application to the target scene's position in the real scene; the spatial anchor component detects that scene matching has succeeded and sends a success message carrying the specific attribute information of the matched scene, which is the target scene.
3. A system for implementing the marker model binding based AR scene arrangement method of any one of claims 1 to 2, characterized in that it comprises a mobile device, an augmented reality base component, and a scene arrangement module; the scene arrangement module comprises a scene management component used for scene acquisition and scene deletion, a marker model binding management component used for adding and deleting binding relations, and a scene arrangement component that performs scene matching, marker identification and model display, model position storage, and model display.
CN202211009755.9A 2022-08-23 2022-08-23 Marker model binding-based AR scene arrangement method and system Active CN115082648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211009755.9A CN115082648B (en) 2022-08-23 2022-08-23 Marker model binding-based AR scene arrangement method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211009755.9A CN115082648B (en) 2022-08-23 2022-08-23 Marker model binding-based AR scene arrangement method and system

Publications (2)

Publication Number Publication Date
CN115082648A CN115082648A (en) 2022-09-20
CN115082648B true CN115082648B (en) 2023-03-24

Family

ID=83244802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211009755.9A Active CN115082648B (en) 2022-08-23 2022-08-23 Marker model binding-based AR scene arrangement method and system

Country Status (1)

Country Link
CN (1) CN115082648B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115421626B (en) * 2022-11-02 2023-02-24 海看网络科技(山东)股份有限公司 AR virtual window interaction method based on mobile terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320333A (en) * 2017-12-29 2018-07-24 中国银联股份有限公司 The scene adaptive method of scene ecad virtual reality conversion equipment and virtual reality

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9185402B2 (en) * 2013-04-23 2015-11-10 Xerox Corporation Traffic camera calibration update utilizing scene analysis
CN109377560A (en) * 2018-10-26 2019-02-22 北京理工大学 A kind of method of Outdoor Augmented Reality military simulation-based training
CN110335292B (en) * 2019-07-09 2021-04-30 北京猫眼视觉科技有限公司 Method, system and terminal for realizing simulation scene tracking based on picture tracking
CN110478901B (en) * 2019-08-19 2023-09-22 Oppo广东移动通信有限公司 Interaction method and system based on augmented reality equipment
CN110568934B (en) * 2019-10-18 2024-03-22 福州大学 Low-error high-efficiency multi-marker-diagram augmented reality system
CN111880649A (en) * 2020-06-24 2020-11-03 合肥安达创展科技股份有限公司 Demonstration method and system of AR viewing instrument and computer readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108320333A (en) * 2017-12-29 2018-07-24 中国银联股份有限公司 The scene adaptive method of scene ecad virtual reality conversion equipment and virtual reality

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Hou-Ju Zhang; Xiu-Ying Shi; Jian-Jun Peng; Ji-Ping Li; Rong-Li G. "Interaction between virtual scene based on Kinect and Unity3D". IEEE. 2017. *
A scene feature matching method for visual VSLAM under dynamic illumination; Zhang Huili et al.; Electronic Design Engineering; 2018-12-20 (No. 24); pp. 1-5 *
A real-time markerless registration algorithm based on template tracking; Wang Yongtian et al.; Journal of Image and Graphics; 2008-09-15 (No. 09); pp. 1812-1819 *
Design and implementation of an augmented reality system software platform; Ni Xiao et al.; Computer Engineering and Design; 2009-05-16 (No. 09); pp. 2297-2300 *

Also Published As

Publication number Publication date
CN115082648A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN107223269A (en) Three-dimensional scene positioning method and device
CN110765620B (en) Aircraft visual simulation method, system, server and storage medium
CN108648269A (en) The monomerization approach and system of three-dimensional building object model
US11030808B2 (en) Generating time-delayed augmented reality content
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
CN110059351A (en) Mapping method, device, terminal and the computer readable storage medium in house
CN113741698A (en) Method and equipment for determining and presenting target mark information
US20180204387A1 (en) Image generation device, image generation system, and image generation method
JP2016027480A (en) Information processing system, information processing apparatus, control method of the system, and program
CN106611438B (en) Local area updating and map cutting method and device of three-dimensional simulation map
CN115082648B (en) Marker model binding-based AR scene arrangement method and system
KR20210057943A (en) Method, apparatus and computer program for conducting automatic driving data labeling
CN114782646A (en) House model modeling method and device, electronic equipment and readable storage medium
Li et al. Outdoor augmented reality tracking using 3D city models and game engine
US11625900B2 (en) Broker for instancing
CN111127661B (en) Data processing method and device and electronic equipment
CN117078888A (en) Virtual character clothing generation method and device, medium and electronic equipment
CN115619990A (en) Three-dimensional situation display method and system based on virtual reality technology
US10921950B1 (en) Pointing and interaction control devices for large scale tabletop models
Merckel et al. Multi-interfaces approach to situated knowledge management for complex instruments: First step toward industrial deployment
WO2020067204A1 (en) Learning data creation method, machine learning model generation method, learning data creation device, and program
Kumar et al. Using flutter to develop a hybrid application of augmented reality
Hansen et al. Augmented Reality for Infrastructure Information: Challenges with information flow and interactions in outdoor environments especially on construction sites
Wang et al. Improving Construction Demonstrations by Integrating BIM, UAV, and VR
Giertsen et al. An open system for 3D visualisation and animation of geographic information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant