CN113610984B - Augmented reality method based on HoloLens 2 holographic glasses - Google Patents

Augmented reality method based on HoloLens 2 holographic glasses

Info

Publication number
CN113610984B
Authority
CN
China
Prior art keywords
model
assembly
script
original
morphology
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110666042.9A
Other languages
Chinese (zh)
Other versions
CN113610984A (en)
Inventor
王勇
杜杰
杨海根
徐森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202110666042.9A
Publication of CN113610984A
Application granted
Publication of CN113610984B
Active legal-status Current
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses an augmented reality method based on HoloLens 2 holographic glasses. After the HoloLens 2 development environment is set up, an original form model, an exploded form model and an intelligent prompt form model are designed, integrated and stored; scripts are then designed to control the explosion effect, the intelligent assembly effect and the automatic out-of-view prompt effect of the model data set. Because the position information of each form is stored in advance in the models of the different forms, model data are obtained more quickly and efficiently when designing the explosion script and the intelligent assembly script, and the system is easy to extend and maintain. The invention provides two different modes of interaction, enabling a user to cope with more complex real assembly environments. The method is applicable to virtual assembly operations within the framework for any 3D model conforming to the standard formats of the Unity3D engine.

Description

Augmented reality method based on HoloLens 2 holographic glasses
Technical Field
The invention relates to an augmented reality method based on HoloLens 2 holographic glasses, and belongs to the technical field of augmented reality application development.
Background
Assembly of large and complex equipment is an important link in the production process. Because some assembly environments are cramped and unsuitable for guidance by multiple persons, labor training is costly, its cycle long and its efficiency low, so traditional assembly activity is not efficient enough.
Augmented reality is a technology that accurately superimposes computer-generated virtual images onto the real physical world in real time, allowing a user to interact with the computer-generated virtual objects through a device. Augmented reality lets users experience a world in which the virtual and the real interact, moving simulation from the two-dimensional computer screen into the three-dimensional space of the real world, where it better fits the actual working environment of human beings. By modeling the assembly equipment and projecting the operation step information into real space, this technology can reduce the cost and time of personnel training and improve the overall quality of assembly.
Devices for developing augmented reality applications fall into two main categories. One is the smartphone equipped with a depth-sensing camera; augmented reality applications developed for mobile phones are mainly for entertainment and leisure. The other is head-mounted smart glasses, represented by Microsoft's HoloLens 2, which completely free both hands so that they can simultaneously touch the computer-generated virtual model and the real model, serving industrial assembly. The invention uses the HoloLens 2 smart glasses as the development hardware and the Unity3D engine as the development platform, and provides an augmented reality scene framework for developing assembly applications.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an augmented reality method based on HoloLens 2 holographic glasses.
To achieve the above object, the present invention provides an augmented reality method based on HoloLens 2 holographic glasses, comprising:
constructing a 3D model to obtain an original form model, an exploded form model and an intelligent prompt form model;
integrating the original form model, the exploded form model and the intelligent prompt form model into an integrated model;
the user wears the HoloLens 2 holographic glasses;
the user scales the integrated model: pressing the enable button of the 4x1 button menu, or issuing the voice command "enable", starts the BoundingBox script so that a bounding box appears around the integrated model; the user grabs the four corners of the bounding box and drags outwards or inwards with gestures to zoom;
when scaling is finished, pressing the disable button of the 4x1 button menu, or issuing the voice command "disable", closes the BoundingBox script and hides the bounding box around the integrated model;
the user explodes the original form model into the exploded form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode", triggers the ToggleView function in the BreakdownViewController script, and each assembly module of the original form model automatically moves to the position of the corresponding assembly module of the exploded form model;
the user restores the original form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode" again, triggers the ToggleView function in the BreakdownViewController script, and each assembly module automatically moves back from its exploded position to its original position;
when the PartAssemblyHandler script detects that the distance between an assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm, a bonding event is triggered; the bonding event assigns the world coordinates of the bright-material assembly module to the world coordinates of the grabbed assembly module, so that the two modules coincide;
when the DirectionHandler script detects that the ray emitted by the camera does not intersect the box collider, or the user clicks the intelligent prompt button, the user is judged to have lost sight of the model; the SolverHandler script obtains the rotation direction of the head, sets the arrow to point opposite to the head rotation, and activates the display of the arrow prefab.
Preferably, constructing the 3D model comprises:
setting up an environment for developing the HoloLens 2 holographic glasses in a Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project;
classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms;
integrating the 3D models of different forms and placing them into an assembly scene;
designing scripts to control the behavior of each model component;
setting different input instructions to manipulate the 3D model.
Preferably, setting up an environment for developing the HoloLens 2 holographic glasses in the Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project comprises:
step 101: creating a new Unity3D project and a new scene as the assembly scene; installing the UWP platform support for developing the HoloLens 2 holographic glasses in the Unity3D project; downloading the MRTK toolkit from the Microsoft repository on GitHub and importing it into the Assemble Project;
switching the development platform to UWP in the Assemble Project settings, and saving the assembly scene;
step 102: converting the 3D model resources to be imported into the Unity3D project into any format supported by the Unity3D engine, and adding the 3D model resources in the resource manager of the Unity3D project.
Preferably, classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms comprises:
step 201: double-clicking the imported 3D model resources in the resource manager of the Unity3D project to open them, and classifying them into assembly modules;
to ensure that the world coordinate values of the original form model are not disturbed during classification, creating an empty object as a parent object, making each classified assembly module a child of the parent object, and naming the parent object after its class;
after classification is completed, packaging each assembly module with its parent and child objects and saving the result as the original form model;
copying the original form model twice and saving the copies in the resource manager;
step 202: opening the first copy of the original form model in the resource manager and naming it the exploded form model;
dragging each classified parent object to its designated position in the editor window, or selecting each parent object and setting the value of the Position variable of the Transform component shown in its Inspector panel;
step 203: opening the second copy of the original form model in the resource manager and naming it the intelligent prompt form model;
determining all assembly modules of the intelligent prompt form model that can be assembled, and copying and saving these modules separately;
for the separately saved assembly modules, replacing their materials with a material whose brightness is above a set brightness threshold.
Preferably, integrating the 3D models of different forms and placing them into the assembly scene comprises:
step 204: creating a new empty prefab in the resource manager and opening it for editing; dragging the original form model, the exploded form model and the intelligent prompt form model into the Hierarchy panel, and setting the Position variables of the Transform components of all assembly modules of the three models to be consistent;
hiding all assembly modules of the original form model, hiding all assembly modules of the exploded form model, and displaying all assembly modules of the intelligent prompt form model;
saving the original form model, the exploded form model and the intelligent prompt form model together as the integrated model;
step 205: placing a newly created empty GameObject into the assembly scene saved in step 101;
adding the integrated model from the resource manager to the assembly scene and setting the empty GameObject as its parent; adding the assembly modules copied in step 203 to the assembly scene and likewise setting the empty GameObject as their parent.
Preferably, designing the scripts to control the behavior of each model component comprises:
step 301: using the ObjectManipulator script of the MRTK toolkit to implement movement and rotation of the original form model, the exploded form model and the intelligent prompt form model, and using the BoundingBox script of the MRTK toolkit to implement their scaling;
step 302: creating a BreakdownViewController script and writing a ToggleView function in it; setting variables to acquire the world coordinates of each assembly module of the original form model and of the exploded form model; setting a Boolean variable IsInOriginalPos so that the ToggleView function controls both the explosion and the restoration of the assembly modules of the exploded form model: when IsInOriginalPos is true, ToggleView explodes the assembly modules and sets IsInOriginalPos to false; when IsInOriginalPos is false, ToggleView restores the assembly modules and sets IsInOriginalPos to true; mounting the BreakdownViewController script on the integrated model;
step 303: creating a PartAssemblyHandler script which acquires the world coordinates of the bright-material assembly modules of the intelligent prompt form model and of the assembly modules of the exploded form model, and triggers a bonding event when the distance between the assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm;
writing a ToggleHint function in the PartAssemblyHandler script for toggling the display and hiding of the bright-material assembly modules;
step 304: creating a DirectionHandler script and mounting it, together with the SolverHandler script of the MRTK toolkit, on the scene camera; mounting a box collider on the integrated model and setting the tracked target type to the head; making a bright arrow prefab, hiding it and placing it in the assembly scene; the DirectionHandler script performs ray detection;
when the ray does not intersect the box collider, the SolverHandler script solves the direction and activates the bright arrow to point toward the integrated model in the scene.
Preferably, setting different input instructions to manipulate the 3D model comprises:
step 401: on the basis of step 301, adding buttons that enhance the interaction between the user and the scene, including an enable button for starting the BoundingBox script, a disable button for closing the BoundingBox script, an explode button that triggers the BreakdownViewController script to explode the original form model into the exploded form model and to restore it, a hint button for triggering the ToggleHint function of the PartAssemblyHandler script, and an intelligent prompt button for triggering the DirectionHandler and SolverHandler scripts;
step 402: calling the voice input trigger function of the MRTK toolkit and making the buttons of step 401 voice-triggered;
step 403: creating a ToolTipSpawner script and mounting it on each assembly module under the GameObject of step 205; the ToolTipSpawner script inherits the BaseFocusHandler class and the IMixedRealityInputHandler interface of the MRTK toolkit, which enable corresponding text information, including name and size, to be displayed automatically when the user gazes at a model;
generating the scene as an .sln file and deploying it to the HoloLens 2 holographic glasses.
The invention has the following beneficial effects:
the invention is applicable to 3D models of any form that conform to the standard formats of the Unity3D engine, performing virtual assembly operations and realizing the functions of the framework. The invention provides a general scene framework for developing augmented reality assembly applications, which offers convenience to later developers and helps them quickly build basic augmented reality assembly scene applications.
Drawings
FIG. 1 is an overall architecture diagram of the method of the present invention;
FIG. 2 is a diagram of the method of making the assembly model data set;
FIG. 3 is a flow diagram of the assembly model explosion script;
FIG. 4 is a structural diagram of the assembly scene framework.
Detailed Description
The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, if a directional indication (such as up, down, left, right, front or rear) is involved in the embodiments of the present invention, the directional indication is used only to explain the relative positional relationship of the components in a specific posture; if the specific posture changes, the directional indication changes accordingly.
An augmented reality method based on HoloLens 2 holographic glasses comprises:
constructing a 3D model to obtain an original form model, an exploded form model and an intelligent prompt form model;
integrating the original form model, the exploded form model and the intelligent prompt form model into an integrated model;
the user wears the HoloLens 2 holographic glasses;
the user scales the integrated model: pressing the enable button of the 4x1 button menu, or issuing the voice command "enable", starts the BoundingBox script so that a bounding box appears around the integrated model; the user grabs the four corners of the bounding box and drags outwards or inwards with gestures to zoom;
when scaling is finished, pressing the disable button of the 4x1 button menu, or issuing the voice command "disable", closes the BoundingBox script and hides the bounding box around the integrated model;
the user explodes the original form model into the exploded form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode", triggers the ToggleView function in the BreakdownViewController script, and each assembly module of the original form model automatically moves to the position of the corresponding assembly module of the exploded form model;
the user restores the original form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode" again, triggers the ToggleView function in the BreakdownViewController script, and each assembly module automatically moves back from its exploded position to its original position;
when the PartAssemblyHandler script detects that the distance between an assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm, a bonding event is triggered; the bonding event assigns the world coordinates of the bright-material assembly module to the world coordinates of the grabbed assembly module, so that the two modules coincide;
when the DirectionHandler script detects that the ray emitted by the camera does not intersect the box collider, or the user clicks the intelligent prompt button, the user is judged to have lost sight of the model; the SolverHandler script obtains the rotation direction of the head, sets the arrow to point opposite to the head rotation, and activates the display of the arrow prefab.
Preferably, constructing the 3D model comprises:
setting up an environment for developing the HoloLens 2 holographic glasses in a Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project;
classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms;
integrating the 3D models of different forms and placing them into an assembly scene;
designing scripts to control the behavior of each model component;
setting different input instructions to manipulate the 3D model.
Preferably, setting up an environment for developing the HoloLens 2 holographic glasses in the Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project comprises:
Step 101: create a new Unity3D project named Assemble Project; install the UWP platform support for developing the HoloLens 2 in the Unity3D project; download the MRTK toolkit from the Microsoft repository on GitHub and import it into the Assemble Project. Switch the development platform to UWP in the Assemble Project settings and configure the MRTK files. The UI elements of MRTK require the TextMeshPro essential resources: in the Unity project menu, select "Window" > "TextMeshPro" > "Import TMP Essential Resources" to open the "Import Unity Package" window, click the "All" button to make sure all resources are selected, and then click the "Import" button to import the assets. Next, select "Edit" > "Project Settings" in the Unity3D project to open the "Project Settings" window, select "XR Plug-in Management" > "Install XR Plug-in Management" to install the XR plug-in; after the XR plug-in management is installed, go to the "UWP" settings and check the "Initialize XR on Startup" and "Windows Mixed Reality" checkboxes. After the Windows Mixed Reality SDK is imported, the project displays an "MRTK Project Configurator" window; select MS HRTF Spatializer from the "Audio Spatializer" drop-down list, then click the "Apply" button to apply the settings. After these settings, save the scene.
Step 102: convert the 3D model to be imported into the Unity3D project into any format supported by the Unity3D engine, and add the 3D model conforming to the required format in the resource manager of the Unity3D project.
Preferably, as shown in FIG. 2, classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms comprises:
step 201: double-clicking the imported 3D model resources in the resource manager of the Unity3D project to open them, and classifying them into assembly modules;
to ensure that the world coordinate values of the original form model are not disturbed during classification, creating an empty object as a parent object, making each classified assembly module a child of the parent object, and naming the parent object after its class;
after classification is completed, packaging each assembly module with its parent and child objects and saving the result as the original form model;
copying the original form model twice and saving the copies in the resource manager;
step 202: opening the first copy of the original form model in the resource manager and naming it the exploded form model;
dragging each classified parent object to its designated position in the editor window, or selecting each parent object and setting the value of the Position variable of the Transform component shown in its Inspector panel;
step 203: opening the second copy of the original form model in the resource manager and naming it the intelligent prompt form model;
determining all assembly modules of the intelligent prompt form model that can be assembled, and copying and saving these modules separately;
for the separately saved assembly modules, replacing their materials with a material whose brightness is above a set brightness threshold (a code sketch of this replacement follows).
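By way of illustration only, the material replacement of step 203 can also be performed in code rather than in the editor. The following Unity C# sketch is not part of the claimed method; the brightMaterial field is a hypothetical high-brightness material prepared in the project and assigned in the Inspector.

```csharp
using UnityEngine;

// Illustrative sketch of the material replacement of step 203.
// brightMaterial is a hypothetical high-brightness material asset.
public class HintMaterialSwapper : MonoBehaviour
{
    public Material brightMaterial;

    // Replace the material of every renderer under the assembly module.
    public void ApplyHintMaterial(GameObject assemblyModule)
    {
        foreach (Renderer r in assemblyModule.GetComponentsInChildren<Renderer>())
        {
            r.material = brightMaterial;
        }
    }
}
```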
Preferably, integrating the 3D models of different forms and placing them into the assembly scene comprises:
step 204: creating a new empty prefab in the resource manager and opening it for editing; dragging the original form model, the exploded form model and the intelligent prompt form model into the Hierarchy panel, and setting the Position variables of the Transform components of all assembly modules of the three models to be consistent;
hiding all assembly modules of the original form model, hiding all assembly modules of the exploded form model, and displaying all assembly modules of the intelligent prompt form model;
saving the original form model, the exploded form model and the intelligent prompt form model together as the integrated model;
step 205: placing a newly created empty GameObject into the assembly scene saved in step 101;
adding the integrated model from the resource manager to the assembly scene and setting the empty GameObject as its parent; adding the assembly modules copied in step 203 to the assembly scene and likewise setting the empty GameObject as their parent.
Preferably, designing the scripts to control the behavior of each model component comprises:
step 301: using the ObjectManipulator script of the MRTK toolkit to implement movement and rotation of the original form model, the exploded form model and the intelligent prompt form model, and using the BoundingBox script of the MRTK toolkit to implement their scaling;
step 302: as shown in FIG. 3, creating a BreakdownViewController script and writing a ToggleView function in it; setting variables to acquire the world coordinates of each assembly module of the original form model and of the exploded form model; setting a Boolean variable IsInOriginalPos so that the ToggleView function controls both the explosion and the restoration of the assembly modules of the exploded form model: when IsInOriginalPos is true, ToggleView explodes the assembly modules and sets IsInOriginalPos to false; when IsInOriginalPos is false, ToggleView restores the assembly modules and sets IsInOriginalPos to true; mounting the BreakdownViewController script on the integrated model;
step 303: creating a PartAssemblyHandler script which acquires the world coordinates of the bright-material assembly modules of the intelligent prompt form model and of the assembly modules of the exploded form model, and triggers a bonding event when the distance between the assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm;
writing a ToggleHint function in the PartAssemblyHandler script for toggling the display and hiding of the bright-material assembly modules;
step 304: creating a DirectionHandler script and mounting it, together with the SolverHandler script of the MRTK toolkit, on the scene camera; mounting a box collider on the integrated model and setting the tracked target type to the head; making a bright arrow prefab, hiding it and placing it in the assembly scene; the DirectionHandler script performs ray detection;
when the ray does not intersect the box collider, the SolverHandler script solves the direction and activates the bright arrow to point toward the integrated model in the scene.
Preferably, setting different input instructions to manipulate the 3D model comprises:
step 401: on the basis of step 301, adding buttons that enhance the interaction between the user and the scene, including an enable button for starting the BoundingBox script, a disable button for closing the BoundingBox script, an explode button that triggers the BreakdownViewController script to explode the original form model into the exploded form model and to restore it, a hint button for triggering the ToggleHint function of the PartAssemblyHandler script, and an intelligent prompt button for triggering the DirectionHandler and SolverHandler scripts;
step 402: calling the voice input trigger function of the MRTK toolkit and making the buttons of step 401 voice-triggered;
step 403: creating a ToolTipSpawner script and mounting it on each assembly module under the GameObject of step 205; the ToolTipSpawner script inherits the BaseFocusHandler class and the IMixedRealityInputHandler interface of the MRTK toolkit, which enable corresponding text information, including name and size, to be displayed automatically when the user gazes at a model;
generating the scene as an .sln file and deploying it to the HoloLens 2 holographic glasses.
The HoloLens 2 holographic glasses are available in a number of existing models; a person skilled in the art can select a suitable model according to actual requirements, and this embodiment does not enumerate them one by one.
The running flow of the BreakdownViewController script is as follows:
1. Before the scene runs, the assembly modules of the original form model of step 201 are stored in an array defaultPositions, and the assembly modules of the exploded form model of step 202 are stored in an array explodePositions.
2. After the scene starts, the transform.position property of the Unity3D engine is read to store the world coordinates of the assembly modules held in the defaultPositions and explodePositions arrays, respectively.
3. The Update function of the Unity3D engine is called to detect in every frame whether an explosion event has been triggered; the explosion event is triggered through the ToggleView function, which serves as the external interface for the gesture interaction of step 401 and the voice interaction of step 402, see FIG. 4.
4. If the explosion event is triggered and the Boolean value isInDefaultPosition is true, jump to step 5.
5. Move the original assembly modules to the exploded positions, i.e. assign the world coordinates stored in the explodePositions array to the world coordinates of the original assembly modules. After execution, jump back to step 3.
6. If the explosion event is triggered and the Boolean value isInDefaultPosition is false, jump to step 7.
7. Reassign the world coordinates stored in the defaultPositions array to the world coordinates of the moved original assembly modules; after execution, jump back to step 3.
8. If the explosion event is not triggered, keep repeating step 3.
The BreakdownViewController explosion control script is mounted on the integrated model of step 204, see FIG. 4. A code sketch of this flow follows.
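As referenced above, the flow can be condensed into a minimal Unity C# sketch. The member names (defaultPositions, explodePositions, isInDefaultPosition, ToggleView) follow the description; the per-frame event polling in Update is replaced here by calling ToggleView directly from the button or voice command, and the instantaneous move is a simplification, not necessarily the claimed implementation.

```csharp
using UnityEngine;

// Minimal sketch of the BreakdownViewController flow described above.
// The module arrays are assigned in the Inspector; movement is instantaneous.
public class BreakdownViewController : MonoBehaviour
{
    public Transform[] originalModules;   // assembly modules of the original form model
    public Transform[] explodedModules;   // the same modules in the exploded form model

    private Vector3[] defaultPositions;   // world coordinates before explosion
    private Vector3[] explodePositions;   // world coordinates after explosion
    private bool isInDefaultPosition = true;

    private void Start()
    {
        defaultPositions = new Vector3[originalModules.Length];
        explodePositions = new Vector3[explodedModules.Length];
        for (int i = 0; i < originalModules.Length; i++)
        {
            defaultPositions[i] = originalModules[i].position;   // transform.position
            explodePositions[i] = explodedModules[i].position;
        }
    }

    // External interface called by the explode button or the voice command.
    public void ToggleView()
    {
        Vector3[] target = isInDefaultPosition ? explodePositions : defaultPositions;
        for (int i = 0; i < originalModules.Length; i++)
        {
            originalModules[i].position = target[i];
        }
        isInDefaultPosition = !isInDefaultPosition;
    }
}
```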
The implementation process of the PartAssemblyHandler script is as follows:
1. After the scene starts, the Update function is called every frame to obtain the world coordinates a of the bright-material assembly module and the world coordinates b of the grabbed copy of the assembly module.
2. The distance between the two world coordinates is computed by calling Vector3.Distance(a, b) of the Unity3D engine, and the result is compared with the preset range.
3. If Vector3.Distance(a, b) is between 0.001 cm and 0.1 cm, the world coordinates of the bright-material assembly module are assigned to the world coordinates of the grabbed copy so that the two models coincide, which visually realizes the process of bonding the grabbed copy onto the original form model.
4. If the result of Vector3.Distance(a, b) is not within the set range, steps 1 and 2 are repeated until the condition is satisfied.
The script also implements a ToggleHint function for toggling the display and hiding of the bright-material assembly module; it is called by the gesture interaction of step 401 and the voice interaction of step 402, see FIG. 4. A code sketch of this handler follows.
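A minimal sketch of this bonding check, assuming the module references are assigned in the Inspector. The 0.001–0.1 window is copied from the description; Unity world units are meters, so the actual thresholds may need rescaling.

```csharp
using UnityEngine;

// Minimal sketch of the PartAssemblyHandler bonding check described above.
// hintModule is the bright-material module of the intelligent prompt form
// model; grabbedModule is the copy the user is manipulating.
public class PartAssemblyHandler : MonoBehaviour
{
    public Transform hintModule;
    public Transform grabbedModule;
    public GameObject hintVisual;   // bright-material module shown or hidden by ToggleHint

    private void Update()
    {
        float d = Vector3.Distance(hintModule.position, grabbedModule.position);
        if (d > 0.001f && d < 0.1f)
        {
            // Bonding event: snap the grabbed copy onto the hint position.
            grabbedModule.position = hintModule.position;
        }
    }

    // Called by the hint button or the corresponding voice command.
    public void ToggleHint()
    {
        hintVisual.SetActive(!hintVisual.activeSelf);
    }
}
```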
Step 304 specifically comprises:
The newly created DirectionHandler script is mounted on the scene camera and uses the ray detection function. A model that uses ray detection must carry a box collider, so a box collider is added to the parent empty GameObject of step 205. The DirectionHandler script emits a directional ray as follows: using the Ray class of the Unity3D engine, creating a ray requires specifying the ray's origin and direction; the scene camera is set as the origin, and the direction is set to the camera's 0-degree angle, i.e. directly in front of the camera. The SolverHandler script of the MRTK toolkit is then also mounted on the scene camera, with the tracked target type set to the head, so that the camera's field of view and the emitted ray are bound to the head and change direction as the head rotates. When the head rotates and the ray no longer hits the box collider of the parent empty GameObject of step 205, i.e. when the user has lost sight of the scene, the SolverHandler script obtains the rotation direction of the head and solves automatically, sets the arrow to point opposite to the head rotation, and activates the display of the arrow prefab.
Step 401: set up gesture interaction. The ObjectManipulator script and the BoundingBox script of step 301 already provide basic gesture interaction. To enhance it further, a 4x1 button menu from the MRTK toolkit is used in the assembly scene; its four buttons control the BoundingBox script of step 301, the BreakdownViewController script of step 302 and the PartAssemblyHandler script of step 303, and respectively realize the following (a code sketch of the wiring follows this list):
1. Open the scene bounding box: the button's click event calls the BoundingBox script of step 301; when the button is clicked, a bounding box appears around the whole scene, and the scene can be enlarged or reduced through gesture interaction, adapting it to spaces of different sizes.
2. Close the scene bounding box: once the scene has been adjusted to a suitable size and position, clicking this button closes the scene bounding box.
3. Explosion control: this button controls the explosion visual effect of the integrated model of step 204; its click event calls the ToggleView function of the BreakdownViewController script of step 302. The switch is two-way: after the model is exploded, clicking again restores it.
4. Hint switch: this button controls the display and hiding of the bright-material assembly modules on the integrated model of step 204. When an operator is unfamiliar with an assembly position, the visual hint can be turned on; once familiar with the process, the operator can turn it off and review.
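As referenced above, the button wiring can also be done in code rather than in the Inspector. This sketch assumes MRTK 2.x, where each menu button carries an Interactable component whose OnClick event fires on press or voice trigger; the referenced scripts are the sketches shown earlier in this section.

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Sketch of wiring two of the 4x1 menu buttons in code.
public class ButtonMenuWiring : MonoBehaviour
{
    public Interactable explodeButton;
    public Interactable hintButton;
    public BreakdownViewController breakdown;
    public PartAssemblyHandler assemblyHandler;

    private void Start()
    {
        explodeButton.OnClick.AddListener(breakdown.ToggleView);     // explode / restore toggle
        hintButton.OnClick.AddListener(assemblyHandler.ToggleHint);  // show / hide bright hints
    }
}
```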
Step 402: set up voice interaction. The voice input trigger function of the MRTK is invoked to convert the functions of all buttons of step 401 into voice triggers. The implementation process is as follows: first ensure that the development scene of step 101 is set up correctly; then find the Input option in the MRTK configuration settings of the Unity3D project, open the Speech sub-item list, and click the "Add a new Speech Command" button to add four voice commands corresponding to the four buttons of step 401. After the setting is finished, add an empty object named "SpeechInputHandler_Global" to the assembly scene, mount the SpeechInputHandler script provided by the MRTK toolkit on it, and click "Add Component" to add four voice trigger events in the SpeechInputHandler script; these must match the voice commands set in the Speech list of the MRTK configuration. The trigger function of each voice command event is set to the function called by the corresponding button click of step 401.
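Alternatively to wiring events on the SpeechInputHandler component, a script can listen for the registered keywords itself. This minimal MRTK 2.x sketch assumes the keyword string "explode" matches a command added to the Speech command list of the input profile.

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Sketch of handling a registered voice command in code.
public class VoiceCommandHandler : MonoBehaviour, IMixedRealitySpeechHandler
{
    public BreakdownViewController breakdown;

    private void OnEnable()
    {
        // Register globally so the keyword is heard without focus.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    private void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        if (eventData.Command.Keyword.ToLower() == "explode")
        {
            breakdown.ToggleView();
        }
    }
}
```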
Step 403: set up gaze interaction, which pops up a text box displaying basic information of a model when the user gazes at that assembly model. When developing augmented reality applications with the Unity3D engine, the user's head position and direction are represented by the scene camera. The RaycastHit class of the Unity3D engine, i.e. the ray detection function, is used: when the ray emitted by the camera has stayed on a collider for more than one second, the text box pops up automatically, and when the ray leaves the collider, the text box disappears automatically. A box collider must be mounted on the model for this function.
The implementation of gaze interaction is as follows (a code sketch follows this list):
1. Mount a box collider on each model requiring gaze interaction, and create a ToolTipSpawner script mounted on the same model. The script stores the text information in advance and keeps it hidden, displaying it when the gaze event is triggered.
2. Using the Ray class of the Unity3D engine, instantiate a ray object; when the scene starts running, a ray is emitted from the camera with its direction set to 0 degrees, i.e. directly in front of the head.
3. Define a RaycastHit variable to hold information about the object hit by the ray.
4. If the ray is blocked by a model carrying a box collider, the RaycastHit variable stores the hit information, including the position p of the hit point, and the script accumulates the gaze time t on the model.
5. Judge whether the gaze time t is greater than or equal to 1 s; if so, trigger the event that displays the text; if the gaze time is too short, the text is not displayed.
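A minimal sketch of this gaze logic. Note that Unity's RaycastHit carries only hit geometry, so the dwell time t is accumulated by the script itself; the toolTip field is a hypothetical pre-filled, hidden text object.

```csharp
using UnityEngine;

// Minimal sketch of the gaze interaction described above.
public class ToolTipSpawner : MonoBehaviour
{
    public Camera sceneCamera;
    public GameObject toolTip;   // text box with name and size, hidden by default
    private float gazeTime;      // accumulated gaze (collision) time t

    private void Update()
    {
        // Ray from the camera at 0 degrees, i.e. directly in front of the head.
        Ray ray = new Ray(sceneCamera.transform.position, sceneCamera.transform.forward);

        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider.gameObject == gameObject)
        {
            gazeTime += Time.deltaTime;
            if (gazeTime >= 1f) toolTip.SetActive(true);   // show after one second
        }
        else
        {
            gazeTime = 0f;
            toolTip.SetActive(false);   // hide when the ray leaves the collider
        }
    }
}
```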
After the assembly scene is designed, open the "Build Settings" window and click the "Build" button to build the project. After the build finishes, open the .sln file in the output folder and connect the HoloLens 2 glasses to the computer with a USB cable. Configure the Visual Studio solution for the HoloLens by selecting the Release configuration, the ARM64 architecture and Device as the target; then select "Start Without Debugging" in Visual Studio 2019 and click it to deploy the Unity3D project to the HoloLens 2 glasses. After deployment, launch the application in the HoloLens 2 glasses; at startup it displays the basic scene built in step 205, including the integrated model, the model to be assembled and the 4x1 button menu of step 401. The user can assemble the model through gesture interaction and the bright-color hints, realize the different functions through voice commands, and obtain model information through gaze interaction.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that modifications and variations can be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as falling within the scope of the invention.

Claims (7)

1. An augmented reality method based on HoloLens 2 holographic glasses, characterized by comprising:
constructing a 3D model to obtain an original form model, an exploded form model and an intelligent prompt form model;
integrating the original form model, the exploded form model and the intelligent prompt form model into an integrated model;
the user wears the HoloLens 2 holographic glasses;
the user scales the integrated model: pressing the enable button of the 4x1 button menu, or issuing the voice command "enable", starts the BoundingBox script so that a bounding box appears around the integrated model; the user grabs the four corners of the bounding box and drags outwards or inwards with gestures to zoom;
when scaling is finished, pressing the disable button of the 4x1 button menu, or issuing the voice command "disable", closes the BoundingBox script and hides the bounding box around the integrated model;
the user explodes the original form model into the exploded form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode", triggers the ToggleView function in the BreakdownViewController script, and each assembly module of the original form model automatically moves to the position of the corresponding assembly module of the exploded form model;
the user restores the original form model: pressing the explode button of the 4x1 button menu, or issuing the voice command "explode" again, triggers the ToggleView function in the BreakdownViewController script, and each assembly module automatically moves back from its exploded position to its original position;
when the PartAssemblyHandler script detects that the distance between an assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm, a bonding event is triggered; the bonding event assigns the world coordinates of the bright-material assembly module to the world coordinates of the grabbed assembly module, so that the two modules coincide;
when the DirectionHandler script detects that the ray emitted by the camera does not intersect the box collider, or the user clicks the intelligent prompt button, the user is judged to have lost sight of the model; the SolverHandler script obtains the rotation direction of the head, sets the arrow to point opposite to the head rotation, and activates the display of the arrow prefab.
2. The augmented reality method based on HoloLens 2 holographic glasses according to claim 1, wherein constructing the 3D model comprises:
setting up an environment for developing the HoloLens 2 holographic glasses in a Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project;
classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms;
integrating the 3D models of different forms and placing them into an assembly scene;
designing scripts to control the behavior of each model component;
setting different input instructions to manipulate the 3D model.
3. The augmented reality method based on HoloLens 2 holographic glasses according to claim 1, wherein setting up an environment for developing the HoloLens 2 holographic glasses in the Unity3D project, converting the 3D model resources into a format supported by the Unity3D engine, and importing them into the Unity3D project comprises:
step 101: creating a new Unity3D project and a new scene as the assembly scene; installing the UWP platform support for developing the HoloLens 2 holographic glasses in the Unity3D project; downloading the MRTK toolkit from the Microsoft repository on GitHub and importing it into the Assemble Project;
switching the development platform to UWP in the Assemble Project settings, and saving the assembly scene;
step 102: converting the 3D model resources to be imported into the Unity3D project into any format supported by the Unity3D engine, and adding the 3D model resources in the resource manager of the Unity3D project.
4. The augmented reality method based on HoloLens 2 holographic glasses according to claim 1, wherein classifying the model components of the 3D model resources in the Unity3D editor and designing 3D models of different forms comprises:
step 201: double-clicking the imported 3D model resources in the resource manager of the Unity3D project to open them, and classifying them into assembly modules;
to ensure that the world coordinate values of the original form model are not disturbed during classification, creating an empty object as a parent object, making each classified assembly module a child of the parent object, and naming the parent object after its class;
after classification is completed, packaging each assembly module with its parent and child objects and saving the result as the original form model;
copying the original form model twice and saving the copies in the resource manager;
step 202: opening the first copy of the original form model in the resource manager and naming it the exploded form model;
dragging each classified parent object to its designated position in the editor window, or selecting each parent object and setting the value of the Position variable of the Transform component shown in its Inspector panel;
step 203: opening the second copy of the original form model in the resource manager and naming it the intelligent prompt form model;
determining all assembly modules of the intelligent prompt form model that can be assembled, and copying and saving these modules separately;
for the separately saved assembly modules, replacing their materials with a material whose brightness is above a set brightness threshold.
5. The augmented reality method based on HoloLens 2 holographic glasses according to claim 4, wherein integrating the 3D models of different forms and placing them into the assembly scene comprises:
step 204: creating a new empty prefab in the resource manager and opening it for editing; dragging the original form model, the exploded form model and the intelligent prompt form model into the Hierarchy panel, and setting the Position variables of the Transform components of all assembly modules of the three models to be consistent;
hiding all assembly modules of the original form model, hiding all assembly modules of the exploded form model, and displaying all assembly modules of the intelligent prompt form model;
saving the original form model, the exploded form model and the intelligent prompt form model together as the integrated model;
step 205: placing a newly created empty GameObject into the assembly scene saved in step 101;
adding the integrated model from the resource manager to the assembly scene and setting the empty GameObject as its parent; adding the assembly modules copied in step 203 to the assembly scene and likewise setting the empty GameObject as their parent.
6. The augmented reality method based on HoloLens 2 holographic glasses according to claim 1, wherein designing the scripts to control the behavior of each model component comprises:
step 301: using the ObjectManipulator script of the MRTK toolkit to implement movement and rotation of the original form model, the exploded form model and the intelligent prompt form model, and using the BoundingBox script of the MRTK toolkit to implement their scaling;
step 302: creating a BreakdownViewController script and writing a ToggleView function in it; setting variables to acquire the world coordinates of each assembly module of the original form model and of the exploded form model; setting a Boolean variable IsInOriginalPos so that the ToggleView function controls both the explosion and the restoration of the assembly modules of the exploded form model: when IsInOriginalPos is true, ToggleView explodes the assembly modules and sets IsInOriginalPos to false; when IsInOriginalPos is false, ToggleView restores the assembly modules and sets IsInOriginalPos to true; mounting the BreakdownViewController script on the integrated model;
step 303: creating a PartAssemblyHandler script which acquires the world coordinates of the bright-material assembly modules of the intelligent prompt form model and of the assembly modules of the exploded form model, and triggers a bonding event when the distance between the assembly module of the integrated model grabbed by the user and the corresponding bright-material assembly module of the intelligent prompt form model is greater than 0.001 cm and less than 0.1 cm;
writing a ToggleHint function in the PartAssemblyHandler script for toggling the display and hiding of the bright-material assembly modules;
step 304: creating a DirectionHandler script and mounting it, together with the SolverHandler script of the MRTK toolkit, on the scene camera; mounting a box collider on the integrated model and setting the tracked target type to the head; making a bright arrow prefab, hiding it and placing it in the assembly scene; the DirectionHandler script performs ray detection;
when the ray does not intersect the box collider, the SolverHandler script solves the direction and activates the bright arrow to point toward the integrated model in the scene.
7. The augmented reality method based on HoloLens 2 holographic glasses according to claim 6, wherein setting different input instructions to manipulate the 3D model comprises:
step 401: on the basis of step 301, adding buttons that enhance the interaction between the user and the scene, including an enable button for starting the BoundingBox script, a disable button for closing the BoundingBox script, an explode button that triggers the BreakdownViewController script to explode the original form model into the exploded form model and to restore it, a hint button for triggering the ToggleHint function of the PartAssemblyHandler script, and an intelligent prompt button for triggering the DirectionHandler and SolverHandler scripts;
step 402: calling the voice input trigger function of the MRTK toolkit and making the buttons of step 401 voice-triggered;
step 403: creating a ToolTipSpawner script and mounting it on each assembly module under the GameObject of step 205; the ToolTipSpawner script inherits the BaseFocusHandler class and the IMixedRealityInputHandler interface of the MRTK toolkit, which enable corresponding text information, including name and size, to be displayed automatically when the user gazes at a model;
generating the scene as an .sln file and deploying it to the HoloLens 2 holographic glasses.
CN202110666042.9A 2021-06-16 2021-06-16 Augmented reality method based on HoloLens 2 holographic glasses Active CN113610984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110666042.9A CN113610984B (en) 2021-06-16 2021-06-16 Augmented reality method based on HoloLens 2 holographic glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110666042.9A CN113610984B (en) 2021-06-16 2021-06-16 Augmented reality method based on HoloLens 2 holographic glasses

Publications (2)

Publication Number Publication Date
CN113610984A CN113610984A (en) 2021-11-05
CN113610984B true CN113610984B (en) 2023-06-16

Family

ID=78303518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110666042.9A Active CN113610984B (en) 2021-06-16 2021-06-16 Augmented reality method based on HoloLens 2 holographic glasses

Country Status (1)

Country Link
CN (1) CN113610984B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116459009A (en) * 2023-05-15 2023-07-21 德智鸿(上海)机器人有限责任公司 Semi-automatic registration method and device for augmented reality navigation system
CN117132624B (en) * 2023-10-27 2024-01-30 济南作为科技有限公司 Method, device, equipment and storage medium for detecting occlusion of following camera

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846442A (en) * 2017-03-06 2017-06-13 西安电子科技大学 Three-dimensional crowd's scene generating method based on Unity3D
US10732721B1 (en) * 2015-02-28 2020-08-04 sigmund lindsay clements Mixed reality glasses used to operate a device touch freely

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019001360A1 (en) * 2017-06-29 2019-01-03 华南理工大学 Human-machine interaction method based on visual stimulations
US20190253700A1 (en) * 2018-02-15 2019-08-15 Tobii Ab Systems and methods for calibrating image sensors in wearable apparatuses
US11250641B2 (en) * 2019-02-08 2022-02-15 Dassault Systemes Solidworks Corporation System and methods for mating virtual objects to real-world environments

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10732721B1 (en) * 2015-02-28 2020-08-04 sigmund lindsay clements Mixed reality glasses used to operate a device touch freely
CN106846442A (en) * 2017-03-06 2017-06-13 西安电子科技大学 Three-dimensional crowd's scene generating method based on Unity3D

Also Published As

Publication number Publication date
CN113610984A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
US10534605B2 (en) Application system having a gaming engine that enables execution of a declarative language
US9508179B2 (en) Flexible 3-D character rigging development architecture
König et al. Interactive design of multimodal user interfaces: reducing technical and visual complexity
US11425220B2 (en) Methods, systems, and computer program products for implementing cross-platform mixed-reality applications with a scripting framework
Shaer et al. A specification paradigm for the design and implementation of tangible user interfaces
KR20120045744A (en) An apparatus and method for authoring experience-based learning content
CN113610984B (en) Augmented reality method based on HoloLens 2 holographic glasses
CN107861714B (en) Development method and system of automobile display application based on Intel RealSense
CN112711458B (en) Method and device for displaying prop resources in virtual scene
US9508178B2 (en) Flexible 3-D character rigging blocks with interface obligations
Güler et al. Developing an CNC lathe augmented reality application for industrial maintanance training
Dörner et al. Content creation and authoring challenges for virtual environments: from user interfaces to autonomous virtual characters
US11625900B2 (en) Broker for instancing
Ledermann An authoring framework for augmented reality presentations
Deshayes et al. Statechart modelling of interactive gesture-based applications
Oliveira et al. Creation and visualization of context aware augmented reality interfaces
Dontschewa et al. Using motion capturing sensor systems for natural user interface
Green et al. The grappl 3d interaction technique library
Levas et al. An architecture and framework for steerable interface systems
Feuerstack et al. Model-Based Design of Interactions That can Bridge Realities-The Augmented" Drag-and-Drop"
Du et al. Study on Virtual Assembly for Satellite Based on Augmented Reality
CN114931746B (en) Interaction method, device and medium for 3D game based on pen type and touch screen interaction
Lu et al. Interactive Augmented Reality Application Design based on Mobile Terminal
Manuri et al. Storytelling in the Metaverse: From Desktop to Immersive Virtual Reality Storyboarding
Hidayat BUILDING A VIRTUAL HAND FOR SIMULATION

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant