CN112767766A - Augmented reality interface training method, device, equipment and storage medium

Info

Publication number
CN112767766A
Authority
CN
China
Prior art keywords
option
target
training
user
selection operation
Legal status
Pending
Application number
CN202110093991.2A
Other languages
Chinese (zh)
Inventor
张二阳
李志帅
贺利利
刘帅
张冶
辛青青
Current Assignee
Zhengzhou J&T Hi Tech Co Ltd
Original Assignee
Zhengzhou J&T Hi Tech Co Ltd
Application filed by Zhengzhou J&T Hi Tech Co Ltd
Priority to CN202110093991.2A
Publication of CN112767766A

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an augmented reality interface training method, device, equipment and storage medium, belonging to the technical field of augmented reality. The method comprises the following steps: in response to a user's gesture selection operation on a training option on the virtual interface of an AR image of an augmented reality device, displaying a target training interface on the virtual interface, the training options including at least: a structure cognition option, an autonomous drill option, a teaching training option, and a model setting option, the target training interface including: a target operation area and a structure display area, and the AR image including a real image of the scene where the user is currently located and the virtual interface; and in response to the user's gesture selection operation on the target operation area, controlling a target control in the structure display area to execute the corresponding training action. The method can improve the training efficiency of electric service operation and maintenance personnel.

Description

Augmented reality interface training method, device, equipment and storage medium
Technical Field
The application relates to the technical field of augmented reality, in particular to an augmented reality interface training method, device, equipment and storage medium.
Background
Electric service operation and maintenance personnel are workers who maintain and troubleshoot intelligent train control systems. To familiarize them with the equipment workpieces frequently encountered in their work, such as bogies and automatic couplers, they are usually trained so that they understand the working principles of these workpieces more accurately.
In the prior art, electric service operation and maintenance personnel mainly receive theoretical knowledge training, or are trained through multimedia and similar means. However, these training modes do not allow direct interaction between the personnel and the equipment workpieces, so their hands-on ability with the workpieces cannot be improved, and training efficiency is relatively low.
Disclosure of Invention
The application aims to provide an augmented reality interface training method, device, equipment and storage medium that can improve the realism of training for electric service operation and maintenance personnel and increase training efficiency.
The embodiments of the application are realized as follows:
In one aspect of the embodiments of the present application, an augmented reality training method is provided. The method is applied to augmented reality equipment and includes:
in response to a user's gesture selection operation on a training option on the virtual interface of an AR image of an augmented reality device, displaying a target training interface on the virtual interface, the training options including at least: a structure cognition option, an autonomous drill option, a teaching training option, and a model setting option, the target training interface including: a target operation area and a structure display area, and the AR image including a real image of the scene where the user is currently located and the virtual interface;
and in response to the user's gesture selection operation on the target operation area, controlling a target control in the structure display area to execute the corresponding training action.
Optionally, displaying a target training interface on the virtual interface in response to a user's gesture selection operation on a training option on the AR image of the augmented reality device includes:
in response to the user's gesture selection operation on the structure cognition option, displaying on the virtual interface a first target training interface corresponding to the structure cognition option, where the target operation area of the first target training interface includes a plurality of training sub-options, the training sub-options including at least one of: a function introduction option, an exploded view option, a component decomposition demonstration option, a component assembly demonstration option, and a working principle option;
and controlling a target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area includes:
in response to the user's selection operation on a target sub-option, displaying, in the target operation area, the second-level sub-options or text content corresponding to the target sub-option;
and in response to the user's selection operation on a second-level sub-option, controlling the target control in the structure display area to execute the training action corresponding to that second-level sub-option.
Optionally, displaying the second-level sub-options or text content corresponding to the target sub-option in the target operation area in response to the user's selection operation on the target sub-option includes:
in response to the user's selection operation on the exploded view option, displaying, in the target operation area, the second-level sub-options corresponding to the exploded view option, which include: an expand option and an aggregate option;
and controlling the target control in the structure display area to execute the training action corresponding to the second-level sub-option in response to the user's selection operation on the second-level sub-option includes:
in response to the user's selection operation on the expand option or the aggregate option, controlling the target control in the structure display area to execute the expanding or aggregating training action.
Optionally, displaying the second-level sub-options or text content corresponding to the target sub-option in the target operation area in response to the user's selection operation on the target sub-option includes:
in response to the user's selection operation on the component decomposition demonstration option or the component assembly demonstration option, displaying, in the target operation area, the corresponding second-level sub-options, which include: a start option, a pause option, a stop option, a fast forward option, and a fast rewind option;
and controlling the target control in the structure display area to execute the training action corresponding to the second-level sub-option in response to the user's selection operation on the second-level sub-option includes:
in response to the user's selection operation on a second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option, controlling the target control in the structure display area to execute the start, pause, stop, fast forward, or fast rewind training action.
Optionally, displaying a target training interface on the virtual interface in response to a user's gesture selection operation on a training option on the virtual interface of the AR image of the augmented reality device includes:
in response to the user's gesture selection operation on the autonomous drill option, displaying a second target training interface corresponding to the autonomous drill option, where the target operation area of the second target training interface includes: a component decomposition execution option and a component assembly execution option;
and controlling a target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area includes:
in response to the user's selection operation on the target operation area of the second target training interface, displaying the corresponding structural state of the target control;
and in response to the user's execution operation on the structural state of the target control, controlling the structural state of the target control to switch to the target state.
Optionally, displaying the corresponding structural state of the target control in response to the user's selection operation on the target operation area of the second target training interface includes:
in response to the user's selection operation on the component decomposition execution option, displaying the first structural state of the target control corresponding to the component decomposition execution option;
and controlling the structural state of the target control to switch to the target state in response to the user's execution operation on the structural state of the target control includes:
in response to the user's execution operation on the first structural state of the target control, controlling the target control to decompose from the first structural state into the second structural state.
Optionally, displaying the corresponding structural state of the target control in response to the user's selection operation on the target operation area of the second target training interface includes:
in response to the user's selection operation on the component assembly execution option, displaying the second structural state of the target control corresponding to the component assembly execution option;
and controlling the structural state of the target control to switch to the target state in response to the user's execution operation on the structural state of the target control includes:
in response to the user's execution operation on the second structural state of the target control, controlling the target control to assemble from the second structural state into the first structural state.
In another aspect of the embodiments of the present application, an augmented reality training apparatus is provided. The apparatus is applied to augmented reality devices and includes a first response module and a second response module:
a first response module, configured to display a target training interface on the virtual interface of an AR image of an augmented reality device in response to a user's gesture selection operation on a training option on the virtual interface, where the training options include at least: a structure cognition option, an autonomous drill option, a teaching training option, and a model setting option, the target training interface includes: a target operation area and a structure display area, and the AR image includes a real image of the scene where the user is currently located and the virtual interface;
and a second response module, configured to control a target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area.
Optionally, the first response module is specifically configured to, in response to the user's gesture selection operation on the structure cognition option, display on the virtual interface a first target training interface corresponding to the structure cognition option, where the target operation area of the first target training interface includes a plurality of training sub-options, the training sub-options including at least one of: a function introduction option, an exploded view option, a component decomposition demonstration option, a component assembly demonstration option, and a working principle option. The second response module is specifically configured to, in response to the user's selection operation on a target sub-option, display in the target operation area the second-level sub-options or text content corresponding to the target sub-option; and, in response to the user's selection operation on a second-level sub-option, control the target control in the structure display area to execute the training action corresponding to that second-level sub-option.
Optionally, the second response module is specifically configured to, in response to the user's selection operation on the exploded view option, display in the target operation area the second-level sub-options corresponding to the exploded view option, which include: an expand option and an aggregate option; and, in response to the user's selection operation on the expand option or the aggregate option, control the target control in the structure display area to execute the expanding or aggregating training action.
Optionally, the second response module is specifically configured to, in response to the user's selection operation on the component decomposition demonstration option or the component assembly demonstration option, display in the target operation area the corresponding second-level sub-options, which include: a start option, a pause option, a stop option, a fast forward option, and a fast rewind option; and, in response to the user's selection operation on a second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option, control the target control in the structure display area to execute the start, pause, stop, fast forward, or fast rewind training action.
Optionally, the first response module is specifically configured to, in response to the user's gesture selection operation on the autonomous drill option, display a second target training interface corresponding to the autonomous drill option, where the target operation area of the second target training interface includes: a component decomposition execution option and a component assembly execution option. The second response module is specifically configured to, in response to the user's selection operation on the target operation area of the second target training interface, display the corresponding structural state of the target control; and, in response to the user's execution operation on the structural state of the target control, control the structural state of the target control to switch to the target state.
Optionally, the second response module is specifically configured to, in response to the user's selection operation on the component decomposition execution option, display the first structural state of the target control corresponding to the component decomposition execution option; and, in response to the user's execution operation on the first structural state of the target control, control the target control to decompose from the first structural state into the second structural state.
Optionally, the second response module is specifically configured to, in response to the user's selection operation on the component assembly execution option, display the second structural state of the target control corresponding to the component assembly execution option; and, in response to the user's execution operation on the second structural state of the target control, control the target control to assemble from the second structural state into the first structural state.
In another aspect of the embodiments of the present application, a computer device is provided, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor, when executing the computer program, implements the steps of the augmented reality training method described above.
In another aspect of the embodiments of the present application, a computer storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the augmented reality training method described above.
The beneficial effects of the embodiments of the application include:
In the augmented reality interface training method, device, equipment and storage medium provided by the embodiments of the application, a target training interface can be displayed on the virtual interface in response to a user's gesture selection operation on a training option on the virtual interface of the AR image of the augmented reality device, and a target control in the structure display area can be controlled to execute the corresponding training action in response to the user's gesture selection operation on the target operation area, where the target training interface includes a target operation area and a structure display area. By selecting the relevant controls in the target operation area, the user controls the target control in the structure display area to carry out the corresponding teaching demonstrations and training processes. Through these demonstrations, electric service operation and maintenance personnel come to understand more clearly the working principle of the physical equipment workpiece that the target control represents, which improves their training efficiency.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic display diagram of an AR image provided in an embodiment of the present application;
Fig. 2 is a first flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 3 is a schematic display diagram of the first target training interface provided in an embodiment of the present application;
Fig. 4 is a second flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 5 is a schematic display diagram of the first target training interface for the exploded view sub-option provided in an embodiment of the present application;
Fig. 6 is a third flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 7 is a schematic display diagram of the first target training interface for the component decomposition demonstration sub-option provided in an embodiment of the present application;
Fig. 8 is a fourth flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 9 is a schematic display diagram of the second target training interface provided in an embodiment of the present application;
Fig. 10 is a fifth flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 11 is a sixth flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 12 is a seventh flowchart of the augmented reality training method provided in an embodiment of the present application;
Fig. 13 is a schematic structural diagram of the augmented reality training apparatus provided in an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the accompanying drawings, is not intended to limit the scope of the claimed application but merely represents selected embodiments of it. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
It should be noted that like reference numbers and letters refer to like items in the following figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
In the description of the present application, the terms "first", "second", "third", and the like are used merely to distinguish between descriptions and do not indicate or imply relative importance.
It should be noted that the user in the embodiments of the present application is the electric service operation and maintenance personnel mentioned in the background above.
The specific display contents of the virtual interface in the AR image provided in the embodiments of the present application are explained below.
Fig. 1 is a schematic display diagram of an AR image provided in an embodiment of the present application. Referring to Fig. 1, the AR image includes a real image of the user's current scene and a virtual interface. The virtual interface contains a plurality of training options, including at least: a structure cognition option, an autonomous drill option, a teaching training option, and a model setting option.
Optionally, the augmented reality training method provided by the present application is applied to an augmented reality (AR) device, for example an AR helmet or AR glasses. The AR device can display an AR image comprising a real image of the scene where the user is currently located and a virtual interface. The real image is the current scene acquired by a camera on the AR device, for example a training room for electric service operation and maintenance personnel or a real train working scene; the virtual interface is an electronic interface displayed over the real image, on which a plurality of training options can be arranged, typically rendered as virtual keys. On the AR image, the virtual interface is thus displayed on top of the real image.
Optionally, the virtual interface may be placed at any position on the real image, and the user may adjust it to suit their field of view. As shown in Fig. 1, the virtual interface 120 is arranged on the real image 110 and may have a certain transparency to preserve the sense of augmented reality. The virtual interface 120 can carry a plurality of training options, for example: a structure cognition option, an autonomous drill option, a teaching training option, a model setting option, and so on.
Optionally, the AR device may capture the gesture positions of the user wearing it through a camera or other video capture apparatus, and the user may click a training option in the virtual interface with a hand to make the corresponding selection.
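For illustration only, the selection step can be sketched as hit-testing a tracked fingertip position against the bounding boxes of the virtual keys. The patent does not specify an implementation; all class, field, and coordinate conventions below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VirtualKey:
    """A training option rendered as a virtual key on the virtual interface."""
    label: str
    x: float       # left edge in normalized interface coordinates
    y: float       # top edge
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def hit_test(keys, fingertip):
    """Return the key under the detected fingertip position, if any."""
    px, py = fingertip
    for key in keys:
        if key.contains(px, py):
            return key
    return None

# The fingertip position would come from the headset's camera-based hand tracker.
keys = [
    VirtualKey("structure cognition", 0.05, 0.10, 0.20, 0.08),
    VirtualKey("autonomous drill",    0.05, 0.20, 0.20, 0.08),
    VirtualKey("teaching training",   0.05, 0.30, 0.20, 0.08),
    VirtualKey("model setting",       0.05, 0.40, 0.20, 0.08),
]
selected = hit_test(keys, fingertip=(0.12, 0.24))
print(selected.label if selected else "no key selected")  # -> autonomous drill
```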
The specific implementation of the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 2 is a first flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 2, the method includes:
s210: in response to a user gesture selection operation on a training option on a virtual interface of an AR image of an augmented reality device, a target training interface is displayed on the virtual interface.
The target training interface comprises a target operation area and a structure display area.
After the user puts on the augmented reality equipment and begins training, the equipment generates a virtual interface while capturing a real image of the scene where the user is located, then overlays the virtual interface on the real image to form and display the AR image that the user sees.
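The overlay step itself amounts to alpha-blending the rendered interface onto the camera frame. A minimal NumPy sketch, assuming per-frame RGBA rendering of the interface (the real device would do this in its rendering pipeline):

```python
import numpy as np

def compose_ar_frame(real_frame: np.ndarray,
                     interface_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a semi-transparent virtual interface over a camera frame.

    real_frame:     H x W x 3 uint8 image from the headset camera.
    interface_rgba: H x W x 4 uint8 rendering of the virtual interface,
                    with alpha = 0 wherever nothing is drawn.
    """
    rgb = interface_rgba[..., :3].astype(np.float32)
    alpha = interface_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * real_frame.astype(np.float32)
    return blended.astype(np.uint8)

# A fully transparent interface leaves the camera frame unchanged.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
overlay = np.zeros((480, 640, 4), dtype=np.uint8)
assert np.array_equal(compose_ar_frame(frame, overlay), frame)
```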
Optionally, the user may pick out one of the training options on the virtual interface through a gesture. The augmented reality device obtains the gesture position of the user wearing it through a camera or other video capture apparatus to determine the user's selection, and then, in response to the gesture selection operation on the training option, displays a target training interface on the virtual interface. The target training interface is the interface displayed after selecting the structure cognition option, the autonomous drill option, the teaching training option, or the model setting option.
For example, the user can select the structure cognition option through a gesture, and the target training interface corresponding to the structure cognition option is then displayed in response to that selection.
Optionally, the target operation area in the target training interface is the interface area in which the user can perform selection or trigger operations through gestures; the structure display area is the interface area that displays the relevant target control in response to the user's selection or trigger operation.
S220: and responding to the gesture selection operation of the user for the target operation area, and controlling the target control in the structure display area to execute the corresponding training action.
Optionally, after the target training interface is displayed, the user may select the relevant options in the target operation area through gesture selection operations in a manner similar to S210, thereby controlling the target control in the structure display area to execute the corresponding training action. The target control is the virtual counterpart of a specific equipment workpiece that the electric service operation and maintenance personnel learn about during training, for example a bogie or an automatic coupler. The corresponding training action is the action performed according to the specific option selected, for example structural display, structural decomposition, or structural combination, which is not limited here.
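One plausible way to tie selected options to training actions is a dispatch table mapping option identifiers to methods of the target control. This is an illustrative sketch; the option names follow the patent, while the class, method names, and print statements are assumptions.

```python
class TargetControl:
    """Virtual model of an equipment workpiece (e.g. a bogie or an automatic coupler)."""
    def explode(self):    print("expanding exploded view")
    def aggregate(self):  print("re-aggregating parts")
    def start_demo(self): print("starting demonstration")
    def stop_demo(self):  print("stopping demonstration")

# Dispatch table: option id in the target operation area -> training action.
ACTIONS = {
    "expand":    TargetControl.explode,
    "aggregate": TargetControl.aggregate,
    "start":     TargetControl.start_demo,
    "stop":      TargetControl.stop_demo,
}

def on_option_selected(control: TargetControl, option_id: str) -> None:
    """Run the training action bound to the selected option, if any."""
    action = ACTIONS.get(option_id)
    if action is not None:
        action(control)

on_option_selected(TargetControl(), "expand")  # -> expanding exploded view
```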
In the augmented reality interface training method provided by the embodiments of the application, a target training interface can be displayed on the virtual interface in response to a user's gesture selection operation on a training option on the virtual interface of the AR image of the augmented reality device, and a target control in the structure display area can be controlled to execute the corresponding training action in response to the user's gesture selection operation on the target operation area, where the target training interface includes a target operation area and a structure display area. By selecting the relevant controls in the target operation area, the user controls the target control in the structure display area to carry out the corresponding teaching demonstrations and training processes, through which electric service operation and maintenance personnel come to understand more clearly the working principle of the physical equipment workpiece that the target control represents; this improves their training efficiency.
The specific contents presented in the first target training interface provided in the embodiments of the present application are explained below.
Fig. 3 is a schematic display diagram of the first target training interface provided in an embodiment of the present application. Referring to Fig. 3, the target operation area of the first target training interface includes a plurality of training sub-options, which include at least one of: a function introduction option, an exploded view option, a component decomposition demonstration option, a component assembly demonstration option, and a working principle option.
Optionally, the first target training interface is the interface presented after the user selects the structure cognition option in the virtual interface.
The function introduction option introduces the functions of the physical equipment workpiece corresponding to the target control; the exploded view option shows the exploded structure of that workpiece; the component decomposition demonstration option and the component assembly demonstration option respectively show the decomposition process and the assembly process of the workpiece; and the working principle option introduces the workpiece's specific working principle.
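The two-level option hierarchy of this interface can be modeled as a small tree in which each training sub-option carries either text content or a list of second-level sub-options, mirroring S420 below. A minimal sketch; the dictionary layout and the placeholder text strings are assumptions, only the option names come from the patent.

```python
# Illustrative data model for the first target training interface.
FIRST_INTERFACE_OPTIONS = {
    "function introduction": {"text": "Describes what the workpiece does."},
    "working principle":     {"text": "Explains how the workpiece operates."},
    "exploded view":         {"sub_options": ["expand", "aggregate"]},
    "component decomposition demonstration": {
        "sub_options": ["start", "pause", "stop", "fast forward", "fast rewind"]},
    "component assembly demonstration": {
        "sub_options": ["start", "pause", "stop", "fast forward", "fast rewind"]},
}

def render_target_operation_area(option: str):
    """Return what the target operation area should show for a sub-option:
    either its text content or its second-level sub-options."""
    entry = FIRST_INTERFACE_OPTIONS[option]
    if "text" in entry:
        return ("text", entry["text"])
    return ("sub_options", entry["sub_options"])

print(render_target_operation_area("exploded view"))
# -> ('sub_options', ['expand', 'aggregate'])
```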
Another specific implementation of the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 4 is a second flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 4, displaying a target training interface on the virtual interface in response to a user's gesture selection operation on a training option on the AR image of the augmented reality device includes:
S410: in response to the user's gesture selection operation on the structure cognition option, displaying on the virtual interface a first target training interface corresponding to the structure cognition option.
Controlling a target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area includes:
S420: in response to the user's selection operation on a target sub-option, displaying in the target operation area the second-level sub-options or text content corresponding to the target sub-option.
Optionally, the user may select one target sub-option from the target operation area through a gesture, the target sub-option being one of the training sub-options. In response to that selection, the AR device displays in the target operation area the second-level sub-options or the text content corresponding to the target sub-option.
For example, if the target sub-option is the function introduction option or the working principle option, the corresponding text content is displayed; if it is the exploded view option, the component decomposition demonstration option, or the component assembly demonstration option, the corresponding second-level sub-options and related text content are displayed. This display scheme is only one possible setting and can be adjusted according to the user's requirements, which is not limited here.
S430: in response to the user's selection operation on a second-level sub-option, controlling the target control in the structure display area to execute the training action corresponding to that second-level sub-option.
Optionally, the user may select the relevant content among the second-level sub-options according to the actual training requirement, thereby controlling the target control in the structure display area to execute the corresponding training action. The specific training actions and their correspondence with the second-level sub-options may be preset by the user and are not limited here.
The specific display contents of the first target training interface for the exploded view sub-option provided in the embodiments of the present application are explained below.
Fig. 5 is a schematic display diagram of the first target training interface for the exploded view sub-option. Referring to Fig. 5, this is the first target training interface when the exploded view sub-option is selected as the target sub-option; in this interface, the second-level sub-options corresponding to the exploded view option include: an expand option and an aggregate option.
The expand option controls the target control to expand into the exploded-view structure; the aggregate option controls it to aggregate again when the exploded-view structure is expanded.
Taking the selection of the expand option in Fig. 5 as an example, after the user selects the expand option, the target control in the structure display area expands into the exploded-view structure, and each part of the target control is displayed separately.
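Geometrically, expanding and aggregating can be reduced to pushing each part away from the assembly centroid and pulling it back. A minimal sketch under that assumption; the patent does not prescribe how the exploded view is computed.

```python
import numpy as np

def exploded_positions(part_centers: np.ndarray, factor: float) -> np.ndarray:
    """Push each part outward from the assembly centroid.

    factor = 0 reproduces the assembled layout; larger values spread the
    parts apart for the exploded view. A real renderer would animate
    factor from 0 to 1 (expand) or from 1 back to 0 (aggregate).
    """
    centroid = part_centers.mean(axis=0)
    return centroid + (1.0 + factor) * (part_centers - centroid)

parts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
print(exploded_positions(parts, factor=0.0))  # assembled: unchanged
print(exploded_positions(parts, factor=1.0))  # exploded: spread outward
```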
A further specific implementation of the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 6 is a third flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 6, displaying the second-level sub-options or text content corresponding to the target sub-option in the target operation area in response to the user's selection operation on the target sub-option includes:
S610: in response to the user's selection operation on the exploded view option, displaying in the target operation area the second-level sub-options corresponding to the exploded view option.
Optionally, after the user selects the exploded view option, the second-level sub-options corresponding to it, namely the expand option and the aggregate option, are displayed in the target operation area.
Controlling the target control in the structure display area to execute the training action corresponding to the second-level sub-option in response to the user's selection operation on the second-level sub-option includes:
S620: in response to the user's selection operation on the expand option or the aggregate option, controlling the target control in the structure display area to execute the expanding or aggregating training action.
Optionally, in the structure display area of the interface, the target control is displayed according to the specific selection. For example, if the user selects the expand option through a gesture, the target control in the structure display area expands into the exploded-view structure; if the user selects the aggregate option, the target control aggregates again.
Optionally, during the demonstration, the name of each part in the target control's exploded view can be displayed.
The specific display contents of the first target training interface for the component decomposition demonstration sub-option provided in the embodiments of the present application are explained below.
Fig. 7 is a schematic display diagram of the first target training interface for the component decomposition demonstration sub-option. Referring to Fig. 7, this is the first target training interface when the component decomposition demonstration sub-option is selected as the target sub-option; the second-level sub-options corresponding to the component decomposition demonstration option include: a start option, a pause option, a stop option, a fast forward option, and a fast rewind option.
Optionally, the second-level sub-options corresponding to the component assembly demonstration sub-option are the same as those corresponding to the component decomposition demonstration sub-option and are not repeated here; Fig. 7 takes the component decomposition demonstration sub-option as an example.
Optionally, before any selection operation is performed, the start, stop, fast forward, and fast rewind options may be displayed on the interface. When the user selects the start option through a gesture, the start option may be replaced with the pause option.
The start option starts the component decomposition demonstration of the target control; the pause option pauses the current demonstration; the stop option stops the component decomposition demonstration; and the fast forward and fast rewind options speed the demonstration up and slow it down, respectively.
Correspondingly, for the component assembly demonstration sub-option, the start option starts the component assembly demonstration of the target control; the pause option pauses the current demonstration; the stop option stops the component assembly demonstration; and the fast forward and fast rewind options speed the demonstration up and slow it down, respectively.
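These five sub-options map naturally onto a small playback state machine over the demonstration animation. A minimal sketch; note that, per the embodiment above, fast rewind slows the presentation down rather than seeking backwards, and the speed ladder chosen here is an assumption.

```python
class DemoPlayback:
    """Playback state for a decomposition or assembly demonstration."""
    SPEEDS = [0.5, 1.0, 2.0, 4.0]  # assumed speed ladder

    def __init__(self):
        self.playing = False
        self.speed_idx = 1   # index into SPEEDS, 1.0x by default
        self.position = 0.0  # seconds into the demonstration

    def start(self):
        self.playing = True

    def pause(self):
        self.playing = False

    def stop(self):
        # Stop also restores the model to its initial state.
        self.playing, self.position, self.speed_idx = False, 0.0, 1

    def fast_forward(self):
        self.speed_idx = min(self.speed_idx + 1, len(self.SPEEDS) - 1)

    def fast_rewind(self):
        # Per the embodiment, rewinding slows the demonstration down.
        self.speed_idx = max(self.speed_idx - 1, 0)

    def tick(self, dt: float):
        """Advance the demonstration by dt seconds of real time."""
        if self.playing:
            self.position += dt * self.SPEEDS[self.speed_idx]

demo = DemoPlayback()
demo.start()
demo.fast_forward()
demo.tick(1.0)
print(demo.position)  # 2.0: one real second advanced the demo by two seconds
```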
A further specific implementation of the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 8 is a fourth flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 8, displaying the second-level sub-options or text content corresponding to the target sub-option in the target operation area in response to the user's selection operation on the target sub-option includes:
S810: in response to the user's selection operation on the component decomposition demonstration option or the component assembly demonstration option, displaying in the target operation area the corresponding second-level sub-options.
Optionally, after the user selects the component decomposition demonstration option or the component assembly demonstration option, the corresponding second-level sub-options, namely the start, pause, stop, fast forward, and fast rewind options, are displayed in the target operation area.
Controlling the target control in the structure display area to execute the training action corresponding to the second-level sub-option in response to the user's selection operation on the second-level sub-option includes:
S820: in response to the user's selection operation on a second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option, controlling the target control in the structure display area to execute the start, pause, stop, fast forward, or fast rewind training action.
Optionally, in the structure display area of the interface, the target control is displayed according to the specific selection. For example, if the user selects the start option through a gesture, the target control begins the component decomposition or component assembly demonstration; selecting the pause option pauses the demonstration; selecting the stop option stops the demonstration and restores the target control to its initial state; selecting the fast forward option speeds the demonstration up; and selecting the fast rewind option slows it down.
The specific contents presented in the second target training interface provided in the embodiments of the present application are explained below.
Fig. 9 is a schematic display diagram of the second target training interface provided in an embodiment of the present application. Referring to Fig. 9, the target operation area of the second target training interface includes: a component decomposition execution option and a component assembly execution option.
Optionally, the second target training interface is the interface presented after the user selects the autonomous drill option in the virtual interface.
The component decomposition execution option is similar to the component decomposition demonstration option in that both show the process of component decomposition, except that the component decomposition execution option proceeds step by step in sequence: each sequential selection operation by the user executes one decomposition step. Correspondingly, the component assembly execution option is similar to the component assembly demonstration option described above, except that assembly also proceeds step by step in sequence: each sequential selection operation by the user executes one assembly step, as sketched below.
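The step-by-step character of the execution options can be sketched as a sequencer that advances one decomposition or assembly step per confirmed user operation. The structure follows the embodiment; the class name and the example coupler part names are hypothetical.

```python
class StepwiseDrill:
    """Autonomous drill: one decomposition or assembly step per user action.

    `steps` names the parts in the order they come off the workpiece.
    """
    def __init__(self, steps):
        self.steps = steps
        self.done = 0  # number of parts currently removed

    def execute_decompose_step(self):
        """One user execution operation removes the next part in sequence."""
        if self.done < len(self.steps):
            print(f"removed: {self.steps[self.done]}")
            self.done += 1

    def execute_assemble_step(self):
        """One user execution operation reattaches the last removed part."""
        if self.done > 0:
            self.done -= 1
            print(f"reattached: {self.steps[self.done]}")

    @property
    def state(self):
        if self.done == 0:
            return "assembled (first structural state)"
        if self.done == len(self.steps):
            return "decomposed (second structural state)"
        return "partially decomposed"

drill = StepwiseDrill(["knuckle", "lock lift assembly", "coupler lock"])
drill.execute_decompose_step()
drill.execute_decompose_step()
print(drill.state)  # -> partially decomposed
```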
A further specific implementation of the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 10 is a fifth flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 10, displaying a target training interface on the virtual interface in response to a user's gesture selection operation on a training option on the virtual interface of the AR image of the augmented reality device includes:
S1010: in response to the user's gesture selection operation on the autonomous drill option, displaying the second target training interface corresponding to the autonomous drill option.
Controlling a target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area includes:
S1020: in response to the user's selection operation on the target operation area of the second target training interface, displaying the corresponding structural state of the target control.
Optionally, the component decomposition execution option and the component assembly execution option each correspond to a structural state of the target control; after the user selects an option through a gesture, the structural state corresponding to that option is displayed in the structure display area.
S1030: in response to the user's execution operation on the structural state of the target control, controlling the structural state of the target control to switch to the target state.
Optionally, after the user has selected the corresponding option, an execution operation option may pop up in the interface; the user can click it on the current virtual interface through a gesture to perform the corresponding execution operation. The AR device, in response to the user's execution operation on the structural state of the target control, controls that structural state to switch to the target state, where the target state and the current structural state are two different states. Specifically, the target control may be switched from the decomposed state to the assembled state or from the assembled state to the decomposed state.
Another specific implementation in the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 11 is a sixth flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 11, displaying the corresponding structural state of the target control in response to the user's selection operation on the target operation area of the second target training interface includes:
S1110: in response to the user's selection operation on the component decomposition execution option, displaying the first structural state of the target control corresponding to the component decomposition execution option.
Optionally, the first structural state is the assembled state of the target control.
Controlling the structural state of the target control to switch to the target state in response to the user's execution operation on that structural state includes:
S1120: in response to the user's execution operation on the first structural state of the target control, controlling the target control to decompose from the first structural state into the second structural state.
Optionally, the second structural state is the decomposed state of the target control. In a specific implementation, the target control goes through multiple decomposition steps from the assembled state to the decomposed state. For an automatic coupler, for example, its parts are decomposed in sequence; each gesture execution operation by the user performs one decomposition step, and after the user performs the operation several times, the target control is fully decomposed from the assembled state into the decomposed state.
A further specific implementation in the augmented reality training method provided in the embodiments of the present application is explained below.
Fig. 12 is a seventh flowchart of the augmented reality training method provided in an embodiment of the present application. Referring to Fig. 12, displaying the corresponding structural state of the target control in response to the user's selection operation on the target operation area of the second target training interface includes:
S1210: in response to the user's selection operation on the component assembly execution option, displaying the second structural state of the target control corresponding to the component assembly execution option.
Optionally, the second structural state is the aforementioned decomposed state.
Controlling the structural state of the target control to switch to the target state in response to the user's execution operation on that structural state includes:
S1220: in response to the user's execution operation on the second structural state of the target control, controlling the target control to assemble from the second structural state into the first structural state.
Optionally, the process of S1210-S1220 is similar to that of S1110-S1120, except that S1210-S1220 is an assembly process while S1110-S1120 is a decomposition process; the details of the assembly process are not repeated here.
Optionally, if the teaching training option is selected, a third target training interface may be displayed. The user can operate the relevant options in the third target training interface through gestures to carry out AR operation practice, AR operation examinations, the completion of related AR operation tasks, and so on, on the target control. Instructors can also set corresponding AR exercise tasks through the third target training interface to provide the user with simulation training.
Optionally, if the model setting option is selected, parameters such as the display position and display size of the target control may be set to better match the current user's habits; other options in the interface may also be set, which is not limited here.
Optionally, in addition to the demonstration processes, examinations for the electric service operation and maintenance personnel, or fault conditions of the target control corresponding to the related electric service equipment, may be displayed through the virtual interface, so that the personnel can perform troubleshooting and similar operations on the target control displayed in the virtual interface through gestures.
Optionally, the AR device may also have a voice prompt function: when relevant text information is displayed through the virtual interface, it can be read aloud through the voice prompt function, making the text easier for the user to follow.
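The voice prompt could be driven by any text-to-speech engine; the patent does not name one. Purely as an illustration, a sketch using the offline pyttsx3 library:

```python
import pyttsx3  # offline TTS library, used here only as an example engine

def speak_interface_text(text: str) -> None:
    """Read the text currently shown on the virtual interface aloud."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak_interface_text("Component decomposition demonstration started.")
```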
The apparatuses, devices, and storage media for executing the augmented reality training method provided by the present application are described below; for their specific implementation and technical effects, refer to the description above, which is not repeated.
Fig. 13 is a schematic structural diagram of the augmented reality training apparatus provided in an embodiment of the present application. Referring to Fig. 13, the apparatus is applied to an augmented reality device and includes: a first response module 100 and a second response module 200;
the first response module 100 is configured to display a target training interface on the virtual interface of an AR image of an augmented reality device in response to a user's gesture selection operation on a training option on the virtual interface, where the training options include at least: a structure cognition option, an autonomous drill option, a teaching training option, and a model setting option, the target training interface includes: a target operation area and a structure display area, and the AR image includes a real image of the scene where the user is currently located and the virtual interface;
and the second response module 200 is configured to control the target control in the structure display area to execute the corresponding training action in response to the user's gesture selection operation on the target operation area.
Optionally, the first response module 100 is specifically configured to, in response to a gesture selection operation of the user for the structure recognition option, display a first target training interface corresponding to the structure recognition option on the virtual interface, where a target operation area of the first target training interface includes: a plurality of training sub-options, the training sub-options including at least one of: a function introduction option, an explosion diagram option, a component decomposition demonstration option, a component assembly demonstration option and a working principle option; the second response module 200 is specifically configured to, in response to a selection operation of a user for a target sub-option, display a second-level sub-option or text content corresponding to the target sub-option in a target operation area; and responding to the selection operation of the user for the second-level sub-option, and controlling the target control in the structure display area to execute the training action corresponding to the second-level sub-option.
Optionally, the second response module 200 is specifically configured to, in response to a selection operation of the user on an explosion diagram option, display a second-level sub-option corresponding to the explosion diagram option in the target operation area, where the second-level sub-option corresponding to the explosion diagram option includes: expanding options and aggregating options; and in response to the selection operation of the user for the expansion option or the aggregation option, controlling a target control in the structure display area to execute the expanded or aggregated training action.
Optionally, the second response module 200 is specifically configured to, in response to a selection operation of the user for the component decomposition demonstration option or the component assembly demonstration option, display a second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option in the target operation area, where the second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option includes: a start option, a pause option, a stop option, a fast forward option, a fast rewind option; and in response to the selection operation of the user for the second-level sub-option corresponding to the part decomposition demonstration option or the part assembly demonstration option, controlling the target control in the structure display area to execute the training actions of starting, pausing, stopping, fast forwarding or fast rewinding.
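The five playback sub-options behave like a small media-control state machine over the demonstration animation; a sketch under assumed names follows:

```python
class DemoPlayback:
    """Sketch of start/pause/stop/fast-forward/fast-rewind control for a
    decomposition or assembly demonstration animation."""

    def __init__(self) -> None:
        self.state = "stopped"
        self.speed = 0.0

    def start(self) -> None:
        self.state, self.speed = "playing", 1.0

    def pause(self) -> None:
        if self.state == "playing":
            self.state, self.speed = "paused", 0.0

    def stop(self) -> None:
        self.state, self.speed = "stopped", 0.0

    def fast_forward(self) -> None:
        self.state, self.speed = "playing", 2.0

    def fast_rewind(self) -> None:
        # Negative speed plays the animation backwards.
        self.state, self.speed = "playing", -2.0

demo = DemoPlayback()
demo.start()
demo.fast_forward()
print(demo.state, demo.speed)  # playing 2.0
```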
Optionally, the first response module 100 is specifically configured to, in response to a gesture selection operation of the user for the autonomous drilling option, display a second target training interface corresponding to the autonomous drilling option, where a target operation area of the second target training interface includes: a component decomposition execution option and a component assembly execution option; the second response module 200 is specifically configured to, in response to a selection operation of the user for the target operation area of the second target training interface, display the corresponding structural state of the target control; and in response to an execution operation of the user for the structural state of the target control, control the structural state of the target control to be switched to the target state.
Optionally, the second response module 200 is specifically configured to, in response to a selection operation of the user for the component decomposition execution option, display a first structural state of the target control corresponding to the component decomposition execution option; and in response to the execution operation of the user for the first structural state of the target control, controlling the target control to be decomposed from the first structural state to obtain a second structural state.
Optionally, the second response module 200 is specifically configured to, in response to a selection operation of the user for the component assembly execution option, display a second structural state of the target control corresponding to the component assembly execution option; and responding to the execution operation of the user aiming at the second structural state of the target control, and controlling the target control to be assembled from the second structural state to obtain the first structural state.
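Taken together, the drill reduces to a two-state machine on the target control: decomposition moves it from the first (assembled) structural state to the second (decomposed) one, and assembly moves it back. A sketch with assumed state names:

```python
class TargetControl:
    """Sketch of the autonomous drill: the target control toggles between
    an assembled first structural state and a decomposed second one."""

    FIRST, SECOND = "first_structural_state", "second_structural_state"

    def __init__(self) -> None:
        self.state = self.FIRST

    def decompose(self) -> None:
        if self.state != self.FIRST:
            raise RuntimeError("can only decompose from the first state")
        self.state = self.SECOND

    def assemble(self) -> None:
        if self.state != self.SECOND:
            raise RuntimeError("can only assemble from the second state")
        self.state = self.FIRST

control = TargetControl()
control.decompose()
print(control.state)  # second_structural_state
control.assemble()
print(control.state)  # first_structural_state
```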
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 14 is a schematic structural diagram of a computer device according to an embodiment of the present application. Referring to fig. 14, the computer device includes a memory 300 and a processor 400, where the memory 300 stores a computer program operable on the processor 400, and the processor 400 implements the steps of the above augmented reality training method when executing the computer program.
In another aspect of the embodiments of the present application, a computer storage medium is further provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the above augmented reality training method are implemented.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and an actual implementation may use another division; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. Such a software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The above description is only a preferred embodiment of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes, and any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in its protection scope.

Claims (10)

1. An augmented reality training method, applied to an augmented reality device, the method comprising:
in response to a gesture selection operation by a user on a training option on a virtual interface of an AR image of an augmented reality device, displaying a target training interface on the virtual interface, the training option comprising at least: a structure recognition option, an autonomous drilling option, a teaching practical training option, and a model setting option, and the target training interface comprising: a structure display area and a target operation area, wherein the AR image comprises a real image of the scene where the user is currently located and the virtual interface;
and responding to the gesture selection operation of the user for the target operation area, and controlling the target control in the structure display area to execute the corresponding training action.
2. The method of claim 1, wherein the displaying a target training interface on the virtual interface in response to a user gesture selection operation on a training option on the virtual interface of an AR image of an augmented reality device comprises:
responding to gesture selection operation of a user for the structure recognition option, and displaying a first target training interface corresponding to the structure recognition option on the virtual interface, wherein a target operation area of the first target training interface comprises: a plurality of training sub-options, the training sub-options including at least one of: a function introduction option, an explosion diagram option, a component decomposition demonstration option, a component assembly demonstration option and a working principle option;
the responding to the gesture selection operation of the user for the target operation area, and controlling the target control in the structure display area to execute the corresponding training action, including:
responding to the selection operation of a user for the target sub-option, and displaying a second-level sub-option or text content corresponding to the target sub-option in the target operation area;
and responding to the selection operation of the user for the second-level sub-option, and controlling a target control in the structure display area to execute a training action corresponding to the second-level sub-option.
3. The method of claim 2, wherein in response to a user selection operation for the target sub-option, displaying a second level sub-option or text content corresponding to the target sub-option in the target operation area comprises:
in response to the selection operation of the user for the explosion diagram option, displaying a second-level sub-option corresponding to the explosion diagram option in the target operation area, wherein the second-level sub-option corresponding to the explosion diagram option comprises: expanding options and aggregating options;
the responding to the selection operation of the user for the second-level sub-option, and controlling a target control in the structure display area to execute a training action corresponding to the second-level sub-option, including:
and controlling a target control in the structure display area to execute the expanded or aggregated training action in response to the selection operation of the user for the expansion option or the aggregation option.
4. The method of claim 2, wherein in response to a user selection operation for the target sub-option, displaying a second level sub-option or text content corresponding to the target sub-option in the target operation area comprises:
in response to the selection operation of the user for the component decomposition demonstration option or the component assembly demonstration option, displaying a second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option in the target operation area, wherein the second-level sub-option corresponding to the component decomposition demonstration option or the component assembly demonstration option comprises: a start option, a pause option, a stop option, a fast forward option, a fast rewind option;
the responding to the selection operation of the user for the second-level sub-option, and controlling a target control in the structure display area to execute a training action corresponding to the second-level sub-option, including:
and in response to the selection operation of the user on a second-level sub-option corresponding to the part decomposition demonstration option or the part assembly demonstration option, controlling a target control in the structure display area to execute a training action of starting, pausing, stopping, fast forwarding or fast rewinding.
5. The method of claim 1, wherein the displaying a target training interface on a virtual interface of an AR image of an augmented reality device in response to a gesture selection operation by a user of a training option on the virtual interface comprises:
responding to a gesture selection operation of a user for the autonomous drilling option, and displaying a second target training interface corresponding to the autonomous drilling option, wherein a target operation area of the second target training interface comprises: a component decomposition execution option and a component assembly execution option;
the responding to the gesture selection operation of the user for the target operation area, and controlling the target control in the structure display area to execute the corresponding training action, including:
responding to the selection operation of the user for the target operation area of the second target training interface, and displaying the corresponding target control structural state;
and responding to the execution operation of the user aiming at the structural state of the target control, and controlling the structural state of the target control to be switched to the target state.
6. The method of claim 5, wherein the displaying a corresponding target control structure state in response to a user selection operation for a target operating region of the second target training interface comprises:
responding to the selection operation of the user for the part decomposition execution option, and displaying a first structural state of a target control corresponding to the part decomposition execution option;
the controlling the structural state of the target control to be switched to the target state in response to the execution operation of the user for the structural state of the target control comprises:
and responding to the execution operation of the user for the first structural state of the target control, and controlling the target control to be decomposed from the first structural state to obtain a second structural state.
7. The method of claim 5, wherein the displaying a corresponding target control structure state in response to a user selection operation for a target operating region of the second target training interface comprises:
responding to the selection operation of the user for the component assembly execution option, and displaying a second structural state of a target control corresponding to the component assembly execution option;
the controlling the structural state of the target control to be switched to the target state in response to the execution operation of the user on the structural state of the target control comprises:
and responding to the execution operation of the user for the second structural state of the target control, and controlling the target control to be assembled from the second structural state to obtain the first structural state.
8. An augmented reality training apparatus, wherein the apparatus is applied to an augmented reality device, the apparatus comprising: the device comprises a first response module and a second response module;
the first response module is configured to display a target training interface on a virtual interface of an AR image of an augmented reality device in response to a gesture selection operation of a user on a training option on the virtual interface, where the training option at least comprises: a structure recognition option, an autonomous drilling option, a teaching practical training option, and a model setting option, and the target training interface comprises: a structure display area and a target operation area, wherein the AR image comprises a real image of the scene where the user is currently located and the virtual interface;
and the second response module is used for responding to the gesture selection operation of the user for the target operation area and controlling the target control in the structure display area to execute the corresponding training action.
9. A computer device, comprising: a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer storage medium, wherein a computer program is stored on the storage medium, and when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
CN202110093991.2A 2021-01-22 2021-01-22 Augmented reality interface training method, device, equipment and storage medium Pending CN112767766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110093991.2A CN112767766A (en) 2021-01-22 2021-01-22 Augmented reality interface training method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110093991.2A CN112767766A (en) 2021-01-22 2021-01-22 Augmented reality interface training method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112767766A true CN112767766A (en) 2021-05-07

Family

ID=75706984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110093991.2A Pending CN112767766A (en) 2021-01-22 2021-01-22 Augmented reality interface training method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112767766A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202771664U (en) * 2012-09-12 2013-03-06 北京广达汽车维修设备有限公司 Full informationized theory and practice combined automobile practical training system
CN103180893A (en) * 2011-08-23 2013-06-26 索尼公司 Method and system for use in providing three dimensional user interface
CN105844992A (en) * 2016-05-23 2016-08-10 华北电力大学(保定) Inspection and repairing technology training and examination system and method for thermal generator set turbine
US20170061694A1 (en) * 2015-09-02 2017-03-02 Riccardo Giraldi Augmented reality control of computing device
CN106575043A (en) * 2014-09-26 2017-04-19 英特尔公司 Systems, apparatuses, and methods for gesture recognition and interaction
CN106935092A (en) * 2017-05-03 2017-07-07 武汉理工大学 A kind of virtual assembly system based on boat diesel engine
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality
CN107784885A (en) * 2017-10-26 2018-03-09 歌尔科技有限公司 Operation training method and AR equipment based on AR equipment
CN108008873A (en) * 2017-11-10 2018-05-08 亮风台(上海)信息科技有限公司 A kind of operation method of user interface of head-mounted display apparatus
CN109256001A (en) * 2018-10-19 2019-01-22 中铁第四勘察设计院集团有限公司 A kind of overhaul of train-set teaching training system and its Training Methodology based on VR technology
CN109407918A (en) * 2018-09-25 2019-03-01 苏州梦想人软件科技有限公司 The implementation method of augmented reality content multistage interactive mode
KR20190116601A (en) * 2018-04-03 2019-10-15 (주)다울디엔에스 Training system of a surgical operation using authoring tool based on augmented reality


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
房顺沐: "移动增强现实产品装配三维交互方法", 《广东工业大学硕士论文》, 31 May 2016 (2016-05-31), pages 49 *
龚雅琼: "基于增强现实技术的辅助维修系统设计与实现", 《东南大学硕士论文》, 30 June 2020 (2020-06-30), pages 62 *

Similar Documents

Publication Publication Date Title
US11227439B2 (en) Systems and methods for multi-user virtual reality remote training
US20200310842A1 (en) System for User Sentiment Tracking
US20100125790A1 (en) Method, system and program for interactive assembly of a product
CN106781809A (en) A kind of training method and system for helicopter emergency management and rescue task
US20100156655A1 (en) Equipment area alarm summary display system and method
CN111709362B (en) Method, device, equipment and storage medium for determining important learning content
CN109144244A (en) A kind of method, apparatus, system and the augmented reality equipment of augmented reality auxiliary
CN110124307A (en) Method of controlling operation thereof and device, storage medium and electronic device
CN104881307B (en) Download implementation method and device
CN109324515A (en) A kind of method and controlling terminal controlling intelligent electric appliance
CN110989842A (en) Training method and system based on virtual reality and electronic equipment
CN113986111A (en) Interaction method, interaction device, electronic equipment and storage medium
CN109684177A (en) Information feedback method and device
CN112767766A (en) Augmented reality interface training method, device, equipment and storage medium
US20210098012A1 (en) Voice Skill Recommendation Method, Apparatus, Device and Storage Medium
CN110310352B (en) Role action editing method and device, computing equipment and storage medium
CN115904183A (en) Interface display process, apparatus, device and storage medium
CN105630634B (en) Application system calamity is for switching method and apparatus
CN109469962A (en) A kind of air-conditioning defrosting method, device and storage medium
WO2020060569A1 (en) System and method for importing a software application into a virtual reality setting
CN110377150B (en) Method and device for operating entity component in virtual scene and computer equipment
CN109729413B (en) Method and terminal for sending bullet screen
CN106792955A (en) Switching analog network method, device and terminal device
CN106339211A (en) Method and device for monitoring display inconsistence of remote service of intelligent terminal
CN210119872U (en) VR uses supervise device based on operation function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507