CN107463252B - Virtual model operation method and device of MR head display, storage medium and MR head display


Info

Publication number
CN107463252B
CN107463252B
Authority
CN
China
Prior art keywords
virtual model
gesture
model
input gesture
target virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710576742.2A
Other languages
Chinese (zh)
Other versions
CN107463252A (en)
Inventor
Yu Qian (余谦)
Fang Jufa (方炬发)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen guotengan Vocational Education Technology Co.,Ltd.
Original Assignee
Shenzhen Gta Education Tech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gta Education Tech Ltd filed Critical Shenzhen Gta Education Tech Ltd
Priority to CN201710576742.2A
Publication of CN107463252A
Application granted
Publication of CN107463252B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of virtual operation, and provides a virtual model operation method and device for an MR head display, a storage medium, and an MR head display. The virtual model operation method of the MR head display comprises the following steps: detecting a currently input gesture; and if the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture, wherein the control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner. In this way, even if the target virtual model is in a disassembled state, the user can input a gesture to select the control model corresponding to the target virtual model, whereupon the whole target virtual model is determined as the operation object of the currently input gesture. The whole target virtual model can thus be operated and controlled, which greatly improves the convenience of user operation.

Description

Virtual model operation method and device of MR head display, storage medium and MR head display
Technical Field
The invention relates to the technical field of virtual operation, and in particular to a virtual model operation method and device for an MR head display, a storage medium, and an MR head display.
Background
An MR head display is a head-mounted mixed reality display device. A user wearing an MR head display can view scenes in which virtual models are superimposed on real images, and can also interact with those virtual models. However, current MR head displays support only a very limited set of operation commands for virtual models and cannot realize functionally complex operations. For example, for a disassemblable virtual model, once the model has been disassembled, each detached component can only be operated individually; the virtual model as a whole (i.e., all detached components) cannot be operated at the same time, which is inconvenient for the user.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a virtual model operation method and device for an MR head display, a storage medium, and an MR head display, which make it possible to operate the whole virtual model even when the virtual model has been disassembled.
The first aspect of the embodiments of the present invention provides a virtual model operation method of an MR head display, including:
detecting a currently input gesture;
if the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture, wherein the control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner.
A second aspect of the embodiments of the present invention provides a virtual model operating apparatus for an MR head display, including:
the gesture detection module is used for detecting a currently input gesture;
the model selection module is used for determining the whole target virtual model as the operation object of the currently input gesture if the detected currently input gesture selects a preset control model, wherein the control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner.
A third aspect of the embodiments of the present invention provides an MR head display, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the virtual model operation method provided in the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the virtual model operation method provided in the first aspect of the embodiments of the present invention.
In the embodiments of the invention, the currently input gesture is detected; if the detected currently input gesture selects a preset control model, the whole target virtual model is determined as the operation object of the currently input gesture, the control model being a virtual model which corresponds to the target virtual model and is placed close to it in a preset manner. In this process, even if the target virtual model is in a disassembled state, the user can input a gesture to select the control model corresponding to the target virtual model, whereupon the whole target virtual model is determined as the operation object of the currently input gesture. The whole target virtual model can thus be operated and controlled, which greatly improves the convenience of user operation.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a flowchart of a first embodiment of a virtual model operation method of an MR head display according to the present invention;
FIG. 2 is a flowchart of a second embodiment of a virtual model operation method of an MR head display according to the present invention;
FIG. 3 is a flowchart of a third embodiment of a virtual model operation method of an MR head display according to the present invention;
FIG. 4 is a block diagram of an embodiment of a virtual model operating apparatus of an MR head display according to the present invention;
FIG. 5 is a schematic diagram of an MR head display according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The embodiments of the present invention provide a virtual model operation method and device for an MR head display, a storage medium, and an MR head display, which make it possible to operate the whole virtual model when the virtual model has been disassembled.
In the various embodiments of the present invention, the MR head display may be any type of mixed reality head-mounted display, such as Microsoft HoloLens, a representative example.
Referring to FIG. 1, a first embodiment of a virtual model operation method of an MR head display according to the present invention includes:
101. detecting a currently input gesture;
First, the gesture currently input by the user is detected. Various gestures input by the user can be detected by the gesture detection device of the MR head display, and certain designated gestures are defined as operation gestures with corresponding functions. For example, in HoloLens the Bloom gesture opens the menu interface and the Air tap gesture selects an actionable object. It should be noted that the gestures in the embodiments of the present invention may be any customized gestures and are not limited to the Bloom and Air tap gestures above.
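For illustration only, the following Python sketch shows one possible way such detection and dispatch could be organized. The hand-frame fields, gesture names, and handler registry are assumptions made for this sketch, not the patent's or HoloLens's actual interfaces.

```python
# A minimal sketch, assuming a hand tracker that yields per-frame hand state.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class HandFrame:
    is_tap: bool              # transient pinch/tap event (Air-tap-like)
    palm_up: bool             # palm facing roughly toward world-up
    fingers_extended: int     # number of extended fingers, 0..5

def classify_gesture(frame: HandFrame) -> Optional[str]:
    """Map one tracked-hand frame to a named operation gesture."""
    if frame.is_tap:
        return "select"                 # generic selection gesture
    if frame.palm_up and frame.fingers_extended == 5:
        return "palm_up_lift"           # custom gesture (see second embodiment)
    if not frame.palm_up and frame.fingers_extended == 5:
        return "five_finger_grab"       # custom gesture (see third embodiment)
    return None

def detection_loop(next_frame: Callable[[], HandFrame],
                   handlers: Dict[str, Callable[[], None]]) -> None:
    """Continuously detect the currently input gesture and dispatch it."""
    while True:
        name = classify_gesture(next_frame())
        if name in handlers:
            handlers[name]()
```

A dispatch table like `handlers` keeps custom gestures open-ended, matching the statement above that the gestures may be any customized gestures.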
102. If the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture.
After the currently input gesture is detected, it is judged whether the gesture selects the preset control model. If the detected currently input gesture selects the preset control model, the whole target virtual model is determined as the operation object of the currently input gesture. If the detected currently input gesture does not select the control model, it is processed according to the operation rule corresponding to that gesture.
The target virtual model is the virtual model which the user wishes to operate in the virtual environment provided by the MR head display, and may be any type of disassemblable or non-disassemblable virtual model, such as an animal model, a plant model, or an automobile model. The control model is also essentially a virtual model; it corresponds to the target virtual model and is placed close to the target virtual model in a preset manner. Because the control model is placed close to the target virtual model in the MR virtual environment, the user can easily distinguish it from the background and intuitively perceive the correspondence between the control model and the target virtual model. Specifically, the control model may be a virtual model of any shape and color with a certain size and transparency, such as a scaled-down copy of the target virtual model, a semi-transparent spherical model, or a tray-shaped model. The control model may be placed in any orientation near the target virtual model, such as above or below it.
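The sketch below illustrates one way the "placed close to" relation could be computed, assuming simple stand-in types rather than a real engine API: the control model is centered just below the union of the component bounding boxes, so the correspondence stays visible even after disassembly scatters the parts.

```python
# Illustrative sketch with assumed types; not the patent's implementation.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Component:
    min_corner: Vec3  # world-space axis-aligned bounding box of one part
    max_corner: Vec3

def control_model_position(components: List[Component], gap: float = 0.05) -> Vec3:
    """Center a tray-shaped control model just below the union of the
    bounding boxes of all components, detached or not."""
    min_x = min(c.min_corner[0] for c in components)
    max_x = max(c.max_corner[0] for c in components)
    min_y = min(c.min_corner[1] for c in components)
    min_z = min(c.min_corner[2] for c in components)
    max_z = max(c.max_corner[2] for c in components)
    return ((min_x + max_x) / 2, min_y - gap, (min_z + max_z) / 2)
```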
The control model is itself a selectable virtual model. If the detected currently input gesture selects the control model, the whole target virtual model is determined as the operation object of the currently input gesture. In other words, the control model and the target virtual model establish a correspondence, and selecting the control model is equivalent to selecting the whole target virtual model as the operation object. The control model can be selected through a designated gesture, such as the Air tap gesture commonly used in HoloLens or any other customized gesture. Therefore, even if the target virtual model is in a disassembled state, the user can still select the control model to operate the target virtual model as a whole, which greatly improves convenience; a sketch of this correspondence follows.
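As a hedged illustration of that correspondence (the classes and component names below are invented for the example), each control model holds a reference to its target virtual model, so one selection yields every component at once:

```python
# Sketch under stated assumptions: model/part names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetVirtualModel:
    name: str
    components: List[str] = field(default_factory=list)  # detachable parts

@dataclass
class ControlModel:
    target: TargetVirtualModel  # correspondence established in advance

def on_control_model_selected(control: ControlModel) -> List[str]:
    """Selecting the control model yields every component of the target
    virtual model as one operation object, even when parts are detached."""
    return list(control.target.components)

car = TargetVirtualModel("car", ["chassis", "engine", "wheel_fl", "wheel_fr"])
tray = ControlModel(target=car)
assert on_control_model_selected(tray) == car.components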
In a typical MR application scenario, the user clicks function keys arranged in the edge area of the MR display interface to perform various functional operations on the target virtual model. With this approach, the user needs to spend considerable time finding the designated function key, which is inefficient. In the embodiments of the present invention, the control model replaces such function keys, which is more intuitive and gives a better visual effect. Because the control model is placed close to the target virtual model, the user perceives the correspondence between them at first sight and knows exactly where the control model is, which effectively improves operation efficiency. In addition, in an application scenario where the whole of a designated target virtual model is to be selected from multiple target virtual models, each target virtual model is provided with a corresponding control model, and the user only needs to select the control model of the designated target virtual model directly, which is very convenient.
In this embodiment of the invention, the currently input gesture is detected; if the detected currently input gesture selects a preset control model, the whole target virtual model is determined as the operation object of the currently input gesture, the control model being a virtual model which corresponds to the target virtual model and is placed close to it in a preset manner. In this process, even if the target virtual model is in a disassembled state, the user can input a gesture to select the control model corresponding to the target virtual model, whereupon the whole target virtual model is determined as the operation object of the currently input gesture. The whole target virtual model can thus be operated and controlled, which greatly improves the convenience of user operation.
Referring to FIG. 2, a second embodiment of a virtual model operation method of an MR head display according to the present invention includes:
201. detecting a currently input gesture;
Step 201 is the same as step 101; refer to the description of step 101 for details.
202. If the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture;
The control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner. Specifically, in this embodiment, the control model is tray-shaped and is placed close below the target virtual model. The tray-shaped control model is perceived like a plate bearing an object (the target virtual model), which gives the user an intuitive and vivid visual effect.
The control model could be selected with the generic selection gesture, but in some cases this is inefficient. For example, for a target virtual model with a large number of detached components, since the control model is placed close to the target virtual model, a user selecting the control model with the generic selection gesture may easily select a component of the target virtual model by mistake, which greatly harms operation efficiency and user experience.
To improve efficiency when the user selects the control model, a non-generic selection gesture may be defined for it. Considering that the control model in this embodiment is tray-shaped, a lifting gesture with the palm facing up is preferably defined as the gesture for selecting the control model. This resembles the everyday action of lifting a tray and therefore effectively improves the user experience.
Further, the step of judging whether the detected currently input gesture selects the preset control model includes:
(1) judging whether the currently input gesture is a lifting gesture with the palm facing up;
(2) if the currently input gesture is a lifting gesture with the palm facing up, determining that the currently input gesture selects the preset control model.
If the currently input gesture is a lifting gesture with the palm facing up, it is determined that the currently input gesture selects the control model. Because the gesture for selecting the control model is distinct from the generic selection gesture, a user who wants to select the control model can do so directly with the palm-up lifting gesture; since no other operable component can be selected by the lifting gesture, mis-selection cannot occur, which effectively improves operation efficiency and user experience. Furthermore, even if multiple target virtual models and their corresponding control models exist in the MR virtual environment, using a non-generic selection gesture for the control models filters out most operable objects, and the user only needs to pick the desired control model or models among them, which improves operation efficiency to a great extent. One way such a palm-up lifting gesture might be recognized is sketched below.
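This sketch is an assumption-laden illustration, not the patent's recognizer: it treats the lift as recognized when the palm normal (assumed unit-length) stays close to world-up for a short dwell.

```python
# Hedged sketch: threshold and dwell length are arbitrary illustrative values.
from typing import Sequence, Tuple

Vec3 = Tuple[float, float, float]

def is_palm_up_lift(palm_normals: Sequence[Vec3],
                    up: Vec3 = (0.0, 1.0, 0.0),
                    cos_threshold: float = 0.85,   # ~32 degrees of world-up
                    min_frames: int = 10) -> bool:
    """True if the palm normal stayed near world-up for `min_frames`
    consecutive tracked frames (a steady palm-up lifting pose)."""
    def cos_to_up(n: Vec3) -> float:
        return n[0] * up[0] + n[1] * up[1] + n[2] * up[2]
    run = 0
    for n in palm_normals:
        run = run + 1 if cos_to_up(n) >= cos_threshold else 0
        if run >= min_frames:
            return True
    return False
```

Requiring a consecutive run of frames is one simple way to keep a momentary palm flip from being misread as the selection gesture.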
203. Determining a target function key, wherein the target function key is a selected function key among more than one function keys respectively corresponding to different operation actions;
In this embodiment, the control model is provided with more than one function key, each corresponding to a different operation action. The operation actions include operations commonly applied to an operation object in MR, such as moving, zooming, and rotating. The distribution of the function keys on the control model can be reasonably determined according to its specific shape; for the tray shape of this embodiment, for example, the function keys can be evenly arranged on the tray body.
After the whole target virtual model is determined as the operation object of the currently input gesture, a target function key is determined. The target function key is the selected one of the more than one function keys respectively corresponding to different operation actions; the user can select one of the function keys as the target function key by clicking it with a designated gesture.
204. After an input operation gesture is acquired, causing the operation object of the currently input gesture to execute the operation action corresponding to the target function key.
After the target function key is determined and an input operation gesture is acquired, the operation object of the currently input gesture executes the operation action corresponding to the target function key. An operation gesture is a preset gesture for completing a corresponding operation action, such as a dragging or turning gesture. In this embodiment, the operation object of the currently input gesture is the whole target virtual model. If the user clicks the function key for moving, i.e., the moving function key is determined as the target function key, and then makes a dragging gesture in a designated direction, the whole target virtual model moves in that direction; rotation and zooming work analogously and are not repeated here.
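For illustration, a minimal sketch of this function-key dispatch follows. The key names ("move", "zoom") and the drag-to-transform mapping are assumptions; the point is that one action is applied uniformly to every component of the operation object.

```python
# Sketch only: not the patent's key layout or gesture mapping.
from dataclasses import dataclass
from typing import List

@dataclass
class Part:
    position: List[float]    # x, y, z in world space
    scale: float = 1.0

def apply_operation(parts: List[Part], target_key: str, drag: List[float]) -> None:
    """Apply the action bound to the target function key, driven by a drag."""
    if target_key == "move":
        for p in parts:
            p.position = [a + b for a, b in zip(p.position, drag)]
    elif target_key == "zoom":
        factor = 1.0 + drag[1]          # assume vertical drag controls scale
        for p in parts:
            p.scale *= factor
    # "rotate" would similarly map the drag to an angle about a chosen pivot.

whole_model = [Part([0.0, 0.0, 0.0]), Part([0.3, 0.0, 0.0])]  # detached parts
apply_operation(whole_model, "move", [0.1, 0.0, 0.0])          # both parts move
```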
In this embodiment of the invention, the currently input gesture is detected; if the detected currently input gesture selects a preset control model, the whole target virtual model is determined as the operation object of the currently input gesture, the control model being a tray-shaped virtual model which corresponds to the target virtual model and is placed close to it in a preset manner; a target function key is determined, the target function key being the selected one of more than one function keys respectively corresponding to different operation actions; and after an input operation gesture is acquired, the operation object of the currently input gesture executes the operation action corresponding to the target function key. In this process, even if the target virtual model is in a disassembled state, the user can input a gesture to select the control model corresponding to the target virtual model, whereupon the whole target virtual model is determined as the operation object of the currently input gesture, so that the whole target virtual model can be moved, rotated, zoomed, and so on, which greatly improves the convenience of user operation.
Referring to FIG. 3, a third embodiment of a virtual model operation method of an MR head display according to the present invention includes:
301. detecting a currently input gesture;
Step 301 is the same as step 101; refer to the description of step 101 for details.
302. If the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture;
The control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner. Specifically, in this embodiment, the control model is cage-shaped and surrounds the target virtual model. The cage-shaped control model is perceived like a birdcage holding an object (the target virtual model) inside, which gives the user an intuitive and vivid visual effect.
As described in the second embodiment of the virtual model operation method provided by the present invention, the control model could be selected with the generic selection gesture, but in some cases this is inefficient. For example, for a target virtual model with a large number of detached components, since the control model is placed close to the target virtual model, a user selecting the control model with the generic selection gesture may easily select a component of the target virtual model by mistake, which greatly harms operation efficiency and user experience.
To improve efficiency when the user selects the control model, a non-generic selection gesture may be defined for it. Considering that the control model in this embodiment is cage-shaped, a grabbing gesture with five fingers spread is preferably defined as the gesture for selecting the control model. This resembles the everyday action of gripping a cage and therefore effectively improves the user experience.
Further, the step of judging whether the detected currently input gesture selects the preset control model includes:
(1) judging whether the currently input gesture is a grabbing gesture with five fingers spread;
(2) if the currently input gesture is a grabbing gesture with five fingers spread, determining that the currently input gesture selects the preset control model.
If the currently input gesture is a grabbing gesture with five fingers spread, it is determined that the currently input gesture selects the control model. Because the gesture for selecting the control model is distinct from the generic selection gesture, a user who wants to select the control model can do so directly with the five-finger grabbing gesture; since no other operable component can be selected by the grabbing gesture, mis-selection cannot occur, which effectively improves operation efficiency and user experience. Furthermore, even if multiple target virtual models and their corresponding control models exist in the MR virtual environment, using a non-generic selection gesture for the control models filters out most operable objects, and the user only needs to pick the desired control model or models among them, which improves operation efficiency to a great extent. It should be noted that, for the cage-shaped control model, other selection gestures such as a single-finger hook may also be adopted, without limitation here. A sketch of one possible grab check follows.
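This is a hedged illustration with assumed inputs (per-finger extension flags and a spherical bound for the cage), not the patent's recognizer: the grab is accepted only when all five fingers are spread and the hand overlaps the cage-shaped control model.

```python
# Sketch under stated assumptions; the cage bound is simplified to a sphere.
from typing import Tuple

Vec3 = Tuple[float, float, float]

def is_open_five_finger_grab(fingers_extended: Tuple[bool, bool, bool, bool, bool],
                             hand_pos: Vec3,
                             cage_center: Vec3,
                             cage_radius: float) -> bool:
    """True when all five fingers are spread and the hand is within the
    cage-shaped control model's bounds."""
    if not all(fingers_extended):
        return False
    dist_sq = sum((h - c) ** 2 for h, c in zip(hand_pos, cage_center))
    return dist_sq <= cage_radius ** 2
```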
303. Determining the current operation object and/or the current operation action corresponding to the currently input gesture according to the detected currently input gesture;
Steps 303 to 305 are not limited to being performed after step 302 and may be performed at any time. The current operation object and/or current operation action corresponding to the currently input gesture are determined from the detected gesture. The gesture detected in step 303 may be an operation gesture that selects any operation object or any operation action, and the current operation object and/or operation action can be determined accordingly, such as the whole target virtual model, a detached component of the target virtual model, a background key in the MR environment, a moving operation, a rotating operation, and so on.
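As a purely illustrative sketch of step 303 (the gesture-to-action table and the idea of a ray hit-test are assumptions, not the patent's mechanism), the resolution can be pictured as pairing whatever the gesture currently points at with whatever action the gesture type denotes:

```python
# Assumed mapping for illustration; either element may be absent.
from typing import Dict, Optional, Tuple

GESTURE_ACTIONS: Dict[str, str] = {
    "drag": "move",
    "turn": "rotate",
    "pinch_spread": "zoom",
}

def resolve(gesture: str,
            hit_object: Optional[str]) -> Tuple[Optional[str], Optional[str]]:
    """Return (current operation object, current operation action).

    `hit_object` stands for whatever the gesture currently intersects: the
    whole target model (via its control model), a detached component, or a
    background key in the MR environment."""
    return hit_object, GESTURE_ACTIONS.get(gesture)
```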
304. Determining prompt information according to the current operation object and/or the current operation action corresponding to the currently input gesture;
After the current operation object and/or current operation action corresponding to the currently input gesture are determined, the corresponding prompt information is determined. The prompt information mainly relates to the current operation object and/or the current operation action and can be presented in various forms such as text, pictures, or animation. If the current operation object is a virtual dog model, the prompt may be a caption about the dog; if the current operation action is moving, the prompt may simply read "Move"; if the current operation is rotating the virtual dog model, the prompt may combine a description of the dog with the word "Rotate"; and so on.
305. Displaying the prompt information in a designated area of the display interface of the MR head display.
After the prompt information is determined, it is displayed in a designated area of the display interface of the MR head display. The designated area may be any reasonable area of the display interface, such as an area to the right of the current operation object that is not occluded by interfering objects (such as other virtual models). Clearly, while operating, the user can on the one hand follow the prompt to complete the intended operation more easily, and on the other hand better understand the description of the current operation object. A minimal sketch of composing and placing such a prompt follows.
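The sketch below is an assumption-laden illustration of steps 304-305: the context fields, offsets, and occlusion callback are invented for the example. It composes the prompt from the object's caption and the action label, anchors it to the right of the object, and falls back to above the object if that area is occluded.

```python
# Hedged sketch; not the patent's layout logic.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class OperationContext:
    object_caption: Optional[str]   # descriptive text, e.g. about a dog model
    action_label: Optional[str]     # e.g. "Move", "Rotate"
    object_pos: Vec3

def build_prompt(ctx: OperationContext) -> str:
    """Combine whichever of caption and action label are present."""
    parts = [p for p in (ctx.object_caption, ctx.action_label) if p]
    return " | ".join(parts)

def prompt_anchor(ctx: OperationContext,
                  occluded: Callable[[Vec3], bool],
                  offset: float = 0.3) -> Vec3:
    """Prefer the designated area right of the object; fall back to above."""
    x, y, z = ctx.object_pos
    right = (x + offset, y, z)
    return right if not occluded(right) else (x, y + offset, z)
```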
In this embodiment of the invention, the currently input gesture is detected; if the detected currently input gesture selects a preset control model, the whole target virtual model is determined as the operation object of the currently input gesture, the control model being a cage-shaped virtual model which corresponds to the target virtual model and is placed close to it in a preset manner; the current operation object and/or current operation action corresponding to the currently input gesture are determined from the detected gesture; prompt information is determined according to the current operation object and/or current operation action; and the prompt information is displayed in a designated area of the display interface of the MR head display. In this process, even if the target virtual model is in a disassembled state, the user can input a gesture to select the control model corresponding to the target virtual model, whereupon the whole target virtual model is determined as the operation object of the currently input gesture, so that the whole target virtual model can be operated and controlled, which greatly improves the convenience of user operation. Moreover, displaying the prompt information in a designated area of the display interface effectively guides the user's operation and provides a description of the current operation object, further improving the user experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The above mainly describes the virtual model operation method of an MR head display; a virtual model operating apparatus of an MR head display is described in detail below.
Referring to FIG. 4, an embodiment of a virtual model operating apparatus of an MR head display according to the present invention includes:
a gesture detection module 401, configured to detect a currently input gesture;
a model selection module 402, configured to determine the whole target virtual model as the operation object of the currently input gesture if the detected currently input gesture selects a preset control model, where the control model is a virtual model that corresponds to the target virtual model and is placed close to the target virtual model in a preset manner.
Further, the control model is provided with more than one function key respectively corresponding to different operation actions, and the virtual model operating apparatus may further include:
a function key determining module 403, configured to determine a target function key, where the target function key is a selected function key of the more than one function keys respectively corresponding to different operation actions;
and an action executing module 404, configured to, after the input operation gesture is obtained, enable the current operation object to execute an operation action corresponding to the target function key.
Further, the virtual model operating device may further include:
an operation object determining module 405, configured to determine, according to the detected currently input gesture, a current operation object and/or a current operation action corresponding to the currently input gesture;
a prompt information determining module 406, configured to determine prompt information according to a current operation object and/or a current operation action corresponding to the currently input gesture;
a display module 407, configured to display the prompt information in a specified area in a display interface of the MR head display.
An embodiment of the present invention further provides an MR head display, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of any one of the virtual model operation methods shown in FIGS. 1 to 3.
An embodiment of the present invention further provides a computer-readable storage medium which stores a computer program; when executed by a processor, the computer program implements the steps of any one of the virtual model operation methods shown in FIGS. 1 to 3.
FIG. 5 is a schematic diagram of an MR head display according to an embodiment of the present invention. As shown in FIG. 5, the MR head display 5 of this embodiment includes: a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the embodiments of the virtual model operation methods described above, such as steps 101 to 102 shown in FIG. 1. Alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of modules 401 to 402 shown in FIG. 4.
The computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 52 in the MR head display 5.
The MR head display 5 may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that FIG. 5 is merely an example of the MR head display 5 and does not constitute a limitation thereof; it may include more or fewer components than shown, combine certain components, or use different components. For example, the MR head display 5 may also include various input and output devices.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 51 may be an internal storage unit of the MR head display 5, such as a hard disk or memory of the MR head display 5. The memory 51 may also be an external storage device of the MR head display 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the MR head display 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the MR head display 5. The memory 51 is used to store the computer program and other programs and data required by the MR head display, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments of the present invention may also be implemented by a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A method of operating a virtual model of an MR head display, comprising:
detecting a currently input gesture;
if the detected currently input gesture selects a preset control model, determining the whole target virtual model as the operation object of the currently input gesture, wherein the control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner;
wherein the target virtual model is a detachable or decomposable virtual model, the control model is provided with more than one function key respectively corresponding to different operation actions, and the method further comprises, after determining the whole target virtual model as the operation object of the currently input gesture:
determining a target function key, wherein the target function key is a selected function key among the more than one function keys respectively corresponding to different operation actions;
and after an input operation gesture is acquired, causing the operation object of the currently input gesture to execute the operation action corresponding to the target function key.
2. The virtual model operation method of an MR head display according to claim 1, wherein the control model is tray-shaped and is placed close below the target virtual model;
the step of judging whether the detected currently input gesture selects the preset control model comprises:
judging whether the currently input gesture is a lifting gesture with the palm facing up;
and if the currently input gesture is a lifting gesture with the palm facing up, determining that the currently input gesture selects the preset control model.
3. The virtual model operation method of an MR head display according to claim 1, wherein the control model is cage-shaped and surrounds the target virtual model;
the step of judging whether the detected currently input gesture selects the preset control model comprises:
judging whether the currently input gesture is a grabbing gesture with five fingers spread;
and if the currently input gesture is a grabbing gesture with five fingers spread, determining that the currently input gesture selects the preset control model.
4. The virtual model operation method of an MR head display according to any one of claims 1 to 3, further comprising:
determining a current operation object and/or a current operation action corresponding to the currently input gesture according to the detected currently input gesture;
determining prompt information according to the current operation object and/or the current operation action corresponding to the currently input gesture;
and displaying the prompt information in a designated area of a display interface of the MR head display.
5. A virtual model operating apparatus of an MR head display, comprising:
the gesture detection module is used for detecting a currently input gesture;
the model selection module is used for determining the whole target virtual model as the operation object of the currently input gesture if the detected currently input gesture selects a preset control model, wherein the control model is a virtual model which corresponds to the target virtual model and is placed close to the target virtual model in a preset manner;
wherein the target virtual model is a detachable or decomposable virtual model, the control model is provided with more than one function key respectively corresponding to different operation actions, and the virtual model operating apparatus further comprises:
the function key determining module is used for determining a target function key, wherein the target function key is a selected function key among the more than one function keys respectively corresponding to different operation actions;
and the action execution module is used for, after an input operation gesture is acquired, causing the current operation object to execute the operation action corresponding to the target function key.
6. The virtual model operating apparatus of an MR head display according to claim 5, further comprising:
the operation object determining module is used for determining a current operation object and/or a current operation action corresponding to the current input gesture according to the detected current input gesture;
the prompt information determining module is used for determining prompt information according to the current operation object and/or the current operation action corresponding to the currently input gesture;
and the display module is used for displaying the prompt information in a designated area in a display interface of the MR head display.
7. An MR head display comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, carries out the steps of the virtual model operation method according to any one of claims 1 to 4.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the virtual model operation method according to any one of claims 1 to 4.
CN201710576742.2A 2017-07-14 2017-07-14 Virtual model operation method and device of MR head display, storage medium and MR head display Active CN107463252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710576742.2A CN107463252B (en) 2017-07-14 2017-07-14 Virtual model operation method and device of MR head display, storage medium and MR head display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710576742.2A CN107463252B (en) 2017-07-14 2017-07-14 Virtual model operation method and device of MR head display, storage medium and MR head display

Publications (2)

Publication Number Publication Date
CN107463252A CN107463252A (en) 2017-12-12
CN107463252B (en) 2020-08-21

Family

ID=60546685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710576742.2A Active CN107463252B (en) 2017-07-14 2017-07-14 Virtual model operation method and device of MR head display, storage medium and MR head display

Country Status (1)

Country Link
CN (1) CN107463252B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109696961A (en) * 2018-12-29 2019-04-30 广州欧科信息技术股份有限公司 Historical relic machine & equipment based on VR technology leads reward and realizes system and method, medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1703703A (en) * 2002-11-14 2005-11-30 阿尔斯通铁路公开有限公司 Device and method for checking railway logical software engines for commanding plants, particularly station plants
CN105844705A (en) * 2016-03-29 2016-08-10 联想(北京)有限公司 Three-dimensional virtual object model generation method and electronic device
CN105912232A (en) * 2016-03-31 2016-08-31 联想(北京)有限公司 Information processing method and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1703703A (en) * 2002-11-14 2005-11-30 阿尔斯通铁路公开有限公司 Device and method for checking railway logical software engines for commanding plants, particularly station plants
CN105844705A (en) * 2016-03-29 2016-08-10 联想(北京)有限公司 Three-dimensional virtual object model generation method and electronic device
CN105912232A (en) * 2016-03-31 2016-08-31 联想(北京)有限公司 Information processing method and electronic equipment

Also Published As

Publication number Publication date
CN107463252A (en) 2017-12-12

Similar Documents

Publication Publication Date Title
US11782511B2 (en) Tactile glove for human-computer interaction
CN107111423B (en) Selecting actionable items in a graphical user interface of a mobile computer system
US10366602B2 (en) Interactive multi-touch remote control
US10191612B2 (en) Three-dimensional virtualization
JP6202810B2 (en) Gesture recognition apparatus and method, and program
CN109791468A (en) User interface for both hands control
CN107615310A (en) Message processing device
WO2012177322A1 (en) Gesture-controlled technique to expand interaction radius in computer vision applications
CN106464749B (en) Interactive method of user interface
EP2180400A2 (en) Image processing apparatus, image processing method, and program
AU2014240935B2 (en) Active feedback interface for touch screen display
EP3484670A1 (en) Touch screen testing platform for engaging a dynamically positioned target feature
TWI668600B (en) Method, device, and non-transitory computer readable storage medium for virtual reality or augmented reality
EP3046010A1 (en) System and method for guarding emergency and critical touch targets
CN107463252B (en) Virtual model operation method and device of MR head display, storage medium and MR head display
CN105278751A (en) Method and apparatus for implementing human-computer interaction, and protective case
JP2015049773A (en) Object operation system, object operation control program and object operation control method
CN114089884A (en) Desktop editing method and electronic equipment
US10073612B1 (en) Fixed cursor input interface for a computer aided design application executing on a touch screen device
CN109324748B (en) Equipment control method, electronic equipment and storage medium
CN112818825B (en) Working state determining method and device
CN108845740B (en) The implementation method of E-book reader operation mode, electronic equipment
CN105117133A (en) Touch screen multi-choice operation control method and system
JP6677019B2 (en) Information processing apparatus, information processing program, and information processing method
JP2016110329A (en) Display input device and display method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 12th Floor, Building A4, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Guotaian Educational Technology Co., Ltd.

Address before: 518000 Checkpoint, Nantou, Shenzhen, Guangdong Province, 30 Building 3, Zhiheng Industrial Park, Gate 2, Nanshan District, Shenzhen

Applicant before: GTA INFORMATION TECHNOLOGY CO., LTD. (GTA)

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211216

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee after: Shenzhen guotengan Vocational Education Technology Co.,Ltd.

Address before: 518000 12th Floor, Building A4, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN GTA EDUCATION TECH Ltd.

TR01 Transfer of patent right