Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
Embodiments of the present invention provide a virtual model operation method and apparatus for an MR head-mounted display (hereinafter, MR headset), a storage medium, and an MR headset, which allow a virtual model to be operated as a whole even when the virtual model is in a disassembled state.
In various embodiments of the present invention, the MR headset may be any type of mixed reality head-mounted display, a representative example being Microsoft HoloLens.
Referring to FIG. 1, a first embodiment of the virtual model operation method for an MR headset according to the present invention includes:
101. Detecting the currently input gesture;
First, the gesture currently input by the user is detected. The gesture detection device of the MR headset can detect the various gestures a user inputs, and certain designated gestures are defined as operation gestures with corresponding functions. For example, among the HoloLens gestures, the Bloom gesture is used to open the menu interface and the Air tap gesture is used to select an actionable object. It should be noted that the gestures in embodiments of the present invention may be any customized gestures and are not limited to the Bloom and Air tap gestures above.
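As a minimal sketch of how step 101 might look in code, the following Python fragment polls a gesture detector and maps recognized gestures to their bound functions. The detector interface, the Gesture type, and the gesture labels are illustrative assumptions, not an actual HoloLens API.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Gesture:
    name: str                             # e.g. "bloom", "air_tap", "palm_up_lift"
    position: Tuple[float, float, float]  # where in the scene the gesture occurred

# Designated gestures defined as operation gestures with corresponding functions.
OPERATION_GESTURES = {
    "bloom": "open_menu",        # e.g. the HoloLens Bloom gesture opens the menu
    "air_tap": "select_object",  # e.g. Air tap selects an actionable object
}

def detect_current_gesture(detector) -> Optional[Gesture]:
    """Return the currently input gesture, or None if nothing was detected."""
    raw = detector.poll()            # hypothetical gesture-detection interface
    if raw is None:
        return None
    return Gesture(name=raw["name"], position=tuple(raw["position"]))

def bound_function(gesture: Gesture) -> Optional[str]:
    """Map a detected gesture to its bound function, if it is an operation gesture."""
    return OPERATION_GESTURES.get(gesture.name)
```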
102. If the detected currently input gesture selects a preset manipulation model, determining the whole of the target virtual model as the operation object of the currently input gesture.
After the currently input gesture is detected, it is determined whether the gesture selects the preset manipulation model. If the detected currently input gesture selects the preset manipulation model, the whole of the target virtual model is determined as the operation object of the currently input gesture. If the detected gesture does not select the manipulation model, it is processed according to the operation rule corresponding to that gesture.
The target virtual model is the virtual model that the user wishes to operate in the virtual environment provided by the MR headset, and it may be any type of detachable or non-detachable virtual model, such as an animal model, a plant model, or an automobile model. The manipulation model is itself also a virtual model; it corresponds to the target virtual model and is placed close to the target virtual model in a preset manner. Because the manipulation model sits close to the target virtual model in the MR virtual environment, the user can easily distinguish it from the background and intuitively perceive the correspondence between the two. Specifically, the manipulation model may be a virtual model of any shape and color with a certain size and transparency, for example a scaled-down copy of the target virtual model, a semi-transparent spherical model, or a tray-shaped model. It may be placed in any direction near the target virtual model, for example above or below it.
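One way the manipulation model could be generated in a preset manner is sketched below: a scaled, semi-transparent copy of the target placed at a fixed offset below it. The VirtualModel type and its fields are assumptions introduced for illustration, not a real engine API.

```python
from dataclasses import dataclass

@dataclass
class VirtualModel:
    name: str
    position: tuple    # (x, y, z) coordinates in the MR scene, in meters
    scale: float = 1.0
    opacity: float = 1.0

def make_manipulation_model(target: VirtualModel,
                            offset=(0.0, -0.3, 0.0),   # preset placement: 0.3 m below
                            scale_factor=0.2,
                            opacity=0.5) -> VirtualModel:
    """Create a manipulation model placed close to the target in a preset manner."""
    x, y, z = target.position
    dx, dy, dz = offset
    return VirtualModel(
        name=f"{target.name}_manipulator",
        position=(x + dx, y + dy, z + dz),
        scale=target.scale * scale_factor,   # a scaled-down copy of the target
        opacity=opacity,                     # semi-transparent, easy to tell apart
    )
```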
The manipulation model is a selectable virtual model. If the detected currently input gesture selects the manipulation model, the whole of the target virtual model is determined as the operation object of that gesture. In other words, the manipulation model and the target virtual model are bound by a correspondence, and selecting the manipulation model is equivalent to selecting the whole target virtual model as the operation object. The manipulation model can be selected with a designated gesture, such as the Air tap gesture commonly used in HoloLens or any other customized gesture. Therefore, even when the target virtual model is in a disassembled state, the user can still select the manipulation model to operate the target virtual model as a whole, which greatly improves the convenience of operation.
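A minimal sketch of this selection rule follows: a registry binds each manipulation model to its target, so a gesture that hits the manipulation model resolves to the whole target model, while a hit on anything else (for example a detached component) resolves to that object itself. The identifiers and the registry are hypothetical.

```python
# Registry binding each manipulation model to its target virtual model.
MANIPULATOR_TO_TARGET = {}   # manipulation-model id -> target-model id

def register(manipulator_id: str, target_id: str) -> None:
    """Establish the correspondence between a manipulation model and its target."""
    MANIPULATOR_TO_TARGET[manipulator_id] = target_id

def resolve_operation_object(selected_id: str) -> str:
    """Selecting the manipulation model is equivalent to selecting the whole target;
    any other hit (e.g. a single detached component) is returned unchanged."""
    return MANIPULATOR_TO_TARGET.get(selected_id, selected_id)
```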
In a typical MR application scenario, the user performs function operations on a target virtual model mainly by clicking function keys arranged in the edge area of the MR display interface. With this approach, the user needs a long time to find the designated function key, so operation is inefficient. In embodiments of the present invention, the manipulation model replaces the function keys of the typical MR application scenario, which is more intuitive and gives a better visual effect. Because the manipulation model is placed close to the target virtual model, the user perceives the correspondence between the two at first sight and knows exactly where the manipulation model is, which effectively improves operation efficiency. In addition, in a scenario where a designated target virtual model is to be selected as a whole from among multiple target virtual models, each target virtual model is provided with its own manipulation model, and the user only needs to select the manipulation model of the designated target virtual model directly, which is very convenient.
In the embodiment of the present invention, the currently input gesture is detected; if the detected gesture selects a preset manipulation model, the whole of the target virtual model is determined as the operation object of the currently input gesture, where the manipulation model is a virtual model that corresponds to the target virtual model and is placed close to it in a preset manner. In this process, even when the target virtual model is in a disassembled state, the user can input a gesture that selects the corresponding manipulation model, after which the whole target virtual model becomes the operation object of the currently input gesture and can be operated and controlled as a whole, greatly improving the convenience of user operation.
Referring to FIG. 2, a second embodiment of the virtual model operation method for an MR headset according to the present invention includes:
201. Detecting the currently input gesture;
Step 201 is the same as step 101; for details, refer to the description of step 101.
202. If the detected currently input gesture selects a preset manipulation model, determining the whole of the target virtual model as the operation object of the currently input gesture;
The manipulation model is a virtual model that corresponds to the target virtual model and is placed close to it in a preset manner. Specifically, in this embodiment the manipulation model is tray-shaped and is placed close below the target virtual model. A tray-shaped manipulation model is perceived like a plate carrying an object (the target virtual model), which gives the user an intuitive and vivid visual effect.
The manipulation model could be selected with a generic selection gesture, but in some cases this is inefficient. For example, when the target virtual model has many detached components, the manipulation model sits close to those components, so a generic selection gesture can easily hit a component of the target virtual model by mistake, which significantly harms operation efficiency and the user experience.
To improve efficiency when the user selects the manipulation model, a dedicated, non-generic selection gesture can be defined for it. Since the manipulation model in this embodiment is tray-shaped, a palm-up lifting gesture is preferably defined as the gesture for selecting it; this resembles lifting a tray in daily life, so it effectively improves the user experience.
Further, determining whether the detected currently input gesture selects the preset manipulation model includes:
(1) judging whether the currently input gesture is a palm-up lifting gesture;
(2) if the currently input gesture is a palm-up lifting gesture, determining that the currently input gesture selects the preset manipulation model.
If the currently input gesture is a palm-up lifting gesture, it is determined that the gesture selects the manipulation model. Because this selection gesture is distinct from the generic selection gesture, the user can select the manipulation model directly with the palm-up lift, and since no other operable component responds to the lifting gesture, mis-selection does not occur; this effectively improves operation efficiency and the user experience. Furthermore, even when the MR virtual environment contains multiple target virtual models and their corresponding manipulation models, the dedicated selection gesture filters out most operable objects, and the user only needs to pick the desired manipulation model or models, which improves operation efficiency to a large extent.
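One plausible implementation of this check is to test whether the palm normal points upward while the hand rises, as in the sketch below; the HandPose fields and the thresholds are illustrative assumptions about what the headset's hand tracking reports.

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    palm_normal: tuple        # unit vector the palm is facing
    vertical_velocity: float  # m/s, positive when the hand is moving upward

def is_palm_up_lift(pose: HandPose,
                    up=(0.0, 1.0, 0.0),
                    normal_threshold=0.8,
                    lift_threshold=0.05) -> bool:
    """Palm facing up (dot product with 'up' near 1) while the hand rises."""
    dot = sum(a * b for a, b in zip(pose.palm_normal, up))
    return dot >= normal_threshold and pose.vertical_velocity >= lift_threshold
```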
203. Determining a target function key, the target function key being the selected one of one or more function keys that correspond to different operation actions respectively;
In this embodiment, the manipulation model is provided with one or more function keys, each corresponding to a different operation action. The operation actions include operations commonly applied to an operation object in MR, such as moving, scaling, and rotating. The positions of the function keys on the manipulation model can be chosen reasonably according to its specific shape; for the tray shape of this embodiment, for example, the function keys can be arranged evenly on the tray body.
After the whole of the target virtual model is determined as the operation object of the currently input gesture, a target function key is determined. The target function key is the selected one of the one or more function keys that correspond to different operation actions respectively, and the user can select one of the function keys as the target function key by clicking it with a designated gesture.
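A minimal sketch of step 203, assuming each function key on the tray carries an identifier bound to an operation action; the key identifiers and the set of actions are hypothetical examples.

```python
# Function keys arranged on the manipulation model, each bound to an action.
FUNCTION_KEYS = {
    "key_move": "move",
    "key_rotate": "rotate",
    "key_scale": "scale",
}

def determine_target_function_key(tapped_key_id: str) -> str:
    """Return the operation action bound to the function key the user tapped."""
    if tapped_key_id not in FUNCTION_KEYS:
        raise ValueError(f"{tapped_key_id} is not a function key")
    return FUNCTION_KEYS[tapped_key_id]
```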
204. After an input operation gesture is acquired, causing the operation object of the currently input gesture to execute the operation action corresponding to the target function key.
After the target function key is determined and an input operation gesture is acquired, the operation object of the currently input gesture executes the operation action corresponding to the target function key. An operation gesture is a preset gesture for completing the corresponding operation action, such as a drag or turn gesture. In this embodiment, the operation object of the currently input gesture is the whole of the target virtual model. For example, if the user clicks the move function key, so that it is determined as the target function key, and then makes a drag gesture in a designated direction, the whole target virtual model moves in that direction. Rotation and scaling work analogously to this moving example and are not described again here.
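The sketch below illustrates step 204 for the move, rotate, and scale actions, assuming a simple mutable model type and a drag vector supplied by the operation gesture; the mapping from drag components to rotation and scaling is an illustrative choice.

```python
from dataclasses import dataclass

@dataclass
class OperableModel:
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0
    rotation_yaw: float = 0.0   # degrees about the vertical axis

def apply_operation(model: OperableModel, action: str, drag: tuple) -> OperableModel:
    """Make the operation object execute the action bound to the target function key."""
    dx, dy, dz = drag
    if action == "move":
        x, y, z = model.position
        model.position = (x + dx, y + dy, z + dz)   # drag moves the whole model
    elif action == "scale":
        model.scale *= max(0.1, 1.0 + dy)           # e.g. an upward drag enlarges
    elif action == "rotate":
        model.rotation_yaw += dx * 90.0             # horizontal drag yaws the model
    return model
```

For example, with the move key active, apply_operation(OperableModel(), "move", (0.1, 0.0, 0.0)) shifts the whole model 0.1 m along x.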
In the embodiment of the present invention, the currently input gesture is detected; if the detected gesture selects a preset manipulation model, the whole of the target virtual model is determined as the operation object of the currently input gesture, where the manipulation model is a tray-shaped virtual model that corresponds to the target virtual model and is placed close to it in a preset manner; a target function key is determined, the target function key being the selected one of one or more function keys that correspond to different operation actions respectively; and after an input operation gesture is acquired, the operation object of the currently input gesture executes the operation action corresponding to the target function key. In this process, even when the target virtual model is in a disassembled state, the user can input a gesture that selects the corresponding manipulation model, after which the whole target virtual model becomes the operation object and can be moved, rotated, scaled, and so on as a whole, greatly improving the convenience of user operation.
Referring to FIG. 3, a third embodiment of the virtual model operation method for an MR headset according to the present invention includes:
301. Detecting the currently input gesture;
Step 301 is the same as step 101; for details, refer to the description of step 101.
302. If the detected currently input gesture selects a preset manipulation model, determining the whole of the target virtual model as the operation object of the currently input gesture;
The manipulation model is a virtual model that corresponds to the target virtual model and is placed close to it in a preset manner. Specifically, in this embodiment the manipulation model is cage-shaped and surrounds the target virtual model. A cage-shaped manipulation model is perceived like a birdcage holding an object (the target virtual model) inside, which gives the user an intuitive and vivid visual effect.
As described in the second embodiment of the virtual model operation method provided by the present invention, the manipulation model could be selected with a generic selection gesture, but in some cases this is inefficient. For example, when the target virtual model has many detached components, the manipulation model sits close to those components, so a generic selection gesture can easily hit a component of the target virtual model by mistake, which significantly harms operation efficiency and the user experience.
To improve efficiency when the user selects the manipulation model, a dedicated, non-generic selection gesture can be defined for it. Since the manipulation model in this embodiment is cage-shaped, a five-fingers-open grasping gesture is preferably defined as the gesture for selecting it; this resembles grasping a cage in daily life, so it effectively improves the user experience.
Further, determining whether the detected currently input gesture selects the preset manipulation model includes:
(1) judging whether the currently input gesture is a five-fingers-open grasping gesture;
(2) if the currently input gesture is a five-fingers-open grasping gesture, determining that the currently input gesture selects the preset manipulation model.
If the currently input gesture is a five-fingers-open grasping gesture, it is determined that the gesture selects the manipulation model. Because this selection gesture is distinct from the generic selection gesture, the user can select the manipulation model directly with the grasping gesture, and since no other operable component responds to it, mis-selection does not occur; this effectively improves operation efficiency and the user experience. Furthermore, even when the MR virtual environment contains multiple target virtual models and their corresponding manipulation models, the dedicated selection gesture filters out most operable objects, and the user only needs to pick the desired manipulation model or models, which improves operation efficiency to a large extent. It should be noted that other selection gestures, such as a single-finger hook, may also be used for the cage-shaped manipulation model; no limitation is imposed here.
303. Determining, according to the detected currently input gesture, a current operation object and/or a current operation action corresponding to the currently input gesture;
Steps 303 to 305 are not limited to being performed after step 302; they may be performed at any time. The current operation object and/or the current operation action are determined according to the detected currently input gesture. The gesture detected in step 303 may be an operation gesture that selects any operation object or any operation action, and the current operation object and/or operation action can be determined from it, for example the whole of the target virtual model, a detached component of the target virtual model, a background key in the MR environment, a move operation, or a rotate operation.
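A sketch of step 303, assuming the headset reports both a gesture label and the identifier of the object the gesture hit; the labels, the hit test, and the manipulator-to-target registry are illustrative assumptions.

```python
def interpret_gesture(gesture_name, hit_object_id, manipulator_to_target):
    """Return (current_operation_object, current_operation_action); either may be None."""
    obj = None
    if hit_object_id is not None:
        # A hit on a manipulation model resolves to the whole target model;
        # anything else (a detached component, a background key) is kept as-is.
        obj = manipulator_to_target.get(hit_object_id, hit_object_id)
    action = {"drag": "move", "turn": "rotate", "pinch": "scale"}.get(gesture_name)
    return obj, action
```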
304. Determining prompt information according to the current operation object and/or the current operation action corresponding to the currently input gesture;
After the current operation object and/or the current operation action are determined, the corresponding prompt information is determined. The prompt information relates mainly to the current operation object and/or the current operation action and may be displayed in various forms such as text, pictures, or animation. For example, if the current operation object is a virtual dog model, the prompt information may be a descriptive caption about the dog; if the current operation action is moving, the prompt information may be the word "Move"; and if the user is moving the virtual dog model, the prompt information may combine the descriptive text about the dog with the word "Move", and so on.
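A sketch of step 304, assuming the prompt text is assembled from a description table for operation objects and a label table for operation actions; both tables and the dog-model entry are hypothetical examples.

```python
OBJECT_DESCRIPTIONS = {
    "dog_model": "A virtual dog model; its parts can be detached and reassembled.",
}
ACTION_LABELS = {"move": "Move", "rotate": "Rotate", "scale": "Scale"}

def build_prompt(obj_id=None, action=None) -> str:
    """Combine the object description and/or the action label into prompt text."""
    parts = []
    if obj_id in OBJECT_DESCRIPTIONS:
        parts.append(OBJECT_DESCRIPTIONS[obj_id])
    if action in ACTION_LABELS:
        parts.append(ACTION_LABELS[action])
    return " | ".join(parts)   # e.g. the dog's description plus the word "Move"
```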
305. Displaying the prompt information in a designated area of the display interface of the MR headset.
After the prompt information is determined, it is displayed in a designated area of the display interface of the MR headset. The designated area may be any reasonable area of the display interface, for example an area to the right of the current operation object that is not occluded by interfering objects (such as other virtual models). In this way, during operation the user both receives operation prompts that help complete the intended operation and gains a better understanding of the description of the current operation object.
In the embodiment of the present invention, the currently input gesture is detected; if the detected gesture selects a preset manipulation model, the whole of the target virtual model is determined as the operation object of the currently input gesture, where the manipulation model is a cage-shaped virtual model that corresponds to the target virtual model and is placed close to it in a preset manner; a current operation object and/or a current operation action corresponding to the currently input gesture are determined according to the detected gesture; prompt information is determined from the current operation object and/or the current operation action; and the prompt information is displayed in a designated area of the display interface of the MR headset. In this process, even when the target virtual model is in a disassembled state, the user can input a gesture that selects the corresponding manipulation model, after which the whole target virtual model becomes the operation object and can be operated and controlled as a whole, greatly improving the convenience of user operation. In addition, displaying the prompt information in a designated area of the display interface effectively guides the user's operation and provides the description of the current operation object, further improving the user experience.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention in any way.
The above mainly describes the virtual model operation method for an MR headset; a virtual model operation apparatus for an MR headset is described in detail below.
Referring to FIG. 4, an embodiment of a virtual model operation apparatus for an MR headset according to the present invention includes:
a gesture detection module 401, configured to detect the currently input gesture;
a model selection module 402, configured to determine, if a preset manipulation model is selected by the detected currently input gesture, the whole of the target virtual model as the operation object of the currently input gesture, where the manipulation model is a virtual model that corresponds to the target virtual model and is placed close to it in a preset manner.
Further, the manipulation model is provided with one or more function keys corresponding to different operation actions, and the virtual model operation apparatus may further include:
a function key determination module 403, configured to determine a target function key, the target function key being the selected one of the one or more function keys that correspond to different operation actions respectively;
an action execution module 404, configured to, after an input operation gesture is acquired, cause the current operation object to execute the operation action corresponding to the target function key.
Further, the virtual model operation apparatus may further include:
an operation object determination module 405, configured to determine, according to the detected currently input gesture, a current operation object and/or a current operation action corresponding to the currently input gesture;
a prompt information determination module 406, configured to determine prompt information according to the current operation object and/or the current operation action corresponding to the currently input gesture;
a display module 407, configured to display the prompt information in a designated area of the display interface of the MR headset.
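To show how these modules might fit together, the following sketch composes a gesture detection module (401), the model selection mapping used by module 402, prompt determination (405/406), and a display module (407) into one apparatus object; every interface here is an assumption for illustration only, not a concrete implementation.

```python
class VirtualModelOperationApparatus:
    """Illustrative composition of the modules 401-407 described above."""

    def __init__(self, gesture_detector, manipulator_to_target, display):
        self.gesture_detector = gesture_detector              # module 401
        self.manipulator_to_target = manipulator_to_target    # used by module 402
        self.display = display                                # module 407

    def tick(self):
        event = self.gesture_detector.poll()          # 401: detect the current gesture
        if event is None:
            return
        hit = event.get("hit")
        # 402/405: a hit on a manipulation model selects the whole target model
        obj = self.manipulator_to_target.get(hit, hit)
        prompt = self.build_prompt(obj, event.get("action"))   # 406: prompt info
        if prompt:
            self.display.show(prompt, area="right_of_object")  # 407: designated area

    @staticmethod
    def build_prompt(obj, action):
        """Minimal prompt text from the current operation object and/or action."""
        parts = [str(p) for p in (obj, action) if p]
        return " | ".join(parts) or None
```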
An embodiment of the present invention further provides an MR headset, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of any one of the virtual model operation methods shown in FIGS. 1 to 3.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of any one of the virtual model operation methods shown in FIGS. 1 to 3 are implemented.
FIG. 5 is a schematic diagram of an MR headset according to an embodiment of the present invention. As shown in FIG. 5, the MR headset 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the embodiments of the virtual model operation method described above, such as steps 101 to 102 shown in FIG. 1; alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units in the apparatus embodiments described above, such as the functions of modules 401 to 402 shown in FIG. 4.
The computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program 52 in the MR headset 5.
The MR headset 5 may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that FIG. 5 is merely an example of the MR headset 5 and does not limit it; the MR headset 5 may include more or fewer components than shown, may combine some components, or may have different components; for example, the MR headset 5 may further include various input and output devices.
The processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the MR headset 5, such as a hard disk or memory of the MR headset 5. The memory 51 may also be an external storage device of the MR headset 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the MR headset 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the MR headset 5. The memory 51 is used to store the computer program and other programs and data required by the MR headset 5, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division of the functional units and modules described above is merely illustrative. In practical applications, the functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or as a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis; for parts that are not detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division into modules or units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flows of the methods in the above embodiments of the present invention may also be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals according to legislation and patent practice.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.