CN111352505B - Operation control method, head-mounted device, and medium


Info

Publication number: CN111352505B
Authority: CN (China)
Application number: CN202010031306.9A
Other versions: CN111352505A (Chinese)
Inventor: 赵磊
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Legal status: Active (granted)
Application CN202010031306.9A filed by Vivo Mobile Communication Co Ltd; published as CN111352505A; granted and published as CN111352505B.

Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the invention discloses an operation control method, a head-mounted device, and a medium, relates to the field of communication technologies, and can solve the problem in the prior art that controlling an object requires complicated and inconvenient operations. The method comprises the following steps: receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different objects, and N is a positive integer.

Description

Operation control method, head-mounted device, and medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an operation control method, a head-mounted device and a medium.
Background
With the continuous development of internet technology and electronic devices, the kinds and the number of programs keep increasing. When a user uses an electronic device and needs to control objects such as text, pictures, and applications, the user has to switch back and forth among different pages or different programs of the electronic device to find the objects, and perform a large number of finger operations such as tapping and swiping on the screen, which makes the process complicated and the operation inconvenient.
Disclosure of Invention
The embodiment of the invention provides an operation control method, which can solve the problem in the prior art that controlling an object requires complicated and inconvenient operations.
In order to solve the above technical problem, the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an operation control method, including:
receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, performing a first control operation, the first control operation being associated with the first input;
the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different objects, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a head-mounted device, including:
a first receiving module, used for receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
a first processing module for performing a first control operation in response to the first input, the first control operation being associated with the first input;
the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different first objects, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a head-mounted device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the method according to the first aspect.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different objects, N is a positive integer, that is, by receiving a first input of a user, a first control operation associated with the first input can be executed, switching and finding back and forth between different pages of the electronic device can be avoided, and the operation is simple and convenient.
Drawings
FIG. 1 is a flow chart of an operation control method according to an embodiment of the present invention;
FIG. 2 is one of the schematic diagrams of rotating P virtual sub-objects in an operation control method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of associating different faces of a first virtual sub-object with different objects in an operation control method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of moving a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of splitting a virtual object in an operation control method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of recombining N virtual sub-objects in an operation control method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of splitting a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of recombining second virtual sub-objects in an operation control method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of updating a display size of a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of displaying a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a moving operation on a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 12 is the second of the schematic diagrams of rotating a first virtual sub-object in an operation control method according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of displaying an identifier in an operation control method according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of setting a display position of a virtual object in an operation control method according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
FIG. 16 is a hardware schematic diagram of a head-mounted device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality of" means two or more; for example, a plurality of processing units means two or more processing units, and a plurality of elements means two or more elements.
The embodiment of the invention provides an operation control method, which comprises: receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen; and in response to the first input, performing a first control operation, the first control operation being associated with the first input; wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different objects, and N is a positive integer.
Virtual Reality (VR) technology is a computer simulation technology for creating and experiencing a virtual world. It uses a computer to generate a simulated environment and, through interactive three-dimensional dynamic vision and simulation of physical behavior based on multi-source information fusion, immerses the user in that environment.
Augmented Reality (AR) technology integrates real-world information and virtual-world information: virtual information content is superimposed on the real world through various sensing devices, so that real-world content and virtual information content can be presented simultaneously in the same picture and space, realizing natural interaction between the user and the virtual environment.
AR glasses move the imaging system away from the lens through optical imaging elements such as optical waveguides, so that the imaging system does not block the external line of sight. The optical waveguide is a high-transmittance medium, similar to an optical fiber, that guides light waves propagating inside it; the light output by the imaging system and the light reflected from the real scene are combined and transmitted to the human eye. Hand image information captured by a camera is processed and analyzed with a computer vision algorithm, so that hand tracking and recognition can be realized.
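As an illustration of the last point, hand tracking from camera frames can be prototyped with an off-the-shelf hand-landmark library. The sketch below assumes the third-party MediaPipe Hands package and an OpenCV camera capture; neither is part of this disclosure, and the landmark indices shown are those defined by that library.

```python
# Illustrative sketch only (assumes the MediaPipe and OpenCV packages),
# not the recognition algorithm used by the head-mounted device in this disclosure.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
capture = cv2.VideoCapture(0)  # assumed camera index of the head-mounted device

while capture.isOpened():
    ok, frame_bgr = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB images
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            thumb_tip = hand.landmark[4]   # thumb tip (normalized coordinates)
            index_tip = hand.landmark[8]   # index fingertip
            # downstream logic (pinch detection, object control) would use these points
capture.release()
```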
Mixed Reality (MR) technology combines virtual information with a view of the real world, or adds a virtual representation of a real-world object to a virtual environment.
Head-mounted devices in embodiments of the invention may include, but are not limited to, VR glasses, AR glasses, MR glasses, or VR helmets, AR helmets, MR helmets, and the like.
According to the related art, various head-mounted devices may sense a direction of acceleration, angular acceleration, or inclination, and display a screen corresponding to the sensed information. The head mounted device may change and display the screen based on the user's movement.
It should be noted that, in the embodiment of the present invention, the first head-mounted device and the second head-mounted device may be the same head-mounted device (for example, both are AR glasses), or may be different head-mounted devices (for example, the first head-mounted device is AR glasses, and the second head-mounted device is a mobile phone), which is not limited in this embodiment of the present invention.
The virtual screen in the embodiment of the invention is a virtual reality screen, an augmented reality screen or a mixed reality screen of the head-mounted equipment.
The virtual screen in the embodiment of the present invention may be any carrier that can be used to display content projected by a projection device when content is displayed by using AR technology. The projection device may be a projection device using AR technology, such as a head-mounted device or an AR device in the embodiment of the present invention.
When the AR technology is used to display content on the virtual screen, the projection device may project a virtual scene acquired by the projection device (or internally integrated), or a virtual scene and a real scene onto the virtual screen, so that the virtual screen may display the content, thereby showing the effect of superimposing the real scene and the virtual scene to the user.
In connection with different application scenarios of AR technology, the virtual screen may generally be any possible carrier, such as a display screen of an electronic device (e.g., a mobile phone), a lens of AR glasses, a windshield of an automobile, or a wall of a room.
The following describes an exemplary process of displaying content on a virtual screen by using AR technology, by taking the virtual screen as a display screen of an electronic device, a lens of AR glasses, and a windshield of an automobile as examples.
In one example, when the virtual screen is a display screen of an electronic device, the projection device may be the electronic device. The electronic device can capture, through its camera, the real scene in the area where it is located and display the real scene on its display screen. The electronic device can then project a virtual scene obtained by it (or internally integrated) onto its display screen, so that the virtual scene is displayed superimposed on the real scene, and the user can see the superimposed effect of the real scene and the virtual scene through the display screen of the electronic device.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the acquired (or internally integrated) virtual scene onto the lenses of the AR glasses, so that the user can see the display effect of the real scene and the virtual scene after superposition through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When the user is located in the automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the acquired (or internally integrated) virtual scene onto the windshield of the automobile, so that the user can see the display effect of the real scene and the virtual scene after superposition through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, for example, it may be a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the acquired (or internally integrated) virtual scene into the real space, so that the user can see the display effect of the real scene and the virtual scene after superposition in the real space.
The virtual object in the embodiment of the present invention is an object in virtual information, and optionally, the virtual object is content displayed on a screen or a lens of the head-mounted device, which corresponds to the surrounding environment the user is viewing, but is not present as a physical embodiment outside the display.
The virtual object may be an AR object. It should be noted that the AR object may be understood as follows: the AR device analyzes a real object to obtain feature information of the real object (e.g., type information of the real object, appearance information (e.g., structure, color, shape, etc.) of the real object, and position information of the real object in space), and constructs an AR model in the AR device according to the feature information; the AR object is the model constructed in this way.
Optionally, in this embodiment of the present invention, the target virtual object may specifically be a virtual image, a virtual pattern, a virtual character, a virtual picture, or the like.
The head-mounted device in the embodiment of the invention can be a head-mounted device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not limited in the embodiments of the present invention.
The execution body of the operation control method provided in the embodiment of the present invention may be the head-mounted device, or a functional module and/or a functional entity in the head-mounted device capable of implementing the method, which may be determined according to actual use requirements and is not limited in the embodiment of the present invention. The following takes a head-mounted device as an example to describe the operation control method provided by the embodiment of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an operation control method applied to a head-mounted device, which may include steps 101 to 102 described below.
Step 101, receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen.
Optionally, the first input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be set specifically according to an actual need, and the embodiment of the present invention is not limited. When the first input is executed, the first input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The first input may also be a first operation.
Step 102, in response to the first input, performing a first control operation, wherein the first control operation is associated with the first input;
wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different objects, and N is a positive integer.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different objects, N is a positive integer, switching and finding back and forth between different pages of the electronic equipment can be avoided, and the operation is simple and convenient.
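To make the relationships in steps 101 and 102 concrete, the following is a minimal data-model sketch, not the claimed implementation: a three-dimensional virtual object holds N virtual sub-objects, each sub-object has at least one face, each face is associated with a different object, and a first input on a sub-object dispatches the associated first control operation. All names and the input-to-operation mapping are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Face:
    name: str                # e.g. "front", "top"
    associated_object: str   # e.g. an application identifier such as "app1"

@dataclass
class VirtualSubObject:
    faces: List[Face] = field(default_factory=list)   # at least one face

@dataclass
class VirtualObject:
    sub_objects: List[VirtualSubObject] = field(default_factory=list)  # N sub-objects

# Illustrative mapping from a recognized first input to the first control operation.
CONTROL_OPERATIONS: Dict[str, Callable[[VirtualSubObject], None]] = {
    "rotate": lambda sub: print("rotate sub-object with", len(sub.faces), "faces"),
    "move":   lambda sub: print("move sub-object"),
    "split":  lambda sub: print("split sub-object"),
}

def handle_first_input(input_type: str, first_sub_object: VirtualSubObject) -> None:
    """Step 102: perform the first control operation associated with the first input."""
    CONTROL_OPERATIONS[input_type](first_sub_object)

cube = VirtualSubObject(faces=[Face("front", "app1"), Face("top", "app2")])
handle_first_input("rotate", cube)
```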
Optionally, the method further comprises: the N virtual sub-objects are separated by a separation identifier.
Optionally, the separation identifier may be a non-transparent line (that is, a line whose transparency is less than 100%), or the separation identifier may be a gap with a certain width, or the like.
Optionally, the first face of the first virtual sub-object is oriented in the same direction as the virtual screen;
Step 102 of performing a first control operation in response to the first input comprises:
in response to the first input, rotating P virtual sub-objects such that second faces of the P virtual sub-objects face the user. In this way, the user can rotate the P virtual sub-objects through the first input to the first virtual sub-object so that the second faces of the P virtual sub-objects face the user, which facilitates subsequent operations by the user on the second faces.
Wherein P is a positive integer and is less than or equal to N.
Optionally, in response to the first input, the P virtual sub-objects are rotated, only the first virtual sub-object may be rotated, the N virtual sub-objects may be rotated, or the P virtual sub-objects associated with the first virtual sub-object may be rotated, for example, the P virtual sub-objects may be of the same type as the first virtual sub-object, the P virtual sub-objects may be of the opposite type to the first virtual sub-object, and of course, any P virtual sub-objects may also be randomly rotated, which is not limited in this embodiment of the present invention. It should be noted that the P virtual sub-objects may rotate in the same manner or in different manners, for example, P is 2, and when the first virtual sub-object and the second virtual sub-object rotate, the first virtual sub-object may rotate to the left once, and the second virtual sub-object may rotate to the left once; it is also possible that the first virtual sub-object is rotated once to the left and the second virtual sub-object is rotated once to the right.
Alternatively, as shown in FIG. 2, the virtual object 201 displayed on the virtual screen is a three-dimensional virtual object, the virtual object 201 includes 9 virtual sub-objects, each virtual sub-object includes 6 faces, and different faces may be associated with different objects. Illustratively, as shown in FIG. 3, faces 301, 302, and 303 among the 6 faces of a virtual sub-object may be associated with app1, app2, and app3, respectively. A first input by a user to the first virtual sub-object 203 is received. In response to the first input, the P virtual sub-objects may be rotated: for example, when P is 1, the first virtual sub-object may be rotated; when P is 9, the 9 virtual sub-objects may be rotated such that the second faces of the P virtual sub-objects face the user; and when P is 3, the 3 virtual sub-objects of the first row may be rotated. If the first virtual sub-object is a music program and the user slides the first virtual sub-object, the virtual sub-objects of all music programs in the virtual object may be rotated, or any virtual sub-object associated with the first virtual sub-object may be rotated.
Illustratively, as shown in FIG. 12, the user may place a hand at the first virtual sub-object and pinch the index finger and thumb together within the first virtual sub-object; the border of the first virtual sub-object may remain highlighted, prompting the user that the first virtual sub-object is now controlled by this hand. While controlling the first virtual sub-object with the right hand, the user can rotate the first virtual sub-object in any direction with the left index finger, so that the first virtual sub-object rotates in the direction in which the left index finger rotates. Of course, one-handed operation is also possible: for example, the user may rotate the first virtual sub-object with a single hand, or slide left with a single hand so that the first virtual sub-object is rotated to the left.
Alternatively, rotating the first virtual sub-object may trigger a control operation on an object associated with the first virtual sub-object.
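A minimal sketch of this rotation behavior, assuming each sub-object keeps track of which of its faces currently faces the user and that the caller decides which P sub-objects are selected (only the first, all N, or those associated with the first sub-object). All names are illustrative.

```python
from typing import List

class RotatableSubObject:
    """Illustrative sub-object whose faces are ordered around one rotation axis."""
    def __init__(self, faces: List[str]):
        self.faces = faces          # e.g. ["front", "right", "back", "left"]
        self.facing_index = 0       # index of the face currently toward the user

    def rotate_to(self, target_face: str) -> None:
        # Rotate until the requested face is toward the user.
        self.facing_index = self.faces.index(target_face)

def rotate_p_sub_objects(selected: List[RotatableSubObject], second_face: str) -> None:
    """In response to the first input, rotate the P selected sub-objects so that
    their second faces face the user; P <= N is decided by the selection rule."""
    for sub in selected:
        sub.rotate_to(second_face)

row = [RotatableSubObject(["front", "right", "back", "left"]) for _ in range(3)]
rotate_p_sub_objects(row, "right")        # P = 3: rotate the first row, as in FIG. 2
print([sub.facing_index for sub in row])  # -> [1, 1, 1]
```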
Optionally, step 101 comprises: receiving a first input of a user to a third face of a first virtual sub-object of a virtual object displayed on a virtual screen;
step 102 comprises: in response to the first input, moving the first virtual sub-object from the first position of the virtual object to a first spatial position and performing the first control operation on the first object associated with the third face. In this way, through the first input of the user, the control operation can be simply and quickly performed on the object associated with the first virtual sub-object.
Alternatively, as shown in FIG. 4, a first input from a user to a third facet of the first virtual sub-object is received, and in response to the first input, the first virtual sub-object 203 can be moved from a first position of the virtual object, such as FIG. 4 (a), to a first spatial position, such as FIG. 4 (b), to perform a first control operation on the first object associated with the third facet.
Exemplarily, the first object associated with the third surface is an application a, and after the user moves the first virtual sub-object from the first position to the first spatial position through the first input, a target interface of the application a may be displayed, where the target interface may be a main interface, a shortcut interface, a function interface, and the like, and the target interface may be preset or may be set by the user, and may be specifically determined according to an actual situation.
Optionally, after the moving the first virtual sub-object from the first position of the virtual object to the first spatial position and performing the first control operation on the first object associated with the third face, the method further includes:
receiving a second input of the user to a third face of the first virtual sub-object of the virtual object displayed on the virtual screen;
in response to the second input, moving the first virtual sub-object from the first spatial position to the first position, and performing a second control operation on the first object associated with the third face. In this way, the control operation can be simply and quickly executed on the object associated with the first virtual sub-object through the second input of the user.
Alternatively, as shown in FIG. 4, a second input from the user to the third side of the first virtual sub-object is received, and in response to the second input, the first virtual sub-object 203 may be moved from the first spatial position of the virtual object, e.g., FIG. 4 (b), to the first position, e.g., FIG. 4 (a), and a second control operation may be performed on the first object associated with the third side.
Alternatively, the second control operation may be a reverse operation of the first control operation, such as an undo operation, a restore operation, or the like. Illustratively, the first control operation is opening an application and the second control operation may be closing the application.
Illustratively, the first object associated with the third surface is application a, and after the user moves the first virtual sub-object from the first position to the first spatial position through the first input, a target interface of application a may be displayed. The user may close application a by moving the first virtual sub-object from the first spatial location to the first location via a second input to the third face.
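A sketch of this move-and-trigger behavior: moving the first virtual sub-object from its first position to the first spatial position performs the first control operation (for example, opening the application associated with the third face), and moving it back performs the reverse second control operation (closing the application). The open/close functions and position type are illustrative placeholders.

```python
from dataclasses import dataclass
from typing import Tuple

Position = Tuple[float, float, float]

@dataclass
class MovableSubObject:
    position: Position
    third_face_object: str        # e.g. "application A"

def open_object(name: str) -> None:     # placeholder for the first control operation
    print(f"display target interface of {name}")

def close_object(name: str) -> None:    # placeholder for the second control operation
    print(f"close {name}")

def on_first_input(sub: MovableSubObject, first_spatial_position: Position) -> None:
    sub.position = first_spatial_position   # move from the first position into space
    open_object(sub.third_face_object)      # first control operation

def on_second_input(sub: MovableSubObject, first_position: Position) -> None:
    sub.position = first_position           # move back to the first position
    close_object(sub.third_face_object)     # second (reverse) control operation

sub = MovableSubObject(position=(0.0, 0.0, 0.0), third_face_object="application A")
on_first_input(sub, (0.5, 0.2, -0.3))   # moved out: opens application A
on_second_input(sub, (0.0, 0.0, 0.0))   # moved back: closes application A
```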
Optionally, step 102 comprises: step a, splitting the virtual object into the N virtual sub-objects. Therefore, the user can split the virtual object, and the user can conveniently operate the split virtual sub-object.
Optionally, the virtual object may be split into N virtual sub-objects.
Illustratively, as shown in fig. 5 (a), the virtual object 201 includes 9 virtual sub-objects, and the user may split the virtual object 201 into 9 virtual sub-objects by a first input, and as shown in fig. 5 (b), the user may move the position of the split virtual sub-object to an arbitrary position.
Optionally, splitting the virtual object into N virtual sub-objects may trigger a control operation on the associated object.
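A minimal sketch of step a, assuming each sub-object records its display position: splitting detaches the N sub-objects so that each can be moved independently, with a small gap added only so the separated parts are visible, as in FIG. 5 (b). The gap value and the detached flag are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

Position = Tuple[float, float, float]

@dataclass
class SubObject:
    position: Position
    detached: bool = False     # once split, the sub-object can be moved freely

@dataclass
class CompositeVirtualObject:
    sub_objects: List[SubObject]

def split_virtual_object(obj: CompositeVirtualObject, gap: float = 0.05) -> List[SubObject]:
    """Step a: split the virtual object into its N sub-objects.
    A small gap (illustrative) is added so the separated parts are visible."""
    for i, sub in enumerate(obj.sub_objects):
        x, y, z = sub.position
        sub.position = (x + i * gap, y, z)   # push parts slightly apart (illustration)
        sub.detached = True
    return obj.sub_objects

obj = CompositeVirtualObject([SubObject((0.0, 0.0, 0.0)) for _ in range(9)])
parts = split_virtual_object(obj)
print(len(parts), parts[0].detached)    # -> 9 True
```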
Optionally, step a is followed by: step b, receiving a third input of the user to Q virtual sub-objects in the N virtual sub-objects;
optionally, the third input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the third input is executed, the third input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multipoint input, such as sliding input, click input, etc. with two fingers simultaneously. The third input may also be a third operation.
Step c, responding to the third input, updating the display positions of W virtual sub-objects in the N virtual sub-objects, and recombining the N virtual sub-objects after the display positions of the W virtual sub-objects are updated into at least one second virtual sub-object;
wherein Q and W are positive integers, Q is less than or equal to N, and W is less than or equal to N. In this way, the user may update the display positions of some or all of the N virtual sub-objects and recombine the updated display positions into at least one second virtual sub-object.
Optionally, recombining the N virtual sub-objects after the display positions of the W virtual sub-objects are updated into at least one second virtual sub-object may trigger a control operation on an associated object.
Alternatively, part or all of the W virtual sub-objects may be reorganized into at least one second virtual sub-object.
Optionally, the recombined second virtual object can no longer be split.
For example, the user may update the display positions of W virtual sub-objects among the N virtual sub-objects shown in FIG. 5 (b); after the user updates the display positions of 6 of the 9 virtual sub-objects, one second virtual sub-object may be obtained as shown in FIG. 6 (a), or 2 second virtual sub-objects may be obtained as shown in FIG. 6 (b).
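A sketch of steps b and c, assuming that after the user updates the display positions of W of the N sub-objects, sub-objects whose positions end up close together are recombined into one or more second virtual sub-objects (cf. FIG. 6 (a) and (b)). The proximity threshold and the grouping rule are illustrative assumptions.

```python
from typing import List, Tuple

Position = Tuple[float, float, float]

def recombine(positions: List[Position], threshold: float = 0.1) -> List[List[int]]:
    """Group sub-objects whose updated positions are close to a group's first member;
    each group becomes one second virtual sub-object."""
    groups: List[List[int]] = []
    for i, p in enumerate(positions):
        placed = False
        for group in groups:
            q = positions[group[0]]
            if all(abs(a - b) <= threshold for a, b in zip(p, q)):
                group.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups

# Example: indices grouped into second virtual sub-objects after the third input.
print(recombine([(0, 0, 0), (0.05, 0, 0), (1, 1, 1)]))  # -> [[0, 1], [2]]
```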
Optionally, in a case where the first virtual sub-object includes S faces, step 102 includes:
splitting the first virtual sub-object into S second virtual sub-objects, wherein each second virtual sub-object is used for indicating a first object associated with one face of the first virtual sub-object, and S is a positive integer. Therefore, the first virtual sub-object can be split into S second virtual sub-objects, which facilitates subsequent operations on the second virtual sub-objects by the user.
Optionally, the second virtual sub-object may be a two-dimensional virtual object or a three-dimensional virtual object, which may be determined according to an actual situation, and this is not limited in this embodiment of the present invention.
For example, as shown in FIG. 7 (a), the user may select the first virtual sub-object to be split and pinch it with the index finger and thumb of each hand; when the index fingers and thumbs of both hands are then opened outwards, the first virtual sub-object may be split into S second virtual sub-objects, and the resulting second virtual sub-objects are two-dimensional objects as shown in FIG. 7 (b). Each second virtual sub-object is used for indicating the first object associated with the corresponding face of the first virtual sub-object.
Optionally, splitting the first virtual sub-object into the S second virtual sub-objects may trigger a control operation on an associated object.
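A minimal sketch of this split, assuming the face-to-object associations are available as a mapping: the first virtual sub-object with S faces yields S second virtual sub-objects, each a two-dimensional panel indicating the first object associated with one face, as in FIG. 7 (b). All names are illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FacePanel:
    """Two-dimensional second virtual sub-object indicating one associated object."""
    face_name: str
    associated_object: str

def split_into_face_panels(face_to_object: Dict[str, str]) -> List[FacePanel]:
    """Split a first virtual sub-object with S faces into S second virtual sub-objects."""
    return [FacePanel(face, obj) for face, obj in face_to_object.items()]

panels = split_into_face_panels({"front": "app1", "top": "app2", "right": "app3"})
for panel in panels:
    print(panel.face_name, "->", panel.associated_object)
```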
Optionally, after splitting the first virtual sub-object into the S second virtual sub-objects, the method further includes:
receiving fifth input of the user to R second virtual sub-objects in the S second virtual sub-objects;
in response to the fifth input, updating display positions of G second virtual sub-objects in the S second virtual sub-objects, and recombining the S virtual sub-objects with the updated display positions of the G virtual sub-objects into a third virtual object;
wherein S, R and G are positive integers, R is less than or equal to S, and G is less than or equal to S.
Optionally, the S virtual sub-objects whose display positions are updated are recombined into a third virtual object, which may trigger a control operation on the associated object.
Optionally, the fifth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be set specifically according to an actual need, and the embodiment of the present invention is not limited. When the fifth input is executed, the fifth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The fifth input may also be a fifth operation.
For example, as shown in FIG. 8 (a), after the first virtual sub-object is split into 6 second virtual sub-objects, the user may update the positions of G of the 6 second virtual sub-objects; as shown in FIG. 8 (b), the display positions of the second virtual sub-objects a and b may be exchanged, and the updated 6 second virtual sub-objects may be recombined into a third virtual object, which is a three-dimensional object.
Optionally, step 102 comprises: updating the display size of the first virtual sub-object.
Illustratively, as shown in FIG. 9 (a), the user may place a hand at the first virtual sub-object and pinch the index finger and thumb together within the first virtual sub-object; the border of the first virtual sub-object may remain highlighted, prompting the user that the first virtual sub-object is now controlled by this hand. While controlling the first virtual sub-object with one hand, the user can control the size of the first virtual sub-object with the index finger and thumb of the other hand: as shown in FIG. 9 (b), the first virtual sub-object is enlarged as the two fingers are spread outward, and shrinks when the two fingers pinch back together. Control of the first virtual sub-object may be released once it has been scaled to the size desired by the user. Of course, the user may also update the display size of the first virtual sub-object with one hand, for example with the index finger and thumb of one hand.
Optionally, updating the display size of the first virtual sub-object may trigger a control operation on an object associated with the first virtual sub-object. In this way, the display state of the first virtual sub-object can be updated, and a control operation can be performed conveniently and quickly on the object associated with the first virtual sub-object.
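A sketch of the two-finger scaling of FIG. 9, assuming the distance between the index fingertip and thumb tip is mapped linearly to a scale factor relative to the distance at the start of the gesture; the linear mapping and the coordinate convention are illustrative assumptions.

```python
import math
from typing import Tuple

Point = Tuple[float, float, float]

def distance(a: Point, b: Point) -> float:
    return math.dist(a, b)

class PinchScaleController:
    def __init__(self, initial_size: float, index_tip: Point, thumb_tip: Point):
        self.initial_size = initial_size
        self.initial_pinch = distance(index_tip, thumb_tip)

    def current_size(self, index_tip: Point, thumb_tip: Point) -> float:
        """Spreading the fingers enlarges the sub-object; pinching shrinks it."""
        factor = distance(index_tip, thumb_tip) / max(self.initial_pinch, 1e-6)
        return self.initial_size * factor

controller = PinchScaleController(1.0, (0, 0, 0), (0.04, 0, 0))
print(controller.current_size((0, 0, 0), (0.08, 0, 0)))  # fingers spread -> 2.0
```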
Optionally, the first virtual sub-object is located at a second spatial position, and the current display mode is a first display mode;
step 102 comprises: updating the first virtual sub-object from the second spatial position to a third spatial position, and switching the first display mode to a second display mode. In this way, the mode switching can be performed conveniently and quickly.
Alternatively, the user may select the first virtual sub-object, enlarge it, and pull it toward himself or herself until entering the space of the first virtual sub-object, i.e., the first virtual sub-object is updated from the second spatial position to the third spatial position. At this time, the user enters the virtual space in the first virtual sub-object and may operate within the first virtual sub-object space, that is, the first display mode is switched to the second display mode, thereby implementing fast switching of display modes.
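A sketch of this display-mode switch, assuming the trigger is simply that the user's viewpoint falls inside the bounds of the first virtual sub-object once it has been enlarged and pulled to the third spatial position; the axis-aligned bounding-box test and the mode names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float, float]

@dataclass
class BoundingBox:
    minimum: Point
    maximum: Point

    def contains(self, p: Point) -> bool:
        return all(lo <= v <= hi for lo, v, hi in zip(self.minimum, p, self.maximum))

def update_display_mode(viewpoint: Point, sub_object_bounds: BoundingBox, mode: str) -> str:
    """Switch to the second display mode once the viewpoint enters the sub-object space."""
    if mode == "first" and sub_object_bounds.contains(viewpoint):
        return "second"
    return mode

bounds = BoundingBox((-1, -1, -1), (1, 1, 1))   # sub-object at its third spatial position
print(update_display_mode((0, 0, 0.5), bounds, "first"))   # -> "second"
```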
Optionally, before receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen, the method further includes: receiving a fourth input from the user;
in response to the fourth input, displaying a first virtual sub-object of a virtual object on a virtual screen, the first virtual sub-object being created based on the fourth input. In this way, the user can create the first virtual sub-object conveniently and quickly.
Optionally, the fourth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fourth input may also be a fourth operation. When the fourth input is executed, the fourth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multipoint input, such as sliding input, click input, etc. with two fingers simultaneously.
Illustratively, as shown in FIG. 10 (a), the user can bring two fingers, such as the index finger and the thumb, close together and then pull them apart; the device can recognize the positions of the two fingertips and create a three-dimensional first virtual sub-object between the two fingers, centered on the point where the two fingertips initially meet, as shown in FIG. 10 (b). The body diagonal of the created first virtual sub-object may be the distance between the two fingers, such as the distance between the index finger and the thumb. Optionally, since the finger positions of the user are of limited accuracy, the electronic device may optimize and calibrate the created virtual sub-object.
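A minimal sketch of this creation gesture: since the body diagonal of a cube equals the edge length multiplied by sqrt(3), the edge length of the created first virtual sub-object is the fingertip distance divided by sqrt(3). The sketch centers the cube at the midpoint of the two fingertips, which is a simplification of the rule described above, and omits the optional calibration step.

```python
import math
from typing import Tuple

Point = Tuple[float, float, float]

def create_cube(index_tip: Point, thumb_tip: Point) -> Tuple[Point, float]:
    """Return (center, edge_length) of the created first virtual sub-object.
    Body diagonal d = edge * sqrt(3), so edge = d / sqrt(3)."""
    d = math.dist(index_tip, thumb_tip)
    center = tuple((a + b) / 2 for a, b in zip(index_tip, thumb_tip))
    edge = d / math.sqrt(3)
    return center, edge

center, edge = create_cube((0.0, 0.0, 0.0), (0.1, 0.1, 0.1))
print(center, round(edge, 3))   # fingertip distance ~0.173 m -> edge 0.1 m
```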
Optionally, after the first virtual sub-object of the virtual object is displayed on the virtual screen, a sixth input of the user to the first face of the first virtual sub-object is further received, and in response to the sixth input, the association relationship between the first face and the object is established.
Optionally, the sixth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The sixth input may also be a sixth operation. When the sixth input is executed, the sixth input may be a single-point input, such as a sliding input or a click input using a single finger; or a multi-point input, such as a sliding input and a click input made with two fingers simultaneously.
Alternatively, the user may move the first virtual sub-object. As shown in FIG. 11 (a), the user may place a hand at the first virtual sub-object and pinch the index finger and thumb together within the first virtual sub-object; the border of the first virtual sub-object may remain highlighted, prompting the user that the first virtual sub-object is now controlled by this hand. The first virtual sub-object then moves along with the user's hand, as shown in FIG. 11 (b); the user may move the hand to the target position of the first virtual sub-object and release the two fingers, so that the controlled first virtual sub-object is moved to the target position.
Alternatively, moving the first virtual sub-object may be moving the first virtual sub-object itself, or moving an object associated with the first virtual sub-object.
Alternatively, moving the first virtual sub-object may trigger a control operation on an object associated with the first virtual sub-object. In this way, the display state of the first virtual sub-object can be updated, and a control operation can be performed conveniently and quickly on the object associated with the first virtual sub-object.
In this way, the device may create the first virtual sub-object according to the input of the user, and the device may also control the first virtual sub-object according to the input of the user, such as moving, rotating, and the like. The display state of the virtual object can be updated through different control modes of the first virtual sub-object, so that different control operations on the object associated with the first virtual sub-object can be triggered, and the operation on the object associated with the first virtual sub-object can be realized through one-step operation.
Optionally, in a second display area of the virtual object, displaying M identifiers, each identifier indicating a different second object;
wherein the M identifiers comprise a first identifier indicating the target object, and M is a positive integer.
Alternatively, the positional relationship between the second display region and the first display region is not particularly limited.
Optionally, different objects may be included in the second display area, the objects may include, but are not limited to, text, audio-video files, applications, images, etc., and the objects are displayed in the second display area by corresponding identifiers, which may be icons, symbols, etc., each identifier indicating a different object.
Illustratively, the identification may indicate any information of any application.
Optionally, step 101 comprises: and receiving a first sub-input of the first identifier and a second sub-input of the first virtual sub-object from a user, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first virtual sub-object.
Optionally, the first sub-input or the second sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the first sub-input or the second sub-input is executed, the first sub-input or the second sub-input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The first sub-input may also be a first sub-operation and the second sub-input may also be a second sub-operation.
Illustratively, the user can click the first identifier to select the first identifier, and then click the first virtual sub-object to select the first virtual sub-object; the user may also drag the first identifier to the first virtual sub-object.
Illustratively, the user points a finger at an area of the virtual screen where the first identifier is located, and drags the first identifier to the first virtual child object.
Optionally, the pointing of the finger by the user to the area of the first identifier on the virtual screen may include, but is not limited to, placing the finger by the user on the area of the first identifier on the virtual screen, or pointing the finger of the user to the area of the first identifier on the virtual screen, that is, the finger of the user is not in the area of the first identifier, but points to the area of the first identifier, and has a certain distance from the area of the first identifier.
Optionally, dragging the first identifier to the first virtual sub-object may include, but is not limited to, a user dragging the first identifier to the first virtual sub-object on the virtual screen, or a user dragging the first identifier to an area corresponding to the first virtual sub-object on the virtual screen, that is, when the first identifier is dragged to the area corresponding to the first virtual sub-object, a projection of the first identifier in a plane where the virtual object is located within the first virtual sub-object. For example dragging the first identification to the area directly in front of the first virtual sub-object, which refers to the direction closer to the user.
Optionally, step 101 comprises: receiving a third sub-input of the first identifier by the user, wherein the third sub-input is used for controlling the first identifier to move to the first virtual sub-object along with the hand of the user.
Optionally, the third sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the third sub-input is executed, the third sub-input may be a single-point input, such as a sliding input, a click input, or the like performed by using a single finger; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The third sub-input may also be a third sub-operation.
For example, the third sub-input may be a gesture of the user, for example, the hand of the user may point to the area where the first identifier is located, and a gesture of taking the first identifier is made, then the first identifier follows the movement of the hand of the user, where the hand of the user moves, where the first identifier moves, the hand of the user moves to the first virtual sub-object, and the first identifier moves to the first virtual sub-object.
Optionally, the moving of the hand of the user to the first virtual sub-object may include, but is not limited to, moving the hand of the user to the first virtual sub-object on the virtual screen, or moving the hand of the user to an area corresponding to the first virtual sub-object on the virtual screen, that is, when the hand of the user moves to the area corresponding to the first virtual sub-object, a projection of the hand of the user in a plane where the virtual object is located within the first virtual sub-object. For example, the user's hand moves to an area directly in front of the first virtual sub-object, which refers to a direction closer to the user.
Optionally, the moving of the first identifier to the first virtual sub-object may include, but is not limited to, moving the first identifier to the first virtual sub-object on the virtual screen, or moving the first identifier to an area corresponding to the first virtual sub-object on the virtual screen, that is, when the first identifier is moved to the area corresponding to the first virtual sub-object, a projection of the first identifier in a plane where the virtual object is located in the first virtual sub-object. For example, the first identifier moves to an area directly in front of the first virtual sub-object, which refers to a direction closer to the user.
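The three paragraphs above reduce to the same test: project the first identifier (or the user's hand) onto the plane of the virtual object and check whether the projection falls within the footprint of the first virtual sub-object. A minimal sketch follows, assuming the plane of the virtual object is the x-y plane and the footprint is an axis-aligned rectangle; both are simplifying assumptions.

```python
from typing import Tuple

Point = Tuple[float, float, float]

def projects_onto_sub_object(point: Point,
                             footprint_min: Tuple[float, float],
                             footprint_max: Tuple[float, float]) -> bool:
    """Drop the depth coordinate (projection onto the plane of the virtual object,
    assumed here to be the x-y plane) and test the 2D footprint."""
    x, y, _z = point
    return (footprint_min[0] <= x <= footprint_max[0]
            and footprint_min[1] <= y <= footprint_max[1])

# The first identifier hovering directly in front of the first virtual sub-object
# (closer to the user along z) still counts as "moved to" the sub-object.
print(projects_onto_sub_object((0.2, 0.3, -0.5), (0.0, 0.0), (0.5, 0.5)))  # True
```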
Optionally, the head mounted device comprises a camera;
before step 101 receives a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen, the method further includes:
acquiring an image acquired by a camera;
and displaying a virtual object on a virtual screen under the condition that the target real object is included in the image.
Optionally, the camera captures images in a real environment, the real environment being within a viewing angle range of the user.
Optionally, in a case that the image acquired by the camera in real time does not include the target real object, that is, the user's sight line is away from the target real object, the display of the virtual object is cancelled on the virtual screen, and when the user's sight line returns to the target real object again, the virtual object is displayed on the virtual screen.
Optionally, the first target area is the same as the area where the target object is located, or the first target area is a part of the area where the target object is located, or the first target area includes the area where the target object is located, or the first target area is adjacent to the area where the target object is located, for example, the first target area is located in front of, above, or the like the area where the target object is located.
Optionally, the case that the image includes the target real object includes: the target real object appears in the image, or the image includes the target real object and the environment around the target real object is the target environment. For example, the target real object is a sofa, and the target environment is that a tea table is arranged 0.5 m in front of the sofa, a television is arranged 1 m in front of the tea table, and a water dispenser is arranged 0.3 m to the left of the sofa.
In the case that the image of the real environment captured by the camera includes the target real object, the virtual object is displayed in the first target area of the virtual screen. Illustratively, if the image of the real environment captured by the camera includes a table, the virtual object is displayed in the first target area of the virtual screen, where the first target area is located on the upper surface of the table, or directly above the upper surface of the table.
In the embodiment of the invention, the image captured by the camera is acquired, and the virtual object is displayed in the first target area of the virtual screen in the case that the image includes the target real object, so that the virtual object can be displayed when the viewing angle of the electronic device returns to the target area.
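A sketch of this camera-driven display logic: each captured frame is checked for the target real object, the virtual object is shown in the first target area while the object is in view, and the display is cancelled when the user's line of sight leaves it. The detector and renderer below are placeholders, not a claimed recognition or rendering method.

```python
from typing import Optional, Tuple

Region = Tuple[int, int, int, int]   # x, y, width, height in image coordinates

def detect_target_real_object(frame) -> Optional[Region]:
    """Placeholder detector: return the region of the target real object (e.g. a table)
    if it appears in the camera image, otherwise None."""
    return frame.get("table")        # illustrative: frame is a dict of detections here

class Renderer:
    """Placeholder for the virtual-screen renderer of the head-mounted device."""
    def show_virtual_object(self, at: Region) -> None:
        print("display virtual object in first target area", at)

    def hide_virtual_object(self) -> None:
        print("cancel display of virtual object")

def update_virtual_object(frame, renderer: Renderer) -> None:
    region = detect_target_real_object(frame)
    if region is not None:
        renderer.show_virtual_object(at=region)   # target real object in view: show
    else:
        renderer.hide_virtual_object()            # line of sight has left: hide

update_virtual_object({"table": (100, 200, 50, 30)}, Renderer())
update_virtual_object({}, Renderer())
```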
Optionally, the target area of the virtual screen comprises a second identifier;
optionally, the second identifier is used to indicate the virtual object.
Before the acquiring of the image captured by the camera, the method further includes:
receiving an eighth input of the user to the second identifier and a third spatial position;
optionally, the eighth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The ninth input may also be a ninth operation. When the eighth input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
And responding to the eighth input, and displaying a virtual object in an area corresponding to the third spatial position on the virtual screen, wherein the third spatial position is the position of the target real object.
Optionally, the third target region is the same as the third spatial region, or the third target region is a part of the third spatial region, or the third target region includes the third spatial region, or the third target region is adjacent to the third spatial region, for example, the third target region is located in front of, above, or the like the third spatial region.
Illustratively, as shown in FIG. 14 (a), the second identifier 1401 is located in a second target area 1402 of the virtual screen, and an eighth input of the user to the second identifier 1401 and a third spatial area 1403 is received, for example, the second identifier is dragged to the third spatial area, and the target real object is a wall; then, as shown in FIG. 14 (b), a virtual object 1301 is displayed in a third target area, which is a part of the third spatial area 1403, and the user may continue to resize the virtual object with a finger.
The head-mounted device stores information of the virtual object whose area has been set and whose size has been adjusted; for example, the spatial coordinates of the virtual object and its surrounding environment information are stored: the virtual object is on one wall, the right half of the wall includes a door, and the left side of the wall is vertically connected with another wall that includes a window. Image information of the surroundings of the target real object can also be stored, and the virtual object is displayed when the viewing angle of the user returns to the area where the target real object is located. Further illustratively, when the viewing angle of the user falls in the third spatial area, the camera captures an image of the real environment, the captured image is compared with the previously stored image of the surroundings of the target real object, and the virtual object is displayed in the third target area in the case that the target real object, the position information of the surrounding environment, and the image information all match.
Optionally, the second identifier is always displayed on a virtual screen of the head-mounted device, that is, the user may see the second identifier at any time, the user may drag the second identifier to any one or more spatial regions, the head-mounted device may record spatial coordinates of the virtual object, and the user may see the virtual object in the plurality of spatial regions.
In the embodiment of the present invention, by receiving a fifth input of the user to the second identifier and the third spatial region, and displaying, in response to the fifth input, the virtual object in the third target region corresponding to the third spatial region on the virtual screen, the user can place the virtual object in a plurality of spatial regions through a simple input and can see the virtual object in those spatial regions.
As shown in fig. 15, an embodiment of the present invention provides a head-mounted device 120, where the head-mounted device 120 includes:
a first receiving module 121, configured to receive a first input of a first virtual sub-object of a virtual object displayed on a virtual screen from a user;
a first processing module 122, configured to perform a first control operation in response to the first input, where the first control operation is associated with the first input;
wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different first objects, and N is a positive integer.
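For illustration only, a minimal data-model sketch of the structure described above is given below; all class and field names are assumptions, since this embodiment does not prescribe an implementation.

```kotlin
// Illustrative data model only; the embodiment does not prescribe any implementation.
data class Vector3(val x: Float, val y: Float, val z: Float)

// A face of a virtual sub-object. Each face is associated with a different first object,
// e.g. an application icon, a contact, or a device function.
data class Face(val index: Int, val associatedObjectId: String)

// One virtual sub-object: a small 3D block exposing at least one face.
data class VirtualSubObject(
    val id: Int,
    val faces: List<Face>,
    var position: Vector3 = Vector3(0f, 0f, 0f)
)

// The composite three-dimensional virtual object made up of N virtual sub-objects.
data class VirtualObject(val subObjects: MutableList<VirtualSubObject>) {
    val n: Int get() = subObjects.size
}
```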
Optionally, the first side of the first virtual sub-object is oriented in the same direction as the virtual screen; the first processing module 122, specifically configured to rotate the P virtual sub-objects in response to the first input, such that second faces of the P virtual sub-objects face the user; wherein P is a positive integer and is less than or equal to N.
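For illustration only, the rotation described above may be pictured with the following sketch, which assumes each sub-object stores a single yaw angle and that adjacent faces are 90° apart; a real rendering engine would instead animate a quaternion.

```kotlin
// Hypothetical sketch: rotate the first P sub-objects so that their second face
// (index 1) points toward the user.
data class RotatableSubObject(val id: Int, var yawDegrees: Float = 0f)

fun rotateSecondFaceToUser(subObjects: List<RotatableSubObject>, p: Int) {
    require(p in 1..subObjects.size) { "P must be a positive integer not greater than N" }
    subObjects.take(p).forEach { it.yawDegrees = 90f }  // second face sits 90° from the first
}
```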
Optionally, the first receiving module 121 is specifically configured to receive a first input of a user to a third surface of a first virtual sub-object of a virtual object displayed on a virtual screen; the first processing module is specifically configured to, in response to the first input, move the first virtual sub-object from a first position of the virtual object to a first spatial position, and perform the first control operation on the first object associated with the third face.
Optionally, the head-mounted device further comprises: the second receiving module is used for receiving a second input of a user to a third surface of the first virtual sub-object of the virtual object displayed on the virtual screen; a second processing module, configured to move the first virtual sub-object from the first spatial position to the first position in response to the second input, and perform a second control operation on the first object associated with the third facet.
Optionally, the first processing module is specifically configured to split the virtual object into the N virtual sub-objects.
Optionally, the head-mounted device further comprises: a third receiving module, configured to receive a third input of the user to Q virtual sub-objects in the N virtual sub-objects; a third processing module, configured to update display positions of W virtual sub-objects in the N virtual sub-objects in response to the third input, and recombine the N virtual sub-objects with the updated display positions of the W virtual sub-objects into at least one second virtual object; wherein Q and W are positive integers, Q is less than or equal to N, and W is less than or equal to N.
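For illustration only, the reordering and recombination step may be sketched as follows; the slot-index representation of display positions is an assumption.

```kotlin
// Hypothetical sketch: after the user drags W sub-objects to new display slots, rebuild
// a second virtual object from all N sub-objects in their updated order.
// movedToSlot maps a moved sub-object id to its new slot index; slot indices are assumed distinct.
fun recombine(subObjectIds: List<Int>, movedToSlot: Map<Int, Int>): List<Int> {
    val slots = arrayOfNulls<Int>(subObjectIds.size)
    movedToSlot.forEach { (id, slot) -> slots[slot] = id }                 // place the moved sub-objects
    val remaining = subObjectIds.filterNot { it in movedToSlot.keys }.iterator()
    return slots.map { it ?: remaining.next() }                            // keep the rest in original order
}

// Example: recombine(listOf(1, 2, 3, 4), mapOf(4 to 0)) == listOf(4, 1, 2, 3)
```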
Optionally, the first processing module is specifically configured to, when the first virtual sub-object includes S surfaces, split the first virtual sub-object into S second virtual sub-objects, where each second virtual sub-object is used to indicate a first object associated with one surface of the first virtual sub-object, and S is a positive integer.
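For illustration only, the per-face split may be sketched as follows; the identifier scheme used for the resulting second virtual sub-objects is an assumption.

```kotlin
// Hypothetical sketch: a sub-object with S faces is split into S single-face sub-objects,
// each indicating the first object associated with one of the original faces.
data class FaceInfo(val index: Int, val associatedObjectId: String)
data class SubObject(val id: Int, val faces: List<FaceInfo>)

fun splitByFace(subObject: SubObject): List<SubObject> =
    subObject.faces.mapIndexed { i, face ->
        SubObject(id = subObject.id * 100 + i, faces = listOf(face))  // made-up id scheme
    }
```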
Optionally, the first processing module is specifically configured to update a display size of the first virtual sub-object.
Optionally, the first virtual sub-object is located at a second spatial location; the current mode is the first display mode; the first processing module is specifically configured to update the first virtual sub-object from the second spatial position to a third spatial position, and switch the first display mode to a second display mode.
Optionally, the head-mounted device further comprises: the fourth receiving module is used for receiving a fourth input of the user; a fourth processing module, configured to display a first virtual sub-object of a virtual object on a virtual screen in response to the fourth input, the first virtual sub-object being created based on the fourth input.
Optionally, the head mounted device further comprises: the first display module is used for displaying M identifiers in a second display area of the target interface, and each identifier indicates a different second object; wherein the M identifiers comprise a first identifier, the first identifier indicates the target object, and M is a positive integer.
Optionally, the first receiving module is specifically configured to receive a first sub-input of the first identifier and a second sub-input of the first virtual sub-object by the user, where the first sub-input is used to select the first identifier, and the second sub-input is used to select the first virtual sub-object.
Optionally, the first receiving module is specifically configured to receive a third sub-input of the first identifier by the user, where the third sub-input is used to control the first identifier to move to the first virtual sub-object along with the hand of the user.
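For illustration only, the hand-following movement of the first identifier may be sketched as follows; the drop radius and coordinate representation are assumptions.

```kotlin
import kotlin.math.sqrt

// Hypothetical sketch: each frame the first identifier is moved to the tracked hand position,
// and it is treated as dropped onto the first virtual sub-object once the distance falls
// below a radius (rendering omitted).
data class Point3(val x: Float, val y: Float, val z: Float)

fun distance(a: Point3, b: Point3): Float {
    val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
    return sqrt(dx * dx + dy * dy + dz * dz)
}

// Returns true when the identifier, following the hand, has reached the sub-object.
fun identifierReachedSubObject(hand: Point3, subObjectCenter: Point3, dropRadius: Float = 0.05f): Boolean =
    distance(hand, subObjectCenter) <= dropRadius
```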
Optionally, the head-mounted device further comprises: the acquisition module is used for acquiring the image acquired by the camera; and the second display module is used for displaying the virtual object on the virtual screen under the condition that the image comprises the target object.
Optionally, the target area of the virtual screen includes a second identifier; the head-mounted device further comprises: a fifth receiving module, configured to receive a fifth input of the second identifier and the target spatial location from the user; and the third display module is used for responding to the fifth input, displaying a virtual object in an area corresponding to the target space position on the virtual screen, wherein the target space position is the position of the target real object.
Optionally, the N virtual sub-objects are separated by a separation identifier.
The head-mounted device provided by the embodiment of the present invention can implement each process implemented by the head-mounted device in the above method embodiments, and is not described herein again to avoid repetition.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different objects, and N is a positive integer. Switching back and forth between different pages of the electronic equipment can be avoided, and the operation is simple and convenient.
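For illustration only, the association between the first input and the first control operation may be pictured as a simple dispatch; the input kinds and the operation descriptions below are assumptions, not the embodiment's terminology.

```kotlin
// Hypothetical dispatch sketch: different kinds of first input trigger different
// first control operations on the virtual object.
enum class InputKind { CLICK_FACE, DRAG_OUT, PINCH, DOUBLE_CLICK }

fun firstControlOperationFor(input: InputKind): String = when (input) {
    InputKind.CLICK_FACE   -> "operate the first object associated with the touched face"
    InputKind.DRAG_OUT     -> "move the sub-object to a spatial position and perform the control operation"
    InputKind.PINCH        -> "update the display size of the first virtual sub-object"
    InputKind.DOUBLE_CLICK -> "split the virtual object into its N virtual sub-objects"
}
```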
Fig. 16 is a schematic diagram of the hardware structure of a head-mounted device for implementing various embodiments of the present invention. As shown in fig. 16, the head-mounted device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the structure of the head-mounted device shown in fig. 16 does not constitute a limitation of the head-mounted device, and the head-mounted device may include more or fewer components than those shown, or combine some components, or have a different arrangement of components. In embodiments of the present invention, the head-mounted device includes, but is not limited to, VR glasses, AR glasses, MR glasses, a VR helmet, an AR helmet, an MR helmet, and the like.
Wherein the user input unit 707 is configured to receive a first input by a user to a first virtual sub-object of a virtual object displayed on the virtual screen; a processor 710 for performing a first control operation in response to the first input, the first control operation being associated with the first input; wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different objects, and N is a positive integer.
The embodiment of the present invention provides a head-mounted device, which can receive a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen, and perform, in response to the first input, a first control operation associated with the first input, where the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different objects, and N is a positive integer. This avoids switching back and forth between different pages of the electronic device to find objects, and the operation is simple and convenient.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, the radio frequency unit 701 receives downlink data from a base station and sends the downlink data to the processor 710 for processing, and sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The head-mounted device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the head-mounted device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capture device (for example, a camera) in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.
The head-mounted device 700 also includes at least one sensor 705, such as a gesture sensor, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 7061 and/or the backlight when the head-mounted device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the attitude of the head-mounted device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer and tapping). The sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. The display unit 706 may also include a hologram device and a projector (not shown in the figure); the hologram device may form a three-dimensional (3D) image (hologram) in the air by using light interference, and the projector may display an image by projecting light onto a screen. The screen may be located inside or outside the head-mounted device.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the head-mounted device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands sent by the processor 710. In addition, the touch panel 7071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 7071, the user input unit 707 may also include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 16 as two separate components to implement the input and output functions of the head-mounted device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the head-mounted device, which is not limited herein.
The interface unit 708 is an interface through which an external device is connected to the head-mounted apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the headset 700 or may be used to transmit data between the headset 700 and an external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and a phonebook) created according to the use of the head-mounted device, and the like. In addition, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is the control center of the head-mounted device; it connects the various parts of the whole head-mounted device by using various interfaces and lines, and performs the various functions of the head-mounted device and processes data by running or executing the software programs and/or modules stored in the memory 709 and calling the data stored in the memory 709, thereby monitoring the head-mounted device as a whole. The processor 710 may include one or more processing units. Optionally, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 710. In the embodiment of the present invention, the processor 710 may detect a gesture of the user and determine a control command corresponding to the gesture.
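For illustration only, the mapping from a detected gesture to a control command may be sketched as follows; the gesture names and commands are assumptions and are not defined by this embodiment.

```kotlin
// Hypothetical sketch of gesture-to-command mapping performed by the processor.
enum class Gesture { TAP, DRAG, WRIST_ROTATE, SPREAD_FINGERS }
enum class ControlCommand { SELECT_FACE, MOVE_SUB_OBJECT, ROTATE_SUB_OBJECTS, SPLIT_OBJECT }

fun commandForGesture(gesture: Gesture): ControlCommand = when (gesture) {
    Gesture.TAP            -> ControlCommand.SELECT_FACE
    Gesture.DRAG           -> ControlCommand.MOVE_SUB_OBJECT
    Gesture.WRIST_ROTATE   -> ControlCommand.ROTATE_SUB_OBJECTS
    Gesture.SPREAD_FINGERS -> ControlCommand.SPLIT_OBJECT
}
```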
The head-mounted device 700 may also include a power supply 711 (such as a battery) for supplying power to the various components. Optionally, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the head-mounted device 700 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides a head-mounted device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program, when executed by the processor 710, implements each process of the operation control method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the operation control method embodiment, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a head-mounted device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (26)

1. An operation control method applied to a head-mounted device, the method comprising:
receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, performing a first control operation, the first control operation being associated with the first input;
wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different objects, and N is a positive integer;
the receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen includes:
receiving a first input of a user to a third face of a first virtual sub-object of a virtual object displayed on a virtual screen;
the performing, in response to the first input, a first control operation includes:
in response to the first input, moving the first virtual sub-object from the first position of the virtual object to a first spatial position and performing the first control operation on the first object associated with the third face.
2. The method of claim 1, wherein the first side of the first virtual sub-object is oriented in the same direction as the virtual screen;
the performing, in response to the first input, a first control operation includes:
in response to the first input, rotating the P virtual sub-objects such that second faces of the P virtual sub-objects face the user;
wherein P is a positive integer and is less than or equal to N.
3. The method of claim 1, wherein after the moving the first virtual sub-object from the first position of the virtual screen to the first spatial position and performing the first control operation on the first object associated with the third face, the method further comprises:
receiving a second input of a user to a third face of a first virtual sub-object of the virtual object displayed on the virtual screen;
in response to the second input, moving the first virtual sub-object from the first spatial position to the first position, performing a second control operation on the first object associated with the third face.
4. The method of claim 1, wherein performing a first control operation in response to the first input comprises:
splitting the virtual object into the N virtual sub-objects.
5. The method of claim 4, wherein after splitting the virtual object into the N virtual sub-objects, further comprising:
receiving a third input of the user to Q virtual sub-objects in the N virtual sub-objects;
in response to the third input, updating the display positions of W virtual sub-objects in the N virtual sub-objects, and recombining the N virtual sub-objects with the updated display positions of the W virtual sub-objects into at least one second virtual sub-object;
wherein Q and W are positive integers, Q is less than or equal to N, and W is less than or equal to N.
6. The method of claim 1, wherein, in the case that the first virtual sub-object comprises S surfaces, said performing, in response to the first input, a first control operation comprises:
splitting the first virtual sub-object into S second virtual sub-objects, wherein each second virtual sub-object is used for indicating a first object associated with one surface of the first virtual sub-object, and S is a positive integer.
7. The method of claim 1, wherein the performing a first control operation comprises:
updating a display size of the first virtual sub-object.
8. The method of claim 1, wherein the first virtual sub-object is located at a second spatial location; the current mode is the first display mode;
the performing, in response to the first input, a first control operation includes:
updating the first virtual sub-object from the second space position to a third space position, and switching the first display mode to a second display mode.
9. The method of claim 1, wherein prior to receiving a first input by a user into a first virtual sub-object of a virtual object displayed on a virtual screen, further comprising:
receiving a fourth input from the user;
in response to the fourth input, displaying a first virtual sub-object of a virtual object on a virtual screen, the first virtual sub-object being created based on the fourth input.
10. The method of claim 1, wherein prior to the receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen, the method further comprises:
displaying M identifiers in a second display area of the virtual object, wherein each identifier indicates a different second object;
the M identifications comprise a first identification, the first identification indicates a target object, and M is a positive integer.
11. The method of claim 10, wherein after displaying the M identifiers, the receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen comprises:
and receiving a first sub-input of the first identifier and a second sub-input of the first virtual sub-object from a user, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first virtual sub-object.
12. The method of claim 10, wherein after displaying the M identifiers, the receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen comprises:
receiving a third sub-input of the first identifier by a user, wherein the third sub-input is used for controlling the first identifier to move to the first virtual sub-object along with the hand of the user.
13. The method of claim 1, wherein the head mounted device comprises a camera;
before the receiving of the first input of the user to the first virtual sub-object of the virtual object displayed on the virtual screen, the method further includes:
acquiring an image acquired by the camera;
and displaying a virtual object on a virtual screen under the condition that the target real object is included in the image.
14. The method of claim 1, wherein the target area of the virtual screen comprises a second identifier;
before the acquiring of an image captured by a camera, the method further comprises:
receiving a fifth input of the user to the second identification and the third spatial position;
and responding to the fifth input, and displaying a virtual object in an area corresponding to the third spatial position on the virtual screen, wherein the third spatial position is the position of the target real object.
15. The method of claim 1, wherein the N virtual sub-objects are separated by a separation identifier.
16. A head-mounted device, comprising:
the device comprises a first receiving module, a second receiving module and a display module, wherein the first receiving module is used for receiving first input of a user on a first virtual sub-object of a virtual object displayed on a virtual screen;
a first processing module for performing a first control operation in response to the first input, the first control operation being associated with the first input;
wherein the virtual object is a three-dimensional virtual object, the virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different first objects, and N is a positive integer;
the first receiving module is specifically used for receiving a first input of a user to a third surface of a first virtual sub-object of a virtual object displayed on a virtual screen;
the first processing module is specifically configured to, in response to the first input, move the first virtual sub-object from a first position of the virtual object to a first spatial position, and perform the first control operation on the first object associated with the third face.
17. The head-mounted device of claim 16, wherein the first face of the first virtual sub-object is oriented the same as the virtual screen;
the first processing module is specifically configured to, in response to the first input, rotate the P virtual sub-objects such that second faces of the P virtual sub-objects face the user; wherein P is a positive integer and is less than or equal to N.
18. The head-mounted device of claim 16, further comprising:
the second receiving module is used for receiving a second input of a user to a third surface of the first virtual sub-object of the virtual object displayed on the virtual screen;
a second processing module, configured to move the first virtual sub-object from the first spatial position to the first position in response to the second input, and perform a second control operation on the first object associated with the third facet.
19. The head-mounted device of claim 16, wherein the first processing module is specifically configured to split the virtual object into the N virtual sub-objects.
20. The head-mounted apparatus of claim 19, further comprising:
a third receiving module, configured to receive a third input of the user to Q virtual sub-objects in the N virtual sub-objects;
a third processing module, configured to update display positions of W virtual sub-objects in the N virtual sub-objects in response to the third input, and recombine the N virtual sub-objects with the updated display positions of the W virtual sub-objects into a second virtual object;
wherein Q and W are positive integers, Q is less than or equal to N, and W is less than or equal to N.
21. The head-mounted device according to claim 16, wherein the first processing module is specifically configured to, in a case where the first virtual sub-object includes S faces, split the first virtual sub-object into S second virtual sub-objects, each of the second virtual sub-objects being used to indicate a first object associated with one face of the first virtual sub-object, S being a positive integer.
22. The head-mounted device of claim 16, wherein the first processing module is configured to update a display size of the first virtual sub-object.
23. The head-mounted device of claim 16, wherein the first virtual sub-object is located at a second spatial location; the current mode is the first display mode;
the first processing module is specifically configured to update the first virtual sub-object from the second spatial position to a third spatial position, and switch the first display mode to a second display mode.
24. The head-mounted device of claim 16, further comprising:
the fourth receiving module is used for receiving a fourth input of the user;
a fourth processing module, configured to display a first virtual sub-object of a virtual object on a virtual screen in response to the fourth input, the first virtual sub-object being created based on the fourth input.
25. A head-mounted device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the operation control method according to any one of claims 1 to 15.
26. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the operation control method according to any one of claims 1 to 15.
CN202010031306.9A 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium Active CN111352505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031306.9A CN111352505B (en) 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium


Publications (2)

Publication Number Publication Date
CN111352505A CN111352505A (en) 2020-06-30
CN111352505B true CN111352505B (en) 2023-02-21

Family

ID=71192243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031306.9A Active CN111352505B (en) 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium

Country Status (1)

Country Link
CN (1) CN111352505B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674430A (en) * 2021-08-24 2021-11-19 上海电气集团股份有限公司 Virtual model positioning and registering method and device, augmented reality equipment and storage medium
CN114879885B (en) * 2022-04-18 2024-03-22 上海星阑信息科技有限公司 Virtual object grouping control method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915232A (en) * 2011-08-01 2013-02-06 华为技术有限公司 3D (three-dimensional) controls interaction method and communication terminal
CN103309562A (en) * 2013-06-28 2013-09-18 北京小米科技有限责任公司 Desktop display method, desktop display device and mobile terminal
CN103816659A (en) * 2012-11-19 2014-05-28 维基帕德公司 Virtual multiple sided virtual rotatable use interface icon queue
CN103914238A (en) * 2012-12-30 2014-07-09 网易(杭州)网络有限公司 Method and device for achieving integration of controls in interface
CN106200898A (en) * 2016-06-24 2016-12-07 张睿卿 Virtual reality software platform system
CN108415641A (en) * 2018-03-05 2018-08-17 维沃移动通信有限公司 A kind of processing method and mobile terminal of icon
CN108604121A (en) * 2016-05-10 2018-09-28 谷歌有限责任公司 Both hands object manipulation in virtual reality
CN109427096A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 A kind of automatic guide method and system based on augmented reality

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387937A (en) * 2007-09-14 2009-03-18 英业达股份有限公司 Three-dimensional dynamic diagram display interface and display method thereof
WO2010108499A2 (en) * 2009-03-22 2010-09-30 Algreatly Cherif Atia 3d navigation method and system
KR101728804B1 (en) * 2010-10-12 2017-04-20 엘지전자 주식회사 Mobile terminal
CN103019595B (en) * 2012-12-05 2016-03-16 北京百度网讯科技有限公司 Terminal device and method for switching theme thereof
JP6209906B2 (en) * 2013-09-05 2017-10-11 セイコーエプソン株式会社 Head-mounted display device, method for controlling head-mounted display device, and image display system
CN105302407A (en) * 2014-06-23 2016-02-03 中兴通讯股份有限公司 Application icon display method and apparatus
CN104699375A (en) * 2015-04-05 2015-06-10 赵彬 Three-dimensional storing and opening method for software icons
CN106802754B (en) * 2017-01-12 2020-09-04 珠海市横琴新区龙族科技有限公司 Electronic equipment icon display method and device
CN108319408A (en) * 2018-02-08 2018-07-24 上海爱优威软件开发有限公司 Stereogram target operating method and system
CN108665553B (en) * 2018-04-28 2023-03-17 腾讯科技(深圳)有限公司 Method and equipment for realizing virtual scene conversion
CN110471588B (en) * 2019-07-19 2021-08-27 维沃移动通信有限公司 Application icon sorting method and device and mobile terminal


Also Published As

Publication number Publication date
CN111352505A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
US20210405761A1 (en) Augmented reality experiences with object manipulation
CN108499105B (en) Method, device and storage medium for adjusting visual angle in virtual environment
KR20230025914A (en) Augmented reality experiences using audio and text captions
CN111258420B (en) Information interaction method, head-mounted device and medium
US9651782B2 (en) Wearable tracking device
US10983663B2 (en) Displaying applications
KR20220032059A (en) Touch free interface for augmented reality systems
US11714540B2 (en) Remote touch detection enabled by peripheral device
US11954268B2 (en) Augmented reality eyewear 3D painting
WO2021136266A1 (en) Virtual image synchronization method and wearable device
US11360550B2 (en) IMU for touch detection
CN111352505B (en) Operation control method, head-mounted device, and medium
CN112817453A (en) Virtual reality equipment and sight following method of object in virtual reality scene
KR20230113374A (en) head-related transfer function
CN111240483B (en) Operation control method, head-mounted device, and medium
CN110717993B (en) Interaction method, system and medium of split type AR glasses system
CN110531905B (en) Icon control method and terminal
CN111338521A (en) Icon display control method and electronic equipment
CN111258482A (en) Information sharing method, head-mounted device, and medium
CN114115544B (en) Man-machine interaction method, three-dimensional display device and storage medium
CN111246014B (en) Communication method, head-mounted device, and medium
CN111782053B (en) Model editing method, device, equipment and storage medium
CN111143799A (en) Unlocking method and electronic equipment
CN111104656A (en) Unlocking method and electronic equipment
CN111208903B (en) Information transmission method, wearable device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant