CN111240483B - Operation control method, head-mounted device, and medium

Operation control method, head-mounted device, and medium

Info

Publication number
CN111240483B
CN111240483B (application CN202010031307.3A)
Authority
CN
China
Prior art keywords: virtual, input, sub, virtual sub, user
Prior art date
Legal status
Active
Application number
CN202010031307.3A
Other languages
Chinese (zh)
Other versions
CN111240483A (en
Inventor
陈喆
刘琨
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010031307.3A priority Critical patent/CN111240483B/en
Publication of CN111240483A publication Critical patent/CN111240483A/en
Application granted granted Critical
Publication of CN111240483B publication Critical patent/CN111240483B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The embodiment of the invention discloses an operation control method, a head-mounted device and a medium, relates to the technical field of communication, and solves the problem in the prior art that the control operations on an object are complicated and inconvenient. The method comprises the following steps: receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; wherein the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.

Description

Operation control method, head-mounted device, and medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an operation control method, a head-mounted device and a medium.
Background
With the continuous development of internet technology and electronic devices, the kinds and number of programs keep increasing. When a user uses an electronic device and needs to control objects such as texts, pictures and application programs, the user has to switch back and forth between different pages or different programs of the electronic device to find the objects and perform a large number of finger operations, such as clicking and swiping, on the screen, so the process is tedious and the operation is inconvenient.
Disclosure of Invention
The embodiment of the invention provides an operation control method, which can solve the problems in the prior art that the control operation on an object is complicated and the operation is inconvenient.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an operation control method, including:
receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, performing a first control operation, the first control operation being associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a head-mounted device, including:
a first receiving module, configured to receive a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
a first processing module for performing a first control operation in response to the first input, the first control operation being associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different target objects, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a head-mounted device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the operation control method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the operation control method as in the first aspect.
In an embodiment of the present invention, a head-mounted device receives a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen, and in response to the first input performs a first control operation associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer. That is, by receiving the first input of the user, the first control operation associated with the first input can be executed, so that switching back and forth between different pages of the electronic device to find objects can be avoided, and the operation is simple and convenient.
Drawings
FIG. 1 is a flow chart of an operation control method provided by an embodiment of the present invention;
fig. 2 is a schematic diagram of a virtual object in the operation control method provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of merging a first virtual sub-object and a second virtual sub-object in the operation control method according to the embodiment of the present invention;
fig. 4 is a schematic diagram of splitting a first virtual sub-object in the operation control method according to the embodiment of the present invention;
fig. 5 is a schematic diagram of moving a first virtual sub-object according to an operation control method provided in an embodiment of the present invention;
fig. 6 is a schematic diagram of creating a first virtual sub-object in the operation control method according to the embodiment of the present invention;
fig. 7 is a schematic diagram of scaling a first virtual sub-object in the operation control method according to the embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a first virtual sub-object being rotated or flipped by the operation control method according to the embodiment of the present invention;
fig. 9 is a schematic diagram of moving a first identifier in the operation control method according to an embodiment of the present invention;
fig. 10 is a schematic diagram of setting a display position of a virtual object in the operation control method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
fig. 12 is a hardware schematic diagram of a head-mounted device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, instance or illustration. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as being preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the invention provides an operation control method, which comprises the steps of receiving first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
Virtual Reality (VR) technology is a computer simulation system technology that creates and experiences a Virtual world. It utilizes a computer to create a simulated environment into which a user is immersed using a systematic simulation of interactive three-dimensional dynamic views and physical behaviors with multi-source information fusion.
Augmented Reality (AR) technology is a technology that integrates real world information and virtual world information, and virtual information content is superimposed in the real world through various sensing devices, so that real world content and virtual information content can be simultaneously embodied in the same picture and space, and natural interaction between a user and a virtual environment is realized.
AR glasses move the imaging system away from the lens by means of optical imaging elements such as optical waveguides, so that the imaging system does not block the external line of sight. The optical waveguide is a high-transmittance medium, similar to an optical fiber, that guides light waves to propagate inside it; the light output by the imaging system and the light reflected from the real scene are combined and transmitted to the human eye. Hand image information acquired by a camera is processed and analyzed with a computer vision algorithm, so that hand tracking and recognition can be realized.
Mixed Reality (MR) technology combines virtual information with a view of the real world, or adds a virtual representation of a real-world object to a virtual environment.
Head-mounted devices in embodiments of the invention may include, but are not limited to, VR glasses, AR glasses, MR glasses, or VR helmets, AR helmets, MR helmets, and the like.
According to the related art, various head-mounted devices may sense a direction of acceleration, angular acceleration, or inclination, and display a screen corresponding to the sensed information. The head mounted device may change and display the screen based on the user's movement.
It should be noted that, in the embodiment of the present invention, the first head-mounted device and the second head-mounted device may be the same head-mounted device (for example, both are AR glasses), or may be different head-mounted devices (for example, the first head-mounted device is AR glasses, and the second head-mounted device is a mobile phone), which is not limited in this embodiment of the present invention.
The virtual screen in the embodiment of the invention is a virtual reality screen, an augmented reality screen or a mixed reality screen of the head-mounted equipment.
The virtual screen in the embodiment of the present invention may be any carrier that can be used to display content projected by a projection device when content is displayed by using AR technology. The projection device may be a projection device using AR technology, such as a head-mounted device or an AR device in the embodiment of the present invention.
When displaying content on the virtual screen by using the AR technology, the projection device may project a virtual scene acquired by (or internally integrated with) the projection device, or a virtual scene and a real scene onto the virtual screen, so that the virtual screen may display the content, thereby showing an effect of superimposing the real scene and the virtual scene to a user.
In connection with different scenarios of AR technology applications, the virtual screen may generally be a display screen of an electronic device (e.g. a mobile phone), a lens of AR glasses, a windshield of a car, a wall of a room, etc. any possible carrier.
The following describes an exemplary process of displaying content on a virtual screen by using AR technology, by taking the virtual screen as a display screen of an electronic device, a lens of AR glasses, and a windshield of an automobile as examples.
In one example, when the virtual screen is a display screen of an electronic device, the projection device may be the electronic device. The electronic device can acquire the real scene of the area where it is located through its camera and display the real scene on its display screen; the electronic device can then project the virtual scene it has acquired (or internally integrated) onto its display screen, so that the virtual scene is displayed superimposed on the real scene, and the user can see the superimposed effect of the real scene and the virtual scene through the display screen of the electronic device.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the acquired (or internally integrated) virtual scene onto the lenses of the AR glasses, so that the user can see the display effect of the real scene and the virtual scene after superposition through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When the user is located in the automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the acquired (or internally integrated) virtual scene onto the windshield of the automobile, so that the user can see the display effect of the real scene and the virtual scene after superposition through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, for example, it may be a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the acquired (or internally integrated) virtual scene into the real space, so that the user can see the display effect of the real scene and the virtual scene after superposition in the real space.
The virtual object in the embodiment of the present invention is an object in virtual information, and optionally, the virtual object is content displayed on a screen or a lens of the head-mounted device, which corresponds to the surrounding environment the user is viewing, but is not present as a physical embodiment outside the display.
The virtual object may be an AR object. It should be noted that the AR object may be understood as: the AR device analyzes the real object to obtain feature information of the real object (e.g., type information of the real object, appearance information of the real object (e.g., structure, color, shape, etc.), position information of the real object in space, etc.), and constructs an AR model in the AR device according to the feature information.
Optionally, in this embodiment of the present invention, the target virtual object may specifically be a virtual image, a virtual pattern, a virtual character, and the like.
The head-mounted device in the embodiment of the invention can be a head-mounted device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiment of the present invention is not specifically limited thereto.
The execution body of the operation control method provided in the embodiment of the present invention may be the head-mounted device, or a functional module and/or a functional entity in the head-mounted device capable of implementing the method; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto. The following takes a head-mounted device as an example to exemplarily explain the operation control method provided by an embodiment of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an operation control method applied to a head-mounted device, which may include steps 101 to 102 described below.
Step 101, receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen.
Optionally, the first input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be set specifically according to an actual need, and the embodiment of the present invention is not limited. When the first input is executed, the first input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The first input may also be a first operation.
Alternatively, the first input may be an input to an arbitrary surface of the first virtual sub-object, or may be an input to an arbitrary position of the first virtual sub-object.
Illustratively, the first input may be a voice input, and the user may say "open the contents of the first virtual sub-object"; the first input may be a click input and the user may click on the first virtual sub-object.
Step 102, in response to the first input, performing a first control operation, wherein the first control operation is associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
Optionally, different objects are associated with different virtual sub-objects, where an object may include any information of any program, and may also include any type of target information.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
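To make the flow of steps 101 and 102 concrete, the following minimal Python sketch (added for illustration only and not part of the original disclosure; all class and function names are hypothetical) models a virtual object whose first display area holds N virtual sub-objects, each associated with a different object, and dispatches the control operation associated with a received input:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class VirtualSubObject:
    name: str
    associated_object: object      # e.g. a text, an image, or an application


@dataclass
class VirtualObject:
    # First display area: N virtual sub-objects, each associated with a different object.
    sub_objects: List[VirtualSubObject] = field(default_factory=list)


class OperationController:
    """Hypothetical controller mirroring steps 101 and 102."""

    def __init__(self) -> None:
        # Maps an input type (click, drag, voice, ...) to the control operation
        # associated with that input.
        self._operations: Dict[str, Callable[[VirtualSubObject], None]] = {}

    def register(self, input_type: str, operation: Callable[[VirtualSubObject], None]) -> None:
        self._operations[input_type] = operation

    def on_first_input(self, input_type: str, target: VirtualSubObject) -> None:
        # Step 101: a first input on a first virtual sub-object is received.
        # Step 102: the first control operation associated with that input is performed.
        operation = self._operations.get(input_type)
        if operation is not None:
            operation(target)


if __name__ == "__main__":
    virtual_object = VirtualObject([VirtualSubObject("1.txt", "a123")])
    controller = OperationController()
    controller.register("click", lambda sub: print(f"opening {sub.name}: {sub.associated_object}"))
    controller.on_first_input("click", virtual_object.sub_objects[0])  # opening 1.txt: a123
```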
Optionally, the method further comprises: the N virtual sub-objects are separated by a separation identifier.
Optionally, the separation identifier is a non-transparent line, where non-transparent means that the transparency of the line is less than 100%, such as the separation identifier 204 in fig. 2 (a); or the separation identifiers are gaps of a certain width, and the like.
Optionally, the first input comprises a first sub-input to the first virtual sub-object and a second sub-input to a second virtual sub-object; the first virtual sub-object is associated with a first object and the second virtual sub-object is associated with a second object;
the performing, in response to the first input, a first control operation includes:
merging the first virtual sub-object and the second virtual sub-object into a third virtual sub-object, and executing the first control operation on the first object and the second object;
wherein the first sub-input is used to select the first virtual sub-object and the second sub-input is used to select the second virtual sub-object. Therefore, the first virtual sub-object and the second virtual sub-object can be quickly combined through input of the first virtual sub-object and the second virtual sub-object, the first control operation is executed on the first object associated with the first virtual sub-object and the second object associated with the second virtual sub-object, and the operation is simple and convenient.
Optionally, the first sub-input or the second sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the first sub-input or the second sub-input is executed, the first sub-input or the second sub-input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the first virtual sub-object is associated with a first object, and the second virtual sub-object is associated with a second object, where the association relationship between the virtual sub-object and the object may be a default or may be set by a user.
Exemplarily, taking the first object and the second object as texts, as shown in (a) of fig. 2, the first virtual sub-object 2021 is associated with the first text "1.txt", whose content is "a123", and the second virtual sub-object 2022 is associated with the second text "2.txt", whose content is "b456". The user may first click the first virtual sub-object 2021 and then click the second virtual sub-object 2022, or drag the first virtual sub-object 2021 onto the second virtual sub-object 2022; the first virtual sub-object 2021 and the second virtual sub-object 2022 are merged into the third virtual sub-object 2024 shown in (b) of fig. 2, and the first control operation is performed on the first object and the second object. Here the first control operation is to merge the first text content and the second text content, shown as "a123b456".
For example, if the first object and the second object are images and the first virtual sub-object and the second virtual sub-object are merged into the third virtual sub-object, the first control operation may be to combine the first object and the second object into one image. The image can be conveniently and rapidly synthesized.
For example, if the first object is an image and the second object is a text, and the first virtual sub-object and the second virtual sub-object are combined into a third virtual sub-object, the first control operation may be to display the text on the image and then combine the text into one image.
For example, if the first object and the second object are application icons, and the first virtual sub-object and the second virtual sub-object are merged into a third virtual sub-object, the first control operation may be to store the first object and the second object in the third virtual sub-object. Grouping management of the application icons can be achieved.
For example, as shown in fig. 3 (a), the user may select the first virtual sub-object with one hand and the second virtual sub-object with the other hand, and then draw the two hands together as shown in fig. 3 (b) until parts of the two virtual sub-objects overlap; when the user releases both hands at this point, the two virtual sub-objects are merged together as shown in fig. 3 (c). Of course, the operation may also be a one-handed operation: the user may click the first virtual sub-object first and then click the second virtual sub-object, so that the two virtual sub-objects are merged. The first control operation may also be performed on the first object and the second object.
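The merging behaviour described above depends on the types of the associated objects (text with text is concatenated, application icons are grouped, and so on). The sketch below, under the same caveat that it is illustrative only and uses hypothetical names, shows one way such a type-dependent merge of two virtual sub-objects into a third could be expressed:

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class TextObject:
    name: str
    content: str


@dataclass
class AppIcon:
    name: str


@dataclass
class VirtualSubObject:
    associated: Union[TextObject, AppIcon, List[AppIcon]]


def merge(first: VirtualSubObject, second: VirtualSubObject) -> VirtualSubObject:
    """Merge two virtual sub-objects into a third one and apply the
    first control operation to their associated objects."""
    a, b = first.associated, second.associated
    if isinstance(a, TextObject) and isinstance(b, TextObject):
        # Text + text: concatenate the contents, e.g. "a123" + "b456" -> "a123b456".
        merged = TextObject(name="merged.txt", content=a.content + b.content)
    elif isinstance(a, AppIcon) and isinstance(b, AppIcon):
        # Icon + icon: store both icons in the third sub-object (grouping).
        merged = [a, b]
    else:
        raise NotImplementedError("other combinations (image + image, image + text) omitted")
    return VirtualSubObject(associated=merged)


if __name__ == "__main__":
    third = merge(VirtualSubObject(TextObject("1.txt", "a123")),
                  VirtualSubObject(TextObject("2.txt", "b456")))
    print(third.associated)   # TextObject(name='merged.txt', content='a123b456')
```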
Optionally, the first virtual sub-object is associated with a third object;
the performing, in response to the first input, a first control operation includes:
splitting the first virtual sub-object into at least two fourth virtual sub-objects, and executing the first control operation on the third object;
wherein different ones of the fourth virtual sub-objects are associated with different objects. Therefore, a user can quickly and conveniently split the first virtual sub-object into at least two fourth virtual sub-objects through first input of the first virtual sub-object, and execute first control operation on the third object associated with the first virtual sub-object, and the operation is simple and convenient.
Optionally, different objects are associated with different fourth virtual sub-objects, and an object associated with each virtual sub-object may be a default, may also be set by a user, and may be determined specifically according to an actual situation.
Exemplarily, taking the third object as a text, as shown in (b) of fig. 2, the first virtual sub-object 2024 is associated with the third object "4.txt", whose text content is "a123b456". The user can split the first virtual sub-object 2024 into at least two fourth virtual sub-objects by a two-finger pinch operation, obtaining the result shown in (a) of fig. 2: the first virtual sub-object is split into two fourth virtual sub-objects 2021 and 2022, and a first control operation is performed on the third object. The first control operation may be to split the text content of the third object into two parts, with the fourth virtual sub-object 2021 associated with the object "1.txt" whose content is "a123", and the fourth virtual sub-object 2022 associated with the object "2.txt" whose content is "b456".
Optionally, as shown in fig. 4 (a), when the user splits the first virtual sub-object, the user may place the thumb and the index finger on two opposite sides of the first virtual sub-object and draw the two fingers together until they meet. The device recognizes this gesture and draws the two points on the two sides pointed at by the user's fingers together along the direction in which the fingers close; during the operation the two sides may show a deformation animation effect, so that the user can visually follow the process. As shown in fig. 4 (b), the two points finally meet, so that the first virtual sub-object is split into two independent virtual sub-objects, and two fourth virtual sub-objects are obtained as shown in fig. 4 (c).
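A corresponding illustrative sketch of the splitting operation, again with hypothetical names and not part of the original disclosure, splits the text associated with a virtual sub-object into two parts and attaches each part to its own fourth virtual sub-object:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VirtualSubObject:
    name: str
    content: str


def split(sub_object: VirtualSubObject, position: int) -> List[VirtualSubObject]:
    """Split a virtual sub-object into two fourth virtual sub-objects and split the
    text content of the associated third object at the given position."""
    first_part, second_part = sub_object.content[:position], sub_object.content[position:]
    return [VirtualSubObject("1.txt", first_part), VirtualSubObject("2.txt", second_part)]


if __name__ == "__main__":
    merged = VirtualSubObject("4.txt", "a123b456")
    for part in split(merged, position=4):
        # -> VirtualSubObject(name='1.txt', content='a123')
        # -> VirtualSubObject(name='2.txt', content='b456')
        print(part)
```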
Optionally, the receiving a first input of the user to the first virtual sub-object of the virtual screen includes:
receiving a third sub-input of the first virtual sub-object and a fourth sub-input of a fifth virtual sub-object from a user, wherein the third sub-input is used for selecting the first virtual sub-object, and the fourth sub-input is used for selecting the fifth virtual sub-object;
wherein the first virtual sub-object is associated with a fourth object, and the fifth virtual sub-object is associated with a fifth object;
the performing, in response to the first input, a first control operation comprising at least one of:
updating a fifth object associated with the fifth virtual sub-object to a fourth object associated with the first virtual sub-object;
updating the fourth object associated with the first virtual sub-object to a fifth object associated with the fifth virtual sub-object. Therefore, the user can conveniently and quickly update the first virtual sub-object or the object associated with the fifth virtual sub-object through the third sub-input of the first virtual sub-object and the fourth sub-input of the fifth virtual sub-object, and can exchange the objects associated with the first virtual sub-object and the fifth virtual sub-object, so that the operation is simple and convenient.
Optionally, the third sub-input or the fourth sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the third sub-input or the fourth sub-input is executed, the single-point input may be performed, for example, a single finger is used for sliding input, clicking input, and the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The third sub-input may also be a third sub-operation and the fourth sub-input may also be a fourth sub-operation.
Exemplarily, taking the fourth object and the fifth object as texts, as shown in (a) of fig. 2, the first virtual sub-object 2021 is associated with the fourth object "1.txt", whose text content is "a123", and the fifth virtual sub-object 2022 is associated with the fifth object "2.txt", whose text content is "b456". The user may first click the first virtual sub-object 2021 and then click the fifth virtual sub-object 2022, or drag the first virtual sub-object onto the fifth virtual sub-object, and in response to the user input a first control operation is performed. The first control operation may be to update the fifth object associated with the fifth virtual sub-object 2022 to the fourth object associated with the first virtual sub-object, that is, the object associated with the fifth virtual sub-object 2022 becomes "1.txt" with the content "a123"; the first control operation may instead be to update the fourth object associated with the first virtual sub-object to the fifth object associated with the fifth virtual sub-object, that is, the object associated with the first virtual sub-object becomes "2.txt" with the content "b456"; the fifth object associated with the fifth virtual sub-object may also be updated to the fourth object associated with the first virtual sub-object while the fourth object associated with the first virtual sub-object is updated to the fifth object associated with the fifth virtual sub-object, i.e. the two associations are exchanged.
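The update and exchange of associations described in this example can be pictured as simple assignments on the sub-objects' association fields; the following sketch is one possible illustration (hypothetical names, not part of the original disclosure):

```python
from dataclasses import dataclass


@dataclass
class VirtualSubObject:
    name: str
    associated_object: str


def copy_association(source: VirtualSubObject, target: VirtualSubObject) -> None:
    # Update the object associated with the target to the object associated with the source.
    target.associated_object = source.associated_object


def exchange_associations(first: VirtualSubObject, second: VirtualSubObject) -> None:
    # Swap the objects associated with the two virtual sub-objects.
    first.associated_object, second.associated_object = (
        second.associated_object,
        first.associated_object,
    )


if __name__ == "__main__":
    first = VirtualSubObject("first", "1.txt (a123)")
    fifth = VirtualSubObject("fifth", "2.txt (b456)")
    exchange_associations(first, fifth)
    print(first.associated_object, "|", fifth.associated_object)  # 2.txt (b456) | 1.txt (a123)
```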
Optionally, the receiving a first input of a first virtual sub-object of a virtual object displayed on a virtual screen by a user includes:
receiving a fifth sub-input of a user to the first virtual sub-object and a sixth sub-input of a first real object, wherein the fifth sub-input is used for selecting the first virtual sub-object, and the sixth sub-input is used for selecting the first real object;
the performing, in response to the first input, a first control operation includes:
displaying the first virtual sub-object in a first real object area, and establishing an association relation between the first virtual sub-object and the first real object;
displaying the first virtual sub-object under the condition that the first real object is acquired by a camera of the device;
wherein the first real object area is the area where the first real object is located. In this way, through the fifth sub-input to the first virtual sub-object and the sixth sub-input to the first real object, the user displays the first virtual sub-object in the first real object area and establishes the association relationship between the first virtual sub-object and the first real object, so that the first virtual sub-object can be displayed when the first real object is detected within the visual range of the electronic device. Alternatively, the first real object may be any object in the real world, such as a wall, a wardrobe, a door, a window, a curtain, and the like, which is not limited in this embodiment of the present invention.
Optionally, the fifth sub-input or the sixth sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be specifically set according to an actual need, and the embodiment of the present invention is not limited. When the fifth sub-input or the sixth sub-input is executed, the input may be a single-point input, such as a sliding input, a click input, and the like performed by using a single finger; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The fifth sub-input may be a fifth sub-operation and the sixth sub-input may be a sixth sub-operation. Illustratively, the user may click the first virtual sub-object first and then click the first real object, or drag the first virtual sub-object to the first real object.
Optionally, as shown in fig. 5 (a), the user drags the first virtual sub-object to the first real object. The user may place the palm in the same viewing-angle range as the first virtual sub-object and make a fist to grasp the first virtual sub-object, as shown in fig. 5 (b); at this time the first virtual sub-object moves along with the user's hand, may shrink or enlarge to the size of the user's fist, and its frame may remain highlighted, prompting the user that the first virtual sub-object is now under the user's control. The user may then move the fist to a target location, such as the first real object area, and release the fist, whereupon the first virtual sub-object just controlled is displayed in the first real object area.
For example, the user may drag the first virtual sub-object to a door serving as the first real object, display the first virtual sub-object in the first real object area on the door, and establish an association relationship between the first virtual sub-object and the door; the head-mounted device stores information of the first real object and the first virtual sub-object, for example the spatial coordinates of the first virtual sub-object, surrounding environment information, and the like. Optionally, when the electronic device does not detect the first real object, the display of the first virtual sub-object is cancelled.
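As an illustration of this anchoring behaviour (hypothetical names, not part of the original disclosure), a small registry can keep the association between virtual sub-objects and real objects and report which sub-objects should be displayed given the real objects currently detected by the camera:

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple


@dataclass
class Anchor:
    real_object_id: str                       # e.g. "door"
    spatial_coordinates: Tuple[float, float, float]   # stored position of the sub-object


class AnchorRegistry:
    """Keeps the association between virtual sub-objects and real objects."""

    def __init__(self) -> None:
        self._anchors: Dict[str, Anchor] = {}   # virtual sub-object id -> anchor

    def attach(self, sub_object_id: str, anchor: Anchor) -> None:
        self._anchors[sub_object_id] = anchor

    def visible_sub_objects(self, detected_real_objects: Set[str]) -> List[str]:
        # A virtual sub-object is displayed only while the camera currently detects
        # the real object it was anchored to; otherwise its display is cancelled.
        return [sid for sid, a in self._anchors.items()
                if a.real_object_id in detected_real_objects]


if __name__ == "__main__":
    registry = AnchorRegistry()
    registry.attach("note", Anchor("door", (1.2, 0.8, 2.0)))
    print(registry.visible_sub_objects({"door"}))    # ['note']
    print(registry.visible_sub_objects({"window"}))  # []
```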
Optionally, before the receiving of the first input of the user to the first sub-object of the virtual object displayed on the virtual screen, the method further includes:
receiving a second input of the user;
in response to the second input, displaying a first virtual sub-object of a virtual object on a virtual screen, the first virtual sub-object being created based on the second input.
Optionally, the second input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which may be set specifically according to an actual need, and the embodiment of the present invention is not limited. When the second input is executed, the second input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. The second input may also be a second operation.
For example, as shown in fig. 6 (a), the user may put four fingers, such as the two thumbs and the two index fingers, close together and then spread them out in four directions. The device recognizes the fingertip positions of the four fingers in real time, and when the fingers stay at the target positions for longer than a preset threshold, a first virtual sub-object of a virtual object is created at the positions of the four points, as shown in fig. 6 (b). The first virtual sub-object may be a two-dimensional object whose length is the distance between the corresponding fingers of the left and right hands and whose width is the distance between the two fingertips of each hand. Optionally, when the first virtual sub-object created according to the user's fingers has an irregular shape, the device may further optimize the first virtual sub-object, for example adjust it to a regular shape.
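One plausible way to turn the four recognized fingertip positions into a regular, two-dimensional first virtual sub-object, subject to the dwell-time threshold mentioned above, is sketched below (illustrative only; the names and the one-second threshold are assumptions):

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]


def bounding_rectangle(fingertips: List[Point]) -> Tuple[Point, Point]:
    """Fit a regular (axis-aligned) rectangle to the four fingertip positions,
    returning its top-left and bottom-right corners; an irregular shape traced by
    the fingers is thereby adjusted to a regular shape."""
    xs = [p[0] for p in fingertips]
    ys = [p[1] for p in fingertips]
    return (min(xs), min(ys)), (max(xs), max(ys))


def create_sub_object_if_held(fingertips: List[Point], hold_time_s: float,
                              threshold_s: float = 1.0) -> Optional[Tuple[Point, Point]]:
    # The sub-object is only created when the fingertips stay at the target
    # positions for longer than the preset threshold.
    if hold_time_s < threshold_s:
        return None
    return bounding_rectangle(fingertips)


if __name__ == "__main__":
    tips = [(0.1, 0.2), (0.9, 0.18), (0.12, 0.7), (0.88, 0.72)]
    print(create_sub_object_if_held(tips, hold_time_s=1.5))
    # ((0.1, 0.18), (0.9, 0.72)) -> bounds of the new first virtual sub-object
```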
Optionally, said performing a first control operation in response to said first input comprises: updating the display size of the first virtual child object. In this way, the size of the first virtual sub-object can be adjusted according to the user input.
Illustratively, as shown in fig. 7 (a), the user may pinch two opposite corners of the first virtual sub-object with the fingers of both hands, or pinch one corner with the fingers of one hand, and stretch outwards to scale it, as shown in fig. 7 (b). The user can also adjust the size of the first virtual sub-object in a single direction, for example stretch its length or width in the transverse or longitudinal direction, by pinching a certain edge of the first virtual sub-object in the same manner and then stretching the edge outwards.
Optionally, updating the display size of the first virtual sub-object may trigger execution of a control operation on the object associated with the first virtual sub-object. In this way, by updating the display state of the first virtual sub-object, the object associated with the first virtual sub-object can be controlled conveniently and quickly.
Optionally, said performing a first control operation in response to said first input comprises: rotating the first virtual sub-object about a target axis. In this way, the first virtual sub-object can be conveniently and quickly rotated according to the input of the user.
Alternatively, the target axis may be in any direction, e.g., the target axis may be parallel to the screen, may be at an angle to the screen, e.g., perpendicular to the screen, at an angle of 60 degrees to the screen, etc.
Optionally, when the first virtual sub-object is rotated around the target axis, a control operation may be performed on the object associated with the first virtual sub-object, so that by updating the display state of the first virtual sub-object, the control operation on the associated object can be performed conveniently and quickly.
Optionally, rotating the first virtual sub-object about the target axis may trigger execution of a control operation on the object associated with the first virtual sub-object. For example, as shown in fig. 8 (a), the user may place the palm on the area where the first virtual sub-object is located and rotate the palm within the plane of the screen; if the palm rotates clockwise within the screen by a certain angle, the first virtual sub-object correspondingly rotates clockwise within the screen by a certain angle.
For example, as shown in fig. 8 (b), the user needs to place the palm of the hand at a position corresponding to the first virtual sub-object, and the user turns the palm of the hand, that is, the first virtual sub-object can be turned to the reverse side.
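For illustration, the size update and the rotation about a target axis can be expressed as ordinary geometric transforms; the sketch below is not part of the original disclosure (hypothetical names, and Rodrigues' rotation formula is used as one possible way to rotate about an arbitrary axis):

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def scale(size: Tuple[float, float], sx: float = 1.0, sy: float = 1.0) -> Tuple[float, float]:
    """Update the display size of a virtual sub-object: pinching a corner scales
    both dimensions, pinching a single edge scales one dimension."""
    w, h = size
    return w * sx, h * sy


def rotate_about_axis(point: Vec3, axis: Vec3, angle_rad: float) -> Vec3:
    """Rotate a point of the virtual sub-object about a target axis through the
    origin; the axis may be parallel to the screen, perpendicular to it, or at
    any other angle."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    kx, ky, kz = ax / norm, ay / norm, az / norm
    px, py, pz = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    dot = kx * px + ky * py + kz * pz
    cross_x = ky * pz - kz * py
    cross_y = kz * px - kx * pz
    cross_z = kx * py - ky * px
    return (px * c + cross_x * s + kx * dot * (1 - c),
            py * c + cross_y * s + ky * dot * (1 - c),
            pz * c + cross_z * s + kz * dot * (1 - c))


if __name__ == "__main__":
    print(scale((4.0, 3.0), sx=1.5))  # stretch one edge: (6.0, 3.0)
    # Rotate 90 degrees about the axis perpendicular to the screen (z axis):
    print(rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```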
Optionally, before the receiving of the first input of the user to the first virtual sub-object of the virtual object displayed on the virtual screen, the method further includes:
displaying M identifiers in a second display area of the virtual object, wherein each identifier indicates a different object;
the M identifiers comprise a first identifier, the first identifier indicates a target object, and M is a positive integer. Therefore, by displaying the identifiers indicating objects in the second display area, the user can conveniently operate on the identifiers subsequently.
Alternatively, the positional relationship between the second display region and the first display region is not particularly limited.
Optionally, different objects are included in the second display area, the objects may include, but are not limited to, text, audio-video files, application programs, images, and the like, and the objects may be any information of any program. These objects are displayed in the second display area by corresponding indicia, which may be icons, symbols, etc., each indicating a different object.
Illustratively, as shown in fig. 9, the first display area 202 of the virtual object 201 includes N virtual sub-objects, where a first virtual sub-object 2021 is associated with an object A, a second virtual sub-object 2022 is associated with an object B, and a third virtual sub-object 2023 is associated with an object C; it is understood that the other sub-objects are associated with different objects, and this example is not limiting. The second display area 203 of the virtual object 201 displays M identifiers, each identifier indicating a different object, wherein the identifier 2031 is the first identifier, which indicates the target object.
Alternatively, the user may slide the logo left or right to view logos that are not displayed due to the spatial limitations of the second display area. Alternatively, the user may change the number of identifiers displayed in the second display area by adjusting the size of the second display area.
Optionally, the first input is used to display the first identifier in the area where the first virtual sub-object is located.
Optionally, the receiving a first input of a first virtual sub-object of a virtual object displayed on a virtual screen by a user includes:
and receiving a seventh sub-input of the first identifier and an eighth sub-input of the first virtual sub-object by the user, wherein the seventh sub-input is used for selecting the first identifier, and the eighth sub-input is used for selecting the first virtual sub-object.
Optionally, the seventh sub-input or the eighth sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The seventh sub-input may also be a seventh sub-operation, and the eighth sub-input may also be an eighth sub-operation. When the seventh sub-input or the eighth sub-input is executed, the input may be a single-point input, such as a sliding input, a click input, and the like performed by using a single finger; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously. Illustratively, the user may click the first identifier to select the first identifier, and then click the first virtual sub-object to select the first virtual sub-object; the user may also drag the first icon to the first virtual sub-object.
Illustratively, the user points a finger to an area where the first identifier is located on the virtual screen, and drags the first identifier to the area where the first virtual child object is located.
Optionally, the pointing of the finger by the user to the area of the first identifier on the virtual screen may include, but is not limited to, placing the finger by the user on the area of the first identifier on the virtual screen, or pointing the finger of the user to the area of the first identifier on the virtual screen, that is, the finger of the user is not in the area of the first identifier, but points to the area of the first identifier, and has a certain distance from the area of the first identifier.
Optionally, dragging the first identifier to the area where the first virtual sub-object is located may include, but is not limited to, a user dragging the first identifier to the area where the first virtual sub-object is located on the virtual screen, or a user dragging the first identifier to the area corresponding to the first virtual sub-object on the virtual screen, that is, when the first identifier is dragged to the area corresponding to the first virtual sub-object, a projection of the first identifier in the plane where the virtual object is located in the area where the first virtual sub-object is located. For example dragging the first identification to the area directly in front of the first virtual sub-object, which refers to the direction closer to the user.
Optionally, the receiving a first input of the user to the first virtual sub-object of the virtual screen includes:
receiving a ninth sub-input of the first identifier by the user, wherein the ninth sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first virtual sub-object is located;
the performing a first control operation includes:
executing a first control operation under the condition that a first preset condition is met;
wherein meeting the first preset condition comprises: the hand of the user stays in the area of the first virtual sub-object for a first preset time, or a third input of the user is received. Therefore, the user can execute the first control operation through an operation on the first identifier, and the operation is simple and quick.
Optionally, the ninth sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The ninth sub-input may also be a ninth sub-operation. When the ninth sub-input is executed, the ninth sub-input may be a single-point input, such as a single finger is used for sliding input, clicking input, and the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
For example, the ninth sub-input may be a gesture of the user: the user's hand points to the area where the first identifier is located and makes a gesture of taking the first identifier; the first identifier then follows the movement of the user's hand, moving wherever the hand moves, and when the user's hand moves to the first virtual sub-object, the first identifier moves to the first virtual sub-object.
Optionally, the moving of the hand of the user to the area where the first virtual sub-object is located may include, but is not limited to, moving of the hand of the user to the area where the first virtual sub-object is located on the virtual screen, or moving of the hand of the user to the area corresponding to the first virtual sub-object on the virtual screen, that is, when the hand of the user moves to the area corresponding to the first virtual sub-object, a projection of the hand of the user in a plane where the virtual object is located within the first virtual sub-object. For example, the user's hand moves to an area directly in front of the first virtual sub-object, which refers to a direction closer to the user.
Optionally, the moving of the first identifier to the area where the first virtual sub-object is located may include, but is not limited to, moving the first identifier to the area where the first virtual sub-object is located on the virtual screen, or moving the first identifier to the area corresponding to the first virtual sub-object on the virtual screen, that is, when the first identifier is moved to the area corresponding to the first virtual sub-object, a projection of the first identifier in a plane where the virtual object is located within the first virtual sub-object. For example, the first logo moves to an area directly in front of the first virtual sub-object, the directly front referring to a direction closer to the user.
Optionally, the third input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The third input may also be a third operation. When the third input is executed, the third input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the third input may be a gesture of the user, such as the user throwing the first identifier toward the first virtual sub-object, or the user placing the first identifier on the first virtual sub-object, or the user extending a finger, etc.
Optionally, the user places the first identifier in the area where the first virtual sub-object is located, which may include but is not limited to the user placing the first identifier in the area where the first virtual sub-object is located on the virtual screen, or the user placing the first identifier in the area corresponding to the first virtual sub-object on the virtual screen, that is, when the first identifier is placed in the area corresponding to the first virtual sub-object, a projection of the first identifier in the plane where the virtual object is located in the first virtual sub-object. For example, the first marker is placed in the area directly in front of the first virtual sub-object, the direct front referring to the direction closer to the user.
Illustratively, the hand of the user points to the area where the first identifier is located, and a gesture of taking the first identifier is made, then the first identifier moves along with the hand of the user, the hand of the user moves to the area directly in front of the first virtual sub-object, the first identifier moves to the area directly in front of the first virtual sub-object, the hand of the user stays in the area directly in front of the first virtual sub-object for a first preset time, for example, stays for 3 seconds, and then the first control operation may be executed.
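The first preset condition, i.e. the hand dwelling over the first virtual sub-object for a first preset time (for example 3 seconds) or an explicit third input being received, can be checked with a small timer, as in the following illustrative sketch with hypothetical names (not part of the original disclosure):

```python
import time
from typing import Optional


class DwellTrigger:
    """Reports that the first control operation should run once the user's hand has
    stayed over the area of the first virtual sub-object for a first preset time
    (e.g. 3 seconds), or immediately when an explicit confirming gesture (the third
    input) is received."""

    def __init__(self, dwell_seconds: float = 3.0) -> None:
        self.dwell_seconds = dwell_seconds
        self._entered_at: Optional[float] = None

    def update(self, hand_over_target: bool, third_input: bool = False,
               now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if third_input:
            return True                    # e.g. the user throws or places the identifier
        if not hand_over_target:
            self._entered_at = None        # the hand left the area: reset the timer
            return False
        if self._entered_at is None:
            self._entered_at = now         # the hand just entered the area
        return now - self._entered_at >= self.dwell_seconds


if __name__ == "__main__":
    trigger = DwellTrigger(dwell_seconds=3.0)
    print(trigger.update(hand_over_target=True, now=0.0))  # False: timer has just started
    print(trigger.update(hand_over_target=True, now=3.2))  # True: first control operation runs
```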
Illustratively, a hand of a user is placed in an area where the first identifier is located, and a gesture for grabbing the first identifier is made, then the first identifier moves along with the hand of the user, the hand of the user moves to the first virtual sub-object, and the first identifier moves to the first virtual sub-object, as shown in fig. 9, the user places the first identifier 2031 on the first virtual sub-object 2021, the first virtual sub-object 2021 is highlighted, and the user releases the first identifier 2031 by releasing his hand, so that the first control operation can be performed on the first identifier itself or any object related to the first identifier.
Optionally, the method further comprises:
step a, receiving a fourth input of the first identifier by a user;
optionally, the fourth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fourth input may also be a fourth operation. When the fourth input is executed, the fourth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step b, in response to the fourth input, displaying the target object content indicated by the first identifier;
for example, after the user takes the first identifier out of the first virtual sub-object with a finger and then releases the finger, the head-mounted device recognizes the gesture, and the target object content indicated by the first identifier may be displayed on the virtual screen of the head-mounted device.
Illustratively, after the user takes the first identifier out of the first virtual sub-object with a finger and then releases the finger, and the head-mounted device recognizes the gesture motion, the first identifier indicates an application program, and the application program may be opened, and a target interface of the application program may be displayed, which may be a main interface, a shortcut interface, a function interface, and the like of the application program.
Step c, receiving a fifth input of the first identifier by the user;
optionally, the fifth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fifth input may also be a fifth operation. When the fifth input is executed, the fifth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step d, in response to the fifth input, canceling the display of the target object content indicated by the first identifier.
Illustratively, the fifth input is a gesture of the user changing the hand from open to fist-making state, and the target object content indicated by the first identifier may be cancelled from being displayed.
Illustratively, the fifth input is a gesture by which the user changes the hand from open to fist-closed, which may close the application.
In this way, the target object content indicated by the first identifier can be displayed, or its display cancelled, through the user's input, which is simple and quick.
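A hypothetical sketch of the fourth and fifth inputs, again for illustration only, treats the two gestures as events that show or hide the content indicated by the first identifier:

```python
class IdentifierContentController:
    """Illustrative handler: releasing the fingers after taking the identifier out
    displays the content it indicates (e.g. opens the indicated application), while
    changing the open hand to a fist cancels the display (e.g. closes the application)."""

    def __init__(self, identifier: str, content: str) -> None:
        self.identifier = identifier
        self.content = content
        self.displayed = False

    def on_gesture(self, gesture: str) -> None:
        if gesture == "release_fingers":      # fourth input
            self.displayed = True
            print(f"displaying content of {self.identifier}: {self.content}")
        elif gesture == "open_to_fist":       # fifth input
            self.displayed = False
            print(f"cancelling display of {self.identifier}")


if __name__ == "__main__":
    ctrl = IdentifierContentController("first identifier", "target object content")
    ctrl.on_gesture("release_fingers")
    ctrl.on_gesture("open_to_fist")
```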
Optionally, the fourth input in step a comprises:
a tenth sub-input to move the first identifier from the second display area to a first spatial position, and an eleventh sub-input to trigger display of target object content indicated by the first identifier.
Optionally, the tenth sub-input or the eleventh sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The tenth sub-input may be a tenth sub-operation, and the eleventh sub-input may also be an eleventh sub-operation. When the tenth sub-input or the eleventh sub-input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the tenth sub-input comprises a first gesture, the eleventh sub-input comprises a second gesture, the first gesture is a gesture that the user takes the first identifier out of the first virtual sub-object with a finger and moves the first identifier to the first space region, and the second gesture is a gesture that the user releases the finger that is taking the first identifier.
Optionally, the plane of the first space region may be the same plane as the plane of the virtual object, or may be a plane different from the plane of the virtual object, for example, the plane of the first space region is parallel to the plane of the virtual object, and the plane of the first space region is located in front of the plane of the virtual object, where the front is a direction closer to the user.
Optionally, after receiving the fourth input of the user in step a, the method further includes:
step e, detecting, by the head-mounted device, the position of the user's hand;
step f, controlling the first identifier to move along with the user's hand;
Illustratively, in step d, a fifth input of the user is received (the user changes the hand from an open state to a fist), and in response to the fifth input, the display of the target object content indicated by the first identifier is cancelled; the first identifier is then attached to the user's hand and continues to follow its movement, so that wherever the user's hand moves, the first identifier moves with it.
Step g, executing a second control operation in a case that the user's hand moves to a sixth virtual sub-object of the N virtual sub-objects and a second preset condition is met. In this way, the first identifier can move along with the user's hand, and the second control operation is executed once the second preset condition is met, which is simple and convenient.
Wherein the meeting of the second preset condition comprises: the user's hand staying in the sixth virtual sub-object for a second preset time period, or a sixth input of the user being received.
Optionally, the sixth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The sixth input may also be a sixth operation. When the sixth input is executed, the sixth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the user's hand moving to the sixth virtual sub-object of the N virtual sub-objects may include, but is not limited to, the hand moving to the sixth virtual sub-object on the virtual screen, or moving to an area corresponding to the sixth virtual sub-object on the virtual screen, for example an area directly in front of the sixth virtual sub-object, where "directly in front" refers to the direction closer to the user.
Illustratively, the user's hand moves to the area directly in front of the sixth virtual sub-object, the first identifier moves to that area as well, and the hand stays in that area for the second preset time period, such as 3 seconds or 2 seconds, whereupon the second control operation is executed.
Illustratively, the user's hand moves to the sixth virtual sub-object, the first identifier moves to the sixth virtual sub-object, and the user stretches out a finger, whereupon the second control operation may be executed.
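As an illustrative sketch only, the dwell-time branch of the first and second preset conditions (the user's hand staying over a virtual sub-object for a preset time period, unless an explicit input arrives first) could be checked as follows; the DwellTrigger name is hypothetical, and the 2-second default merely echoes the example durations given above.

import time

SECOND_PRESET_DURATION = 2.0  # seconds; example value only

class DwellTrigger:
    """Fires a control operation when the hand stays over a virtual
    sub-object for a preset duration, or when an explicit input arrives."""
    def __init__(self, duration=SECOND_PRESET_DURATION):
        self.duration = duration
        self.entered_at = None

    def update(self, hand_over_sub_object, explicit_input=False, now=None):
        now = time.monotonic() if now is None else now
        if not hand_over_sub_object:
            self.entered_at = None   # hand left the area, reset the dwell timer
            return False
        if explicit_input:           # e.g. the user stretches out a finger
            return True
        if self.entered_at is None:
            self.entered_at = now
        return (now - self.entered_at) >= self.duration

if __name__ == "__main__":
    trigger = DwellTrigger()
    assert trigger.update(True, now=0.0) is False   # hand just arrived
    assert trigger.update(True, now=2.5) is True    # stayed long enough -> execute operation
    trigger = DwellTrigger()
    assert trigger.update(True, explicit_input=True) is True  # explicit-input trigger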
Optionally, the second display area of the virtual screen further includes a first virtual control;
before the receiving of the first input of the user to the first virtual sub-object of the virtual screen, the method further includes:
receiving a seventh input of the first virtual control by the user;
in response to the seventh input, moving the first virtual control from the second display area to a second spatial position and performing a third control operation;
in the case of receiving an eighth input by the user, outputting the first identifier;
wherein the third control operation is an operation of generating the first identifier. In this way, the first identifier can be output according to the user's input, which is simple and convenient.
Optionally, the first virtual control indicates any functionality of any program.
Optionally, the seventh input or the eighth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The seventh input may be a seventh operation and the eighth input may be an eighth operation. When the seventh input or the eighth input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the seventh input is a gesture in which the user first points a finger at the first virtual control and then points the finger towards the second spatial position; in response to the seventh input, the first virtual control follows the user's finger and moves from the second display area to the second spatial position, and the third control operation is performed.
Illustratively, the seventh input is a gesture in which the user grasps the first virtual control, moves it to the second spatial position, and opens the finger to release it.
Optionally, the plane in which the second spatial position lies may be the same plane as that of the virtual object, or a different plane; for example, it may be parallel to the plane of the virtual object and located in front of it, where "in front" refers to the direction closer to the user.
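A minimal sketch, under assumed names (VirtualControl, third_control_operation), of how dragging the first virtual control to the second spatial position could perform the third control operation that generates the first identifier, which is then output; this is illustration, not the disclosed implementation.

class VirtualControl:
    def __init__(self, function_name):
        self.function_name = function_name  # the program function the control indicates

def third_control_operation(control, spatial_position):
    # Generates the first identifier from the control; the dict layout is illustrative only.
    return {"indicates": control.function_name, "position": spatial_position}

if __name__ == "__main__":
    control = VirtualControl("new note")
    identifier = third_control_operation(control, spatial_position=(0.2, 1.1, 0.5))  # seventh input
    print("output identifier:", identifier)  # eighth input -> output the first identifier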
Optionally, the head mounted device comprises a camera;
before step 101 receives a first input of a user to a first virtual sub-object of a virtual screen, the method further includes:
acquiring an image acquired by a camera;
and displaying a virtual object on a virtual screen under the condition that the target real object is included in the image.
Optionally, the camera captures an image in a real environment, where the real environment is within a viewing angle range of the user.
Optionally, in a case that the image acquired by the camera in real time does not include the target real object, that is, the user's sight line is away from the target real object, the display of the virtual object is cancelled on the virtual screen, and when the user's sight line returns to the target real object again, the virtual object is displayed on the virtual screen.
Optionally, the first target area is the same as the area where the target real object is located, or the first target area is a part of the area where the target real object is located, or the first target area includes the area where the target real object is located, or the first target area is adjacent to the area where the target real object is located, for example, the first target area is located in front of or above the area where the target real object is located, or the like.
Optionally, the case in which the image includes the target real object includes: the target real object appears in the image, and the environment around the target real object is a target environment. For example, the target real object is a sofa, and the target environment is that a tea table is located 0.5 m in front of the sofa, a television is located 1 m in front of the tea table, and a water dispenser is located 0.3 m to the left of the sofa.
In a case that the image of the real environment captured by the camera includes the target real object, the virtual object is displayed in the first target area of the virtual screen. Illustratively, if the image captured by the camera includes a table, the virtual object is displayed in the first target area of the virtual screen, where the first target area is located on the upper surface of the table or directly above it.
In the embodiment of the invention, by acquiring the image captured by the camera and displaying the virtual object in the first target area of the virtual screen in a case that the image includes the target real object, the virtual object can be displayed whenever the user's viewing angle returns to the target area.
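The camera-driven display behaviour above can be illustrated with the following sketch; detect_target is a placeholder for whatever object recognition the head-mounted device actually performs, and the frame format is an assumption.

def detect_target(frame, target_descriptor):
    # Placeholder: a real device would run object recognition on the camera image here.
    return target_descriptor in frame.get("objects", [])

def update_display(frames, target_descriptor):
    # Display the virtual object while the target real object is in view,
    # and cancel the display when the user's line of sight leaves it.
    displayed = False
    for frame in frames:
        present = detect_target(frame, target_descriptor)
        if present and not displayed:
            print("displaying virtual object in the first target area")
            displayed = True
        elif not present and displayed:
            print("cancelling display of the virtual object")
            displayed = False

if __name__ == "__main__":
    frames = [{"objects": ["table"]}, {"objects": ["table"]}, {"objects": []}, {"objects": ["table"]}]
    update_display(frames, "table")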
Optionally, the target area of the virtual screen comprises a second identifier;
optionally, the second identifier is used to indicate the virtual object.
Before the acquiring of the image captured by the camera, the method further includes:
receiving a ninth input of the user to the second identification and the third spatial position;
optionally, the ninth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The ninth input may also be a ninth operation. When the ninth input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
In response to the ninth input, displaying a virtual object in an area corresponding to the third spatial position on the virtual screen, where the third spatial position is the position of the target real object.
Optionally, the third target region is the same as the third spatial region, or the third target region is a part of the third spatial region, or the third target region includes the third spatial region, or the third target region is adjacent to the third spatial region, for example, the third target region is located in front of or above the third spatial region, or the like.
Illustratively, as shown in fig. 10(a), the second identifier 501 is located in a second target area 502 of the virtual screen. A ninth input of the user to the second identifier 501 and a third spatial area 503 is received, for example the second identifier is dragged to the third spatial area, where the target real object is a wall. As shown in fig. 10(b), the virtual object 201 is then displayed in the third target area, which is a part of the third spatial area 503, and the user may continue to resize the virtual object with a finger.
The head-mounted device stores information of the virtual object whose area has been set and whose size has been adjusted; for example, it stores the spatial coordinates of the virtual object and information about its surroundings (the virtual object is on one wall, the right half of that wall includes a door, and the left side of the wall is perpendicularly connected to another wall that includes a window), and it may also store image information of the surroundings of the target real object. When the user's viewing angle returns to the area where the target real object is located, the virtual object is displayed. Further illustratively, when the user's viewing angle falls in the third spatial area, the camera captures an image of the real environment, the captured image is compared with the previously stored image of the surroundings of the target real object, and the virtual object is displayed in the third target area in a case that the target real object, the position information of the surroundings, and the image information all match.
Optionally, the second identifier is always displayed on the virtual screen of the head-mounted device, that is, the user can see the second identifier at any time. The user may drag the second identifier to any one or more spatial regions, the head-mounted device records the spatial coordinates of each placed virtual object, and the user can then see the virtual object in each of those spatial regions.
In the embodiment of the present invention, by receiving a ninth input of the user to the second identifier and the third spatial region, and displaying the virtual object in the third target region corresponding to the third spatial region on the virtual screen in response to the ninth input, the user can place the virtual object in a plurality of spatial regions through simple inputs and can then see the virtual object in those spatial regions.
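A hypothetical sketch of storing and restoring a placed virtual object: the device records the spatial coordinates of the object together with an image of the surroundings of the target real object, and re-displays the object when the current camera view matches the stored record; AnchorRecord, AnchorStore and the byte-for-byte comparison are illustrative assumptions (a real device would use image or feature matching).

from dataclasses import dataclass

@dataclass
class AnchorRecord:
    coordinates: tuple        # stored spatial coordinates of the virtual object
    environment_image: bytes  # stored image of the surrounding environment
    virtual_object: str

class AnchorStore:
    def __init__(self):
        self.records = []

    def save(self, record):
        self.records.append(record)

    def restore(self, current_view_image, matches):
        # 'matches' stands in for the comparison between the stored environment
        # image and the current camera image.
        return [r.virtual_object for r in self.records
                if matches(r.environment_image, current_view_image)]

if __name__ == "__main__":
    store = AnchorStore()
    store.save(AnchorRecord((1.0, 2.0, 0.3), b"wall-with-door", "virtual object 201"))
    visible = store.restore(b"wall-with-door", matches=lambda a, b: a == b)
    print("re-display:", visible)  # shown again once the view matches the stored surroundings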
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
As shown in fig. 11, an embodiment of the present invention provides a head-mounted device 120, where the head-mounted device 120 includes:
a first receiving module 121, configured to receive a first input of a first virtual sub-object of a virtual object displayed on a virtual screen from a user;
a first processing module 122, configured to perform a first control operation in response to the first input, the first control operation being associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
Optionally, the first input comprises a first sub-input to the first virtual sub-object and a second sub-input to a second virtual sub-object; the first virtual sub-object is associated with a first object and the second virtual sub-object is associated with a second object; the first processing module 122 is specifically configured to merge the first virtual sub-object and the second virtual sub-object into a third virtual sub-object, and execute the first control operation on the first object and the second object; wherein the first sub-input is used to select the first virtual sub-object and the second sub-input is used to select the second virtual sub-object.
Optionally, the first virtual sub-object is associated with a third object; the first processing module 122 is specifically configured to split the first virtual sub-object into at least two fourth virtual sub-objects, and execute the first control operation on the third object; wherein different ones of the fourth virtual sub-objects are associated with different objects.
Optionally, the first receiving module 121 is specifically configured to receive a third sub-input of the user to the first virtual sub-object and a fourth sub-input of the user to a fifth virtual sub-object, where the third sub-input is used to select the first virtual sub-object, and the fourth sub-input is used to select the fifth virtual sub-object; the first virtual sub-object is associated with a fourth object, and the fifth virtual sub-object is associated with a fifth object; the first processing module 122 is specifically configured to perform at least one of the following: updating the fifth object associated with the fifth virtual sub-object to the fourth object associated with the first virtual sub-object; updating the fourth object associated with the first virtual sub-object to the fifth object associated with the fifth virtual sub-object.
Optionally, the first receiving module 121 is specifically configured to receive a fifth sub-input of the user to the first virtual sub-object and a sixth sub-input of the user to the first real object, where the fifth sub-input is used to select the first virtual sub-object, and the sixth sub-input is used to select the first real object; the first processing module 122 is specifically configured to display the first virtual sub-object in a first real object area, and establish an association relationship between the first virtual sub-object and the first real object; the head-mounted device further comprises: the first display module is used for displaying the first virtual sub-object under the condition that the camera of the device acquires the first real object; and the first real object area is an area where the first real object is located.
Optionally, the head-mounted device further comprises: the second receiving module is used for receiving a second input of the user; a second display module for displaying a first virtual sub-object of a virtual object on a virtual screen in response to the second input, the first virtual sub-object being created based on the second input.
Optionally, the head-mounted device further comprises: and the updating module is used for updating the display size of the first virtual sub-object.
Optionally, the head-mounted device further comprises: and the rotating module is used for rotating the first virtual sub-object around a target axis.
Optionally, the head-mounted device further comprises: and the third display module is used for displaying M identifications in a second display area of the virtual object, and each identification indicates a different object.
Optionally, the first input is used to display the first identifier to an area where the first virtual sub-object is located.
Optionally, the first receiving module 121 is specifically configured to receive a seventh sub-input of the first identifier and an eighth sub-input of the first virtual sub-object by the user, where the seventh sub-input is used to select the first identifier, and the eighth sub-input is used to select the first virtual sub-object.
Optionally, the first receiving module 121 is specifically configured to receive a ninth sub-input of the first identifier from the user, where the ninth sub-input is used to control the first identifier to move to a position of the hand along with the hand of the user, and the position of the hand is an area where the first virtual sub-object is located; the first processing module 122 is specifically configured to execute a first control operation when a first preset condition is met; wherein, the meeting of the first preset condition comprises: and the hand of the user stays in the area of the first virtual sub-object for a first preset time, or receives a third input of the user.
Optionally, the head-mounted device further comprises: the third receiving module is used for receiving fourth input of the first identifier by the user; a fourth display module, configured to display, in response to the fourth input, the target object content indicated by the first identifier; the fourth receiving module is used for receiving a fifth input of the first identifier by the user; and the second processing module is used for responding to the fifth input and canceling the display of the target object content indicated by the first identification.
Optionally, the fourth input comprises: a tenth sub-input to move the first identifier from the second display area to a first spatial position, and an eleventh sub-input to trigger display of target object content indicated by the first identifier.
Optionally, the head-mounted device further comprises: the third processing module is used for controlling the first identifier to follow the hand movement of the user; the fourth processing module is used for executing a second control operation when the hand of the user moves to a sixth virtual sub-object of the N virtual sub-objects and meets a second preset condition; wherein, the meeting of the second preset condition comprises: and the hand of the user stays in the sixth virtual sub-object for a second preset time, or receives a sixth input of the user.
Optionally, the second display area of the virtual screen further includes a first virtual control; the head-mounted device further comprises: a fifth receiving module, configured to receive a seventh input to the first virtual control from the user; a fifth processing module, configured to move the first virtual control from the second display area to a second spatial location and perform a third control operation in response to the seventh input; the output module is used for outputting the first identifier under the condition that an eighth input of a user is received; wherein the third control operation is an operation that generates the first identifier.
Optionally, the head mounted device comprises a camera; the head-mounted device further comprises: the acquisition module is used for acquiring images acquired by the camera; and the fifth display module is used for displaying the virtual object on the virtual screen under the condition that the target real object is included in the image.
Optionally, the target area of the virtual screen comprises a second identifier; the head-mounted device further comprises: a sixth receiving module, configured to receive a ninth input of the user to the second identifier and the third spatial position; and a sixth display module, configured to display, in response to the ninth input, a virtual object in an area corresponding to the third spatial position on the virtual screen, where the third spatial position is the position of the target real object.
Optionally, the N virtual sub-objects are separated by a separation identifier.
The head-mounted device provided by the embodiment of the present invention can implement each process implemented by the head-mounted device in the above method embodiments, and is not described herein again to avoid repetition.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
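For illustration only, the division of labour between the first receiving module 121 and the first processing module 122 shown in fig. 11 could be mirrored by a skeleton such as the following; the operation table is an example and not part of the disclosure.

class FirstProcessingModule:
    # Example mapping from a recognized first input to the associated first control operation.
    OPERATIONS = {"merge": "merge sub-objects", "split": "split sub-object", "drag": "move identifier"}

    def process(self, first_input, first_virtual_sub_object):
        operation = self.OPERATIONS.get(first_input, "no-op")
        print(f"performing '{operation}' on {first_virtual_sub_object}")

class FirstReceivingModule:
    def __init__(self, processing_module):
        self.processing_module = processing_module

    def receive(self, first_input, first_virtual_sub_object):
        # Forwards the received first input to the processing module.
        self.processing_module.process(first_input, first_virtual_sub_object)

if __name__ == "__main__":
    device = FirstReceivingModule(FirstProcessingModule())
    device.receive("merge", "first virtual sub-object")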
Fig. 12 is a schematic diagram of a hardware structure of a head-mounted device for implementing various embodiments of the present invention, and as shown in fig. 12, the head-mounted device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the configuration of the head-mounted device shown in fig. 12 does not constitute a limitation of the head-mounted device, and that the head-mounted device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In embodiments of the present invention, the head-mounted device includes, but is not limited to, VR glasses, AR glasses, MR glasses, or VR helmets, AR helmets, MR helmets, and the like.
Wherein the user input unit 707 is configured to receive a first input by a user to a first virtual sub-object of a virtual object displayed on the virtual screen; a processor 710 for performing a first control operation in response to the first input, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
The embodiment of the invention provides a head-mounted device, which can receive a first input of a first virtual sub-object of a virtual object displayed on a virtual screen from a user; in response to the first input, performing a first control operation, the first control operation being associated with the first input; the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process; specifically, it receives downlink data from a base station and then sends the received downlink data to the processor 710 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The head-mounted device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the head-mounted device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of a still picture or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sounds and may be capable of processing such sounds into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.
The head-mounted device 700 also includes at least one sensor 705, such as a gesture sensor, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the head-mounted device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing the attitude of the head-mounted device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tap detection). The sensors 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The display unit 706 may also include a hologram device, a projector (not shown in the drawings), or the like; the hologram device may form a three-dimensional (3D) image (hologram) in the air by using the interference of light, and the projector may display an image by projecting light onto a screen. The screen may be located inside or outside the head-mounted device.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the head-mounted device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061; when the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 12 as two separate components to implement the input and output functions of the head-mounted device, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the head-mounted device, which is not limited herein.
The interface unit 708 is an interface through which an external device is connected to the head-mounted apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the headset 700 or may be used to transmit data between the headset 700 and an external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the head-mounted device. Further, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is a control center of the head-mounted device, connects various parts of the whole head-mounted device by using various interfaces and lines, and performs various functions of the head-mounted device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby performing overall monitoring of the head-mounted device. Processor 710 may include one or more processing units; alternatively, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710, and that the processor 710 may detect a user's gesture and determine a control command corresponding to the gesture in accordance with embodiments of the present invention.
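As a sketch under stated assumptions, the gesture-to-command step mentioned for the processor 710 could be as simple as a lookup table; the gesture names and commands below are illustrative and are not defined by the disclosure.

GESTURE_TO_COMMAND = {
    "grab": "pick up identifier",
    "release": "drop identifier / execute control operation",
    "open_to_fist": "cancel display of target object content",
    "point": "select virtual control",
}

def determine_command(gesture):
    # Unrecognized gestures are ignored.
    return GESTURE_TO_COMMAND.get(gesture, "ignore")

if __name__ == "__main__":
    for g in ("grab", "release", "wave"):
        print(g, "->", determine_command(g))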
The head-mounted device 700 may also include a power supply 711 (e.g., a battery) for powering the various components, and optionally, the power supply 711 may be logically coupled to the processor 710 via a power management system to implement functions such as managing charging, discharging, and power consumption via the power management system.
In addition, the head-mounted device 700 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides a head-mounted device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program, when executed by the processor 710, implements each process of the operation control method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the operation control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a head-mounted device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (28)

1. An operation control method applied to a head-mounted device, the method comprising:
receiving a first input of a user to a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, performing a first control operation, the first control operation being associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different objects, and N is a positive integer;
the head-mounted device comprises a camera;
before the receiving of the first input of the user to the first virtual sub-object of the virtual screen, the method further includes:
acquiring an image acquired by a camera;
displaying a virtual object on a virtual screen under the condition that the target real object is included in the image;
the virtual object is an AR model which is constructed in the head-mounted equipment according to the characteristic information of the real object obtained by analyzing the real object by the head-mounted equipment.
2. The method of claim 1, wherein the first input comprises a first sub-input to the first virtual sub-object and a second sub-input to a second virtual sub-object; the first virtual sub-object is associated with a first object and the second virtual sub-object is associated with a second object;
the performing a first control operation in response to the first input comprises:
merging the first virtual sub-object and the second virtual sub-object into a third virtual sub-object, and executing the first control operation on the first object and the second object;
wherein the first sub-input is used to select the first virtual sub-object and the second sub-input is used to select the second virtual sub-object.
3. The method of claim 1, wherein the first virtual sub-object is associated with a third object;
the performing a first control operation in response to the first input comprises:
splitting the first virtual sub-object into at least two fourth virtual sub-objects, and executing the first control operation on the third object;
wherein different ones of the fourth virtual sub-objects are associated with different objects.
4. The method of claim 1, wherein receiving a first input from a user into a first virtual sub-object of a virtual screen comprises:
receiving a third sub-input of the user to the first virtual sub-object and a fourth sub-input to a fifth virtual sub-object, wherein the third sub-input is used for selecting the first virtual sub-object, and the fourth sub-input is used for selecting the fifth virtual sub-object;
the first virtual sub-object is associated with a fourth object, and the fifth virtual sub-object is associated with a fifth object;
the performing, in response to the first input, a first control operation comprising at least one of:
updating a fifth object associated with the fifth virtual sub-object to a fourth object associated with the first virtual sub-object;
updating the fourth object associated with the first virtual sub-object to a fifth object associated with the fifth virtual sub-object.
5. The method of claim 1, wherein receiving a first input from a user into a first virtual sub-object of a virtual object displayed on a virtual screen comprises:
receiving a fifth sub-input of a user to the first virtual sub-object and a sixth sub-input of a first real object, wherein the fifth sub-input is used for selecting the first virtual sub-object, and the sixth sub-input is used for selecting the first real object;
the performing, in response to the first input, a first control operation includes:
displaying the first virtual sub-object in a first real object area, and establishing an association relation between the first virtual sub-object and the first real object;
displaying the first virtual sub-object under the condition that the first real object is acquired by a camera of the equipment;
wherein the first real object area is an area where the first real object is located.
6. The method of claim 1, wherein prior to receiving a first input by a user into a first virtual sub-object of a virtual object displayed on a virtual screen, further comprising:
receiving a second input of the user;
in response to the second input, displaying a first virtual sub-object of a virtual object on a virtual screen, the first virtual sub-object being created based on the second input.
7. The method of claim 1, wherein the performing a first control operation comprises:
updating the display size of the first virtual child object.
8. The method of claim 1, wherein the performing a first control operation comprises:
rotating the first virtual sub-object about a target axis.
9. The method of claim 1, wherein prior to receiving a first input by a user into a first virtual sub-object of a virtual object displayed on a virtual screen, further comprising:
displaying M identifiers in a second display area of the virtual object, wherein each identifier indicates a different object;
the M identifications comprise a first identification, the first identification indicates a target object, and M is a positive integer.
10. The method of claim 9, wherein the first input is used to display the first identifier to an area where a first virtual sub-object is located.
11. The method of claim 10, wherein receiving a first input from a user into a first virtual sub-object of a virtual object displayed on a virtual screen comprises:
and receiving a seventh sub-input of the first identifier and an eighth sub-input of the first virtual sub-object by the user, wherein the seventh sub-input is used for selecting the first identifier, and the eighth sub-input is used for selecting the first virtual sub-object.
12. The method of claim 10, wherein receiving a first input from a user into a first virtual sub-object of a virtual screen comprises:
receiving a ninth sub-input of the first identifier by the user, wherein the ninth sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first virtual sub-object is located;
the performing a first control operation includes:
executing a first control operation under the condition that a first preset condition is met;
wherein the meeting of the first preset condition comprises: the hand of the user stays in the area where the first virtual sub-object is located for a first preset time, or a third input of the user is received.
13. The method of claim 9, further comprising:
receiving a fourth input of the user to the first identifier;
in response to the fourth input, displaying the target object content indicated by the first identification;
receiving a fifth input of the first identifier by the user;
canceling the display of the target object content indicated by the first identifier in response to the fifth input.
14. The method of claim 13, wherein the fourth input comprises:
a tenth sub-input to move the first identifier from the second display area to a first spatial position, and an eleventh sub-input to trigger display of target object content indicated by the first identifier.
15. The method of claim 13, wherein after receiving the fourth input from the user, further comprising:
controlling the first identifier to follow hand movement of the user;
executing a second control operation when the hand of the user moves to a sixth virtual sub-object of the N virtual sub-objects and a second preset condition is met;
wherein, the meeting of the second preset condition comprises: and the hand of the user stays in the sixth virtual sub-object for a second preset time, or receives a sixth input of the user.
16. The method of claim 9, wherein the second display area of the virtual screen further comprises a first virtual control;
before the receiving of the first input of the user to the first virtual sub-object of the virtual screen, the method further includes:
receiving a seventh input of the first virtual control by the user;
in response to the seventh input, moving the first virtual control from the second display area to a second spatial position and performing a third control operation;
in the case of receiving an eighth input by the user, outputting the first identifier;
wherein the third control operation is an operation that generates the first identifier.
17. The method of claim 1, wherein the target area of the virtual screen comprises a second identifier;
before the image of acquireing the camera collection, still include:
receiving a ninth input of the user to the second identification and the third spatial position;
and responding to the ninth input, and displaying a virtual object in an area corresponding to the third spatial position on the virtual screen, wherein the third spatial position is the position of the target real object.
18. The method of claim 1, wherein the N virtual sub-objects are separated by a separation identifier.
19. A head-mounted device, comprising:
the device comprises a first receiving module, a second receiving module and a display module, wherein the first receiving module is used for receiving first input of a user on a first virtual sub-object of a virtual object displayed on a virtual screen;
a first processing module for performing a first control operation in response to the first input, the first control operation being associated with the first input;
the virtual object comprises a first display area, the first display area comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, different virtual sub-objects are associated with different target objects, and N is a positive integer;
the head-mounted device comprises a camera;
before the receiving of the first input of the user to the first virtual sub-object of the virtual screen, the method further includes:
acquiring an image acquired by a camera;
displaying a virtual object on a virtual screen under the condition that the target real object is included in the image;
wherein the virtual object is an AR model constructed in the head-mounted device according to feature information of a real object, the feature information being obtained by the head-mounted device through analyzing the real object.
20. The head-mounted device of claim 19, wherein the first input comprises a first sub-input to the first virtual sub-object and a second sub-input to a second virtual sub-object; the first virtual sub-object is associated with a first object and the second virtual sub-object is associated with a second object;
the first processing module is specifically configured to merge the first virtual sub-object and the second virtual sub-object into a third virtual sub-object, and execute the first control operation on the first object and the second object;
wherein the first sub-input is used to select the first virtual sub-object and the second sub-input is used to select the second virtual sub-object.
21. The head-mounted device of claim 19, wherein the first virtual sub-object is associated with a third object;
the first processing module is specifically configured to split the first virtual sub-object into at least two fourth virtual sub-objects, and execute the first control operation on the third object;
wherein different ones of the fourth virtual sub-objects are associated with different objects.
22. The head-mounted device of claim 19, wherein the first receiving module is specifically configured to receive a third sub-input of the first virtual sub-object and a fourth sub-input of a fifth virtual sub-object from a user, wherein the third sub-input is used for selecting the first virtual sub-object, and the fourth sub-input is used for selecting the fifth virtual sub-object;
the first virtual sub-object is associated with a fourth object, and the fifth virtual sub-object is associated with a fifth object;
the first processing module is specifically configured to execute at least one of the following:
updating a fifth object associated with the fifth virtual sub-object to a fourth object associated with the first virtual sub-object;
updating the fourth object associated with the first virtual sub-object to a fifth object associated with the fifth virtual sub-object.
23. The head-mounted device of claim 19, wherein the first receiving module is specifically configured to receive a fifth sub-input of the user to the first virtual sub-object and a sixth sub-input of the user to the first real object, wherein the fifth sub-input is used to select the first virtual sub-object, and the sixth sub-input is used to select the first real object;
the first processing module is specifically configured to display the first virtual sub-object in a first real object area, and establish an association relationship between the first virtual sub-object and the first real object;
the head-mounted device further comprises:
the first display module is used for displaying the first virtual sub-object under the condition that the camera of the equipment acquires the first real object;
wherein the first real object area is an area where the first real object is located.
24. The head-mounted apparatus of claim 19, further comprising:
the second receiving module is used for receiving a second input of the user;
a second display module for displaying a first virtual sub-object of a virtual object on a virtual screen in response to the second input, the first virtual sub-object being created based on the second input.
25. The head-mounted apparatus of claim 19, further comprising:
and the updating module is used for updating the display size of the first virtual sub-object.
26. The head-mounted apparatus of claim 19, further comprising:
and the rotating module is used for rotating the first virtual sub-object around a target axis.
27. A head-mounted device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the operation control method according to any one of claims 1 to 18.
28. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the operation control method according to any one of claims 1 to 18.
CN202010031307.3A 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium Active CN111240483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031307.3A CN111240483B (en) 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010031307.3A CN111240483B (en) 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium

Publications (2)

Publication Number Publication Date
CN111240483A CN111240483A (en) 2020-06-05
CN111240483B true CN111240483B (en) 2022-03-29

Family

ID=70870932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031307.3A Active CN111240483B (en) 2020-01-13 2020-01-13 Operation control method, head-mounted device, and medium

Country Status (1)

Country Link
CN (1) CN111240483B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112735393B (en) * 2020-12-29 2023-11-24 深港产学研基地(北京大学香港科技大学深圳研修院) Method, device and system for speech recognition of AR/MR equipment
CN112995506B (en) * 2021-02-09 2023-02-07 维沃移动通信(杭州)有限公司 Display control method, display control device, electronic device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104169861A (en) * 2014-02-26 2014-11-26 华为技术有限公司 Method and apparatus for combining contact information, and terminal
CN106527689A (en) * 2016-10-13 2017-03-22 广州视源电子科技股份有限公司 User interface interaction method and system for virtual reality system
CN108604121A (en) * 2016-05-10 2018-09-28 谷歌有限责任公司 Both hands object manipulation in virtual reality
CN109427096A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 A kind of automatic guide method and system based on augmented reality
CN110420456A (en) * 2018-10-11 2019-11-08 网易(杭州)网络有限公司 The method and device of selecting object, computer storage medium, electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11023093B2 (en) * 2018-05-30 2021-06-01 Microsoft Technology Licensing, Llc Human-computer interface for computationally efficient placement and sizing of virtual objects in a three-dimensional representation of a real-world environment

Also Published As

Publication number Publication date
CN111240483A (en) 2020-06-05

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant