CN111258482A - Information sharing method, head-mounted device, and medium

Info

Publication number
CN111258482A
Authority
CN
China
Prior art keywords
virtual
input
sub
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010031689.XA
Other languages
Chinese (zh)
Inventor
陈喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010031689.XA priority Critical patent/CN111258482A/en
Publication of CN111258482A publication Critical patent/CN111258482A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/1613 Constructional details or arrangements for portable computers
    • G06F 1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Abstract

The embodiment of the invention discloses an information sharing method, a head-mounted device, and a medium, relates to the technical field of communication, and can solve the problems in the prior art that the information sharing process is cumbersome and the operation is inconvenient. The method comprises the following steps: receiving a first input of a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen; and in response to the first input, sending first information to a first contact associated with the first side of the first virtual sub-object. The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer. This enables rapid information sharing with simple, convenient operation.

Description

Information sharing method, head-mounted device, and medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an information sharing method, a head-mounted device and a medium.
Background
In the interaction logic of existing instant messaging software, transmitting and sharing text, pictures, audio, video, and other media between users requires a large number of taps, swipes, and similar finger operations on a screen, making the process cumbersome and the operation inconvenient.
Disclosure of Invention
The embodiment of the invention provides an information sharing method, which can solve the problems in the prior art that the information sharing process is cumbersome and the operation is inconvenient.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information sharing method, including:
receiving a first input of a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen;
in response to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object;
the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a head-mounted device, including:
the first receiving module is used for receiving a first input of a user to a first surface of a first virtual sub-object of a first virtual object displayed on a virtual screen;
a first sending module, configured to send, in response to the first input, first information to a first contact associated with a first side of the first virtual sub-object;
the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a head-mounted device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the information sharing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the information sharing method according to the first aspect are implemented.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen, and, in response to the first input, sends first information to a first contact associated with the first side of the first virtual sub-object. The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer. This enables rapid information sharing with simple, convenient operation.
Drawings
Fig. 1 is a flowchart of an information sharing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a virtual object of the information sharing method according to the embodiment of the present invention;
fig. 3(a) is one of schematic diagrams of sending information to a contact in an information sharing method according to an embodiment of the present invention;
fig. 3(b) is a second schematic diagram illustrating a message sent to a contact in the message sharing method according to the embodiment of the present invention;
fig. 4 is a schematic diagram illustrating information content displayed by the information sharing method according to the embodiment of the present invention;
fig. 5 is a schematic diagram illustrating an information sharing method according to an embodiment of the present invention for sending information to a plurality of contacts;
fig. 6(a) is a schematic diagram of a rotating first virtual sub-object of the information sharing method according to the embodiment of the present invention;
fig. 6(b) is a schematic diagram illustrating that each surface of a virtual sub-object of the information sharing method represents different types of contacts according to the embodiment of the present invention;
fig. 7(a) is a schematic diagram illustrating a display position of a virtual object according to an information sharing method provided in an embodiment of the present invention;
fig. 7(b) is a schematic diagram of a virtual object displayed in a target area in the information sharing method according to the embodiment of the present invention;
fig. 8 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
fig. 9 is a hardware schematic diagram of a head-mounted device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the invention provides an information sharing method, which comprises: receiving a first input of a user to a first surface of a first virtual sub-object of a first virtual object displayed on a virtual screen; and in response to the first input, sending first information to a first contact associated with the first surface of the first virtual sub-object. The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer. This enables rapid information sharing with simple, convenient operation, and can solve the problems in the prior art that the information sharing process is cumbersome and the operation is inconvenient.
Virtual Reality (VR) technology is a computer simulation system technology that creates and experiences a Virtual world. It utilizes a computer to create a simulated environment into which a user is immersed using a systematic simulation of interactive three-dimensional dynamic views and physical behaviors with multi-source information fusion.
Augmented Reality (AR) technology is a technology that integrates real world information and virtual world information, and virtual information content is superimposed in the real world through various sensing devices, so that real world content and virtual information content can be simultaneously embodied in the same picture and space, and natural interaction between a user and a virtual environment is realized.
AR glasses move the imaging system away from the lens by means of optical imaging elements such as optical waveguides, preventing the imaging system from blocking the external view. An optical waveguide is a high-transmittance medium, similar to an optical fiber, that guides light waves as they propagate within it; the light output by the imaging system is combined with light reflected from the real scene and delivered to the human eye. Hand images captured by a camera are processed and analyzed with computer vision algorithms, enabling hand tracking and recognition.
Mixed Reality (MR) technology combines virtual information with a view of the real world, or adds a virtual representation of a real-world object to a virtual environment.
The head-mounted device in the embodiment of the invention can be VR glasses, AR glasses, MR glasses, or VR helmet, AR helmet, MR helmet, etc.
According to the related art, various head-mounted devices may sense a direction of acceleration, angular acceleration, or inclination, and display a screen corresponding to the sensed information. The head mounted device may change and display the screen based on the user's movement.
It should be noted that, in the embodiment of the present invention, the first head-mounted device and the second head-mounted device may be the same head-mounted device (e.g., both AR glasses), or may be different head-mounted devices (e.g., the first head-mounted device is AR glasses, and the second head-mounted device is a VR helmet), which is not limited in this embodiment of the present invention.
The virtual screen in the embodiment of the invention is a virtual reality screen, an augmented reality screen or a mixed reality screen of the head-mounted equipment.
The virtual screen in the embodiment of the present invention may be any carrier that can be used to display content projected by a projection device when content is displayed by using AR technology. The projection device may be a projection device using AR technology, such as a head-mounted device or an AR device in the embodiment of the present invention.
When displaying content on the virtual screen by using the AR technology, the projection device may project a virtual scene acquired by (or internally integrated with) the projection device, or a virtual scene and a real scene onto the virtual screen, so that the virtual screen may display the content, thereby showing an effect of superimposing the real scene and the virtual scene to a user.
In connection with different scenarios of AR technology applications, the virtual screen may generally be a display screen of an electronic device (e.g. a mobile phone), a lens of AR glasses, a windshield of a car, a wall of a room, etc. any possible carrier.
The following describes an exemplary process of displaying content on a virtual screen by using AR technology, by taking the virtual screen as a display screen of an electronic device, a lens of AR glasses, and a windshield of an automobile as examples.
In one example, when the virtual screen is a display screen of an electronic device, the projection device may be the electronic device. The electronic equipment can acquire a real scene in the area where the electronic equipment is located through the camera of the electronic equipment, the real scene is displayed on the display screen of the electronic equipment, then the electronic equipment can project a virtual scene acquired by the electronic equipment (or internally integrated) onto the display screen of the electronic equipment, so that the virtual scene can be displayed in a superposition mode in the real scene, and a user can see the effect of the real scene and the virtual scene after superposition through the display screen of the electronic equipment.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the acquired (or internally integrated) virtual scene onto the lenses of the AR glasses, so that the user can see the display effect of the real scene and the virtual scene after superposition through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When the user is located in the automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the acquired (or internally integrated) virtual scene onto the windshield of the automobile, so that the user can see the display effect of the real scene and the virtual scene after superposition through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, for example, it may be a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the acquired (or internally integrated) virtual scene into the real space, so that the user can see the display effect of the real scene and the virtual scene after superposition in the real space.
The virtual object in the embodiment of the present invention is an object in virtual information. Optionally, the virtual object is content displayed on a screen or lens of the head-mounted device that corresponds to the surrounding environment the user is viewing but has no physical embodiment outside the display.
The virtual object may be an AR object. It should be noted that the AR object may be understood as: the AR device analyzes the real object to obtain feature information of the real object (e.g., type information of the real object, appearance information of the real object (e.g., structure, color, shape, etc.), position information of the real object in space, etc.), and constructs an AR model in the AR device according to the feature information.
Optionally, in this embodiment of the present invention, the target virtual object may specifically be a virtual image, a virtual pattern, a virtual character, a virtual picture, or the like.
The head-mounted device in the embodiment of the invention can be a head-mounted device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and embodiments of the present invention are not specifically limited.
An execution main body of the information sharing method provided in the embodiment of the present invention may be the head-mounted device, or may also be a functional module and/or a functional entity capable of implementing the method in the head-mounted device, and specifically may be determined according to actual use requirements, which is not limited in the embodiment of the present invention. The information sharing method provided by the embodiment of the invention is exemplarily described below by taking a head-mounted device as an example.
Referring to fig. 1, an embodiment of the present invention provides an information sharing method applied to a head-mounted device, where the method may include steps 101 to 102 described below.
Step 101, receiving a first input of a user to a first surface of a first virtual sub-object of a first virtual object displayed on a virtual screen.
Optionally, the first input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The first input may also be a first operation. When the first input is executed, the first input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the head-mounted device includes a camera, and the camera is configured to collect a hand image of the user and obtain a gesture action of the user through a gesture recognition technology.
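For illustration, a minimal Kotlin sketch of this gesture-recognition step follows. The recognizer function and all names (Gesture, GestureInput, recognize, onGesture) are assumptions for this sketch, not the patent's implementation; a real device would use a trained hand-tracking model.

```kotlin
// Camera frames pass through a recognizer that yields a gesture label,
// which then drives the first input described in steps 101/102.
enum class Gesture { GRAB, DRAG, RELEASE, POINT, NONE }

class GestureInput(
    private val recognize: (frame: ByteArray) -> Gesture, // assumed recognizer
    private val onGesture: (Gesture) -> Unit
) {
    fun onCameraFrame(frame: ByteArray) {
        val g = recognize(frame)
        if (g != Gesture.NONE) onGesture(g) // forward recognized gestures only
    }
}
```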
Step 102, responding to the first input, and sending first information to a first contact associated with a first surface of the first virtual sub-object;
the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
Optionally, the camera may acquire image information and depth information, and a three-dimensional reconstruction algorithm is used to model a space to obtain three-dimensional space information, where the virtual object is a three-dimensional model designed by a three-dimensional modeling method.
Optionally, the first information includes, but is not limited to, files, text, etc., such as video, pictures, audio, text, etc.
Optionally, the user moves the first information onto the first surface of the first virtual sub-object; after the first surface of the first virtual sub-object is highlighted, the user releases the first information, and the first information is sent to the first contact.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen, and, in response to the first input, sends first information to a first contact associated with the first side of the first virtual sub-object. The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer. This enables rapid information sharing with simple, convenient operation.
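To make the face-to-contact mapping concrete, here is a minimal sketch in Kotlin. Every name (Contact, VirtualFace, VirtualSubObject, sendTo, and so on) is a hypothetical placeholder chosen for this illustration, not an API from the patent or any real SDK.

```kotlin
// One face of a sub-object may be associated with at most one contact.
data class Contact(val id: String, val name: String)

data class VirtualFace(val faceIndex: Int, val contact: Contact?)

data class VirtualSubObject(val faces: List<VirtualFace>) {
    fun faceTowardUser(): VirtualFace = faces.first() // orientation simplified
}

// The first virtual object: a 3D object made of N sub-objects.
data class VirtualObject(val subObjects: List<VirtualSubObject>)

// First input on a face -> send first information to that face's contact.
fun onFirstInput(face: VirtualFace, firstInformation: ByteArray) {
    val contact = face.contact ?: return // a face may have no associated contact
    sendTo(contact, firstInformation)    // one gesture, one recipient
}

fun sendTo(contact: Contact, payload: ByteArray) {
    println("sending ${payload.size} bytes to ${contact.name}") // placeholder transport
}
```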
Optionally, the N virtual sub-objects are separated by separation identifiers.
Optionally, the separation identifier is an opaque line, where opaque means that the transparency of the line is less than 100%; or the separation identifiers are gaps having a certain width, or the like.
Optionally, a second virtual object is included on the virtual screen;
before step 101, the method further comprises:
step 1001, displaying M identifiers on the second virtual object, wherein each identifier indicates different information;
the M identifiers comprise a first identifier, the first identifier indicates the first information, and M is a positive integer.
Optionally, the second virtual object includes different information, such as text, audio/video files, and the like, and the information is displayed on the second virtual object through corresponding identifiers, where the identifiers may be icons, symbols, and the like, and each identifier indicates different information.
Illustratively, as shown in fig. 2, the first virtual object 201 includes N virtual sub-objects, the virtual sub-objects include at least one surface, different surfaces are associated with different contacts, a first surface 20111 of the first virtual sub-object 2011 is associated with a first contact, and the second virtual object 202 displays M identifiers, each of which indicates different information, and the first identifier 2021 indicates the first information.
Optionally, after sending the first information to the first contact associated with the first side of the first virtual sub-object, the first identifier is still displayed on the second virtual object.
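A short sketch of the M identifiers on the second virtual object follows, assuming each identifier (icon or symbol) maps to a distinct piece of information and persists after sending, as described above. The type and method names are invented for illustration.

```kotlin
// Each identifier indicates different information (text, files, media, ...).
sealed interface SharedInfo
data class TextInfo(val text: String) : SharedInfo
data class FileInfo(val path: String, val mimeType: String) : SharedInfo

class SecondVirtualObject {
    private val byIdentifier = linkedMapOf<String, SharedInfo>() // M identifiers

    fun addIdentifier(id: String, info: SharedInfo) { byIdentifier[id] = info }

    // Sending does not remove the identifier; it stays displayed afterward.
    fun infoFor(id: String): SharedInfo? = byIdentifier[id]
}
```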
Optionally, in step 101, the first input is used to display the first identifier to an area where the first surface of the first virtual sub-object is located.
Optionally, step 101 specifically includes step 1011 or step 1012:
step 1011, receiving a first sub-input of the first identifier and a second sub-input of the first surface of the first virtual sub-object, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first surface of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture.
Optionally, the first sub-input or the second sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The first sub-input or the second sub-input may also be a first sub-operation or a second sub-operation. When the first sub-input or the second sub-input is executed, the first sub-input or the second sub-input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the first sub-input is an input clicking on the first identifier, and the second sub-input is an input clicking on a first side of the first virtual sub-object. For example, the user clicks on the first identifier before clicking on the first side of the first virtual sub-object.
Step 1012, receiving a first input that the user drags the first identifier to the first surface of the first virtual sub-object.
Illustratively, the user points a finger to an area on the virtual screen where the first identifier is located, and drags the first identifier to the first surface of the first virtual child object.
Optionally, the user pointing a finger at the area of the first identifier on the virtual screen may include, but is not limited to, the user placing the finger on the area of the first identifier on the virtual screen, or the user pointing the finger toward the area of the first identifier on the virtual screen; that is, the finger is not in the area of the first identifier but points toward it from a certain distance.
Optionally, dragging the first identifier to the first side of the first virtual sub-object may include, but is not limited to, the user dragging the first identifier onto the first side of the first virtual sub-object on the virtual screen, or the user dragging the first identifier to an area corresponding to the first side of the first virtual sub-object on the virtual screen; that is, when the first identifier is dragged to the area corresponding to the first side of the first virtual sub-object, the projection of the first identifier onto the plane where the first virtual object is located falls within the first side of the first virtual sub-object. For example, the first identifier is dragged to the area directly in front of the first side of the first virtual sub-object, where directly in front refers to the direction closer to the user.
Optionally, step 101 specifically includes:
step 1013, receiving a third sub-input of the first identifier by the user, where the third sub-input is used to control the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first surface of the first virtual sub-object is located, where the third sub-input includes a third gesture.
Optionally, the third sub-input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The third sub-input may also be a third sub-operation. When the third sub-input is executed, the third sub-input may be a single-point input, such as a sliding input, a click input, or the like performed by using a single finger; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the third gesture is that the user's hand points to the area where the first identifier is located and makes a gesture of picking up the first identifier. The first identifier then moves along with the user's hand: wherever the hand moves, the first identifier follows, and when the hand moves to the first surface of the first virtual sub-object, the first identifier moves to the first surface of the first virtual sub-object.
Optionally, the movement of the user's hand to the first side of the first virtual sub-object may include, but is not limited to, the user's hand moving onto the first side of the first virtual sub-object on the virtual screen, or the user's hand moving to an area corresponding to the first side of the first virtual sub-object on the virtual screen; that is, when the user's hand moves to the area corresponding to the first side of the first virtual sub-object, the projection of the user's hand onto the plane where the virtual object is located falls within the first side of the first virtual sub-object. For example, the user's hand moves to the area directly in front of the first side of the first virtual sub-object, where directly in front refers to the direction closer to the user.
Optionally, the movement of the first identifier to the first side of the first virtual sub-object may include, but is not limited to, the first identifier moving onto the first side of the first virtual sub-object on the virtual screen, or the first identifier moving to an area corresponding to the first side of the first virtual sub-object on the virtual screen; that is, when the first identifier moves to the area corresponding to the first side of the first virtual sub-object, the projection of the first identifier onto the plane where the virtual object is located falls within the first side of the first virtual sub-object. For example, the first identifier moves to the area directly in front of the first side of the first virtual sub-object, where directly in front refers to the direction closer to the user.
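The "projection falls within the face" test above is a small geometric computation. The sketch below is a hedged illustration under simplifying assumptions: the first virtual object lies in a plane through point p0 with unit normal n, each face is an axis-aligned rectangle in that plane, and all names are invented for this example.

```kotlin
data class Vec3(val x: Double, val y: Double, val z: Double) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    fun dot(o: Vec3) = x * o.x + y * o.y + z * o.z
    fun scale(s: Double) = Vec3(x * s, y * s, z * s)
}

// Orthogonal projection of `point` onto the plane (p0, unit normal n).
fun projectOntoPlane(point: Vec3, p0: Vec3, n: Vec3): Vec3 {
    val d = (point - p0).dot(n)       // signed distance to the plane
    return point - n.scale(d)
}

// Face bounds expressed in the plane's 2D coordinates (u, v).
data class FaceRect(val uMin: Double, val uMax: Double, val vMin: Double, val vMax: Double)

// u, v: orthonormal in-plane axes. True if the projected identifier or hand
// position lands inside the face rectangle.
fun projectionInsideFace(point: Vec3, p0: Vec3, n: Vec3, u: Vec3, v: Vec3, face: FaceRect): Boolean {
    val q = projectOntoPlane(point, p0, n)
    val qu = (q - p0).dot(u)
    val qv = (q - p0).dot(v)
    return qu in face.uMin..face.uMax && qv in face.vMin..face.vMax
}
```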
In step 102, sending the first information to the first contact associated with the first surface of the first virtual sub-object specifically includes:
step 1021, under the condition that a first preset condition is met, sending first information to a first contact person associated with a first surface of the first virtual sub-object;
wherein meeting the first preset condition comprises: the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time, or a second input of the user is received, wherein the second input comprises a fourth gesture.
Optionally, the second input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The second input may also be a second operation. When the second input is executed, the second input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the fourth gesture is the user throwing the first identifier toward the first side of the first virtual sub-object, or the user placing the first identifier on the first side of the first virtual sub-object, or the user extending a finger, etc.
Optionally, the user placing the first identifier on the first side of the first virtual sub-object may include, but is not limited to: the user placing the first identifier on the first side of the first virtual sub-object on the virtual screen; the user placing the first identifier in the first virtual sub-object on the virtual screen when the first side of the first virtual sub-object faces the user; or the user placing the first identifier in an area corresponding to the first side of the first virtual sub-object on the virtual screen, that is, when the first identifier is placed in the area corresponding to the first side, the projection of the first identifier onto the plane where the virtual object is located falls within the first side of the first virtual sub-object. Placing means that the user releases the first identifier. For example, the first identifier is placed in the area directly in front of the first side of the first virtual sub-object, where directly in front refers to the direction closer to the user.
Illustratively, the user's hand points to the area where the first identifier is located and makes a gesture of picking up the first identifier; the first identifier then moves along with the user's hand. The hand moves to the area directly in front of the first surface of the first virtual sub-object, and the first identifier moves there with it. When the hand stays in that area for a first preset time, for example 3 seconds, the first information is sent to the first contact associated with the first surface of the first virtual sub-object.
Illustratively, as shown in fig. 3(a), the user places the hand in the area of the first identifier 2021, makes a gesture of grabbing the first identifier, and takes the first identifier out of the second virtual object. The user then moves the hand holding the first identifier to the first face 20111 of the first virtual sub-object 2011, with the first identifier following the hand's movement. As shown in fig. 3(b), with the first face of the first virtual sub-object facing the user, the user places the first identifier 2021 in the first virtual sub-object 2011 and releases it, and the first information is sent to the first contact.
In the embodiment of the invention, the user can send the first information to the first contact through a few simple gestures, and the operation is simple and quick.
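The first preset condition above (hand dwells over the face for a preset time, or a release gesture occurs) can be sketched as a small per-frame trigger. This is a minimal illustration; DwellTrigger and the 3-second default are assumptions echoing the example, not values fixed by the patent.

```kotlin
class DwellTrigger(private val dwellMillis: Long = 3_000) {
    private var enteredAt: Long? = null

    // Call once per tracking frame with the latest hand state.
    // Returns true when the first information should be sent.
    fun update(handOverFace: Boolean, releaseGesture: Boolean, nowMillis: Long): Boolean {
        if (!handOverFace) { enteredAt = null; return false }
        if (releaseGesture) return true                  // fourth gesture: let go / throw
        val t0 = enteredAt ?: nowMillis.also { enteredAt = it }
        return nowMillis - t0 >= dwellMillis             // stayed long enough over the face
    }
}
```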
Optionally, after step 102, the method further includes:
step 103, establishing a call connection with the first contact person, and displaying the information content of the first information in a first space area on a virtual screen.
Optionally, the plane of the first space region may be the same plane as the plane of the first surface of the first virtual sub-object, or may be a different plane from the plane of the first surface of the first virtual sub-object, for example, the plane of the first space region is parallel to the plane of the first surface of the first virtual sub-object, and the plane of the first space region is located right in front of the plane of the first surface of the first virtual sub-object, where the right front refers to a direction closer to the user.
Illustratively, the user takes the first identifier out of the second virtual object; with the first side of the first virtual sub-object facing the user, the user places the first identifier in the first virtual sub-object and releases it. The first information is then sent to the first contact, a call connection is established with the first contact, and the information content of the first information is displayed in the first space area on the virtual screen.
Optionally, after the first contact agrees to receive the first information, a call connection is established with the first contact, and information content of the first information is displayed in a first space region on the virtual screen.
Optionally, in step 103, displaying the information content of the first information in a first space region on a virtual screen, specifically including:
step 1031, displaying the information content of the first information in the first space region, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and the gesture information of the hand of the user on the information content display interface of the first information.
Optionally, the head-mounted device is a first head-mounted device. The camera captures an image of the user's hand and obtains motion information of the hand, and a virtual identifier representing the motion information of the user's hand is displayed on the display interface of the first information. As shown in fig. 4, the virtual identifier 401 is a model of the user's hand; the display state of the virtual identifier 401 is updated in real time according to changes in the hand motion or the user's position. The first information and the display picture of the virtual identifier are synchronized to the virtual screen of the second head-mounted device of the first contact, and the synchronized picture displayed there is updated in real time as the information content of the first information or the display state of the virtual identifier changes. That is, the first contact can see the user's hand motion and the displayed content of the first information, and the user can explain and demonstrate the content of the first information to the first contact by combining voice and hand motion.
Optionally, the position and shape of the virtual identifier are updated in real time according to the change of the hand motion or the position of the user.
In the embodiment of the invention, after the first information is sent to the first contact person, the information content of the first information and the virtual identifier indicating the hand operation position and the gesture information of the user are displayed on the virtual screen, and the display pictures of the first information and the virtual identifier are synchronized to the virtual screen of the second head-mounted device of the first contact person, so that the user can explain and demonstrate the content of the first information to the first contact person by combining voice and hand operation.
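As a hedged illustration of this synchronization loop, the sketch below assumes a simple frame format pushed to the peer device on every local change. SyncFrame, HandPose, and broadcast are invented names for this example, not the patent's protocol.

```kotlin
data class HandPose(val position: Triple<Double, Double, Double>, val gesture: String)

// One update: a content version plus the sender's current hand model state.
data class SyncFrame(val contentVersion: Long, val hand: HandPose)

class SharedSession(private val broadcast: (SyncFrame) -> Unit) {
    private var version = 0L

    // Called whenever the hand moves or the displayed first-information
    // content changes; the peer device re-renders from the latest frame.
    fun onLocalUpdate(hand: HandPose, contentChanged: Boolean) {
        if (contentChanged) version++
        broadcast(SyncFrame(version, hand))
    }
}
```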
Optionally, the method further comprises:
and 104, receiving a third input of the user under the condition of receiving the second information sent by the first contact.
Optionally, the third input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The third input may also be a third operation. When the third input is executed, the third input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the third input may be a gesture, illustratively a finger of a user pointing at the second information.
Step 105, in response to the third input, establishing a call connection with the first contact, and displaying the information content of the second information in a second space area.
Optionally, the information content of the second information is displayed in the second space area, and a target virtual identifier is displayed, wherein the target virtual identifier is used for indicating the operation position and gesture information of the first contact's hand on the display interface of the second information.
Optionally, the plane of the second spatial region may be the same plane as the plane of the first face of the first virtual sub-object, or may be a different plane from the plane of the first face of the first virtual sub-object, for example, the plane of the second spatial region is parallel to the plane of the first face of the first virtual sub-object, and the plane of the second spatial region is located directly in front of the plane of the first face of the first virtual sub-object, where the directly front refers to a direction closer to the user.
Illustratively, the user receives the second information sent by the first contact, receives a third input from the user, such as clicking the second information, establishes a call connection with the first contact, and displays the information content of the second information in the second spatial region.
Optionally, the method further comprises:
and 106, receiving fourth input of the target surface of the T second virtual sub-objects from the user.
Optionally, the fourth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fourth input may also be a fourth operation. When the fourth input is executed, the fourth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the user makes a gesture of grabbing the first identifier, and the first identifier follows the user's hand movement. As shown in fig. 5, with the target surfaces of the T second virtual sub-objects facing the user, the user places the first identifier 2021 in the first of the T second virtual sub-objects 2012 but does not release it; a copy 20211 of the first identifier remains in the second virtual sub-object 2012. Still holding the grabbing gesture, the user takes the first identifier out of the second virtual sub-object 2012 and places it in the second of the T second virtual sub-objects, again without releasing it, leaving another copy of the first identifier there. The user continues the same operation on the remaining second virtual sub-objects. After the first identifier has been placed in the T-th of the T second virtual sub-objects, the user opens the hand to release the first identifier, and the first information is sent to the T second contacts, which are associated with the target surfaces of the T second virtual sub-objects.
Optionally, after the first identifier is placed in the T-th second virtual sub-object of the T second virtual sub-objects and the first identifier in the hand is released, the first identifier is still displayed on the second virtual object.
Illustratively, the user clicks on the first identifier and clicks on the target surfaces of the T second virtual sub-objects.
Illustratively, the user's finger points to the first marker and to the target surface of the T second virtual sub-objects.
Illustratively, the user drags the first identifier to the target surfaces of the T second virtual sub-objects, respectively.
Illustratively, the user clicks on the first identifier, and a gesture is made to pull out the T second virtual sub-objects with the target surfaces of the T second virtual sub-objects facing the user.
In step 102, sending the first information to the first contact associated with the first surface of the first virtual sub-object specifically includes:
step 1022, sending the first information to the first contact and T second contacts associated with the first surface of the first virtual sub-object, where the T second contacts are associated with target surfaces of the T second virtual sub-objects, and T is a positive integer.
Illustratively, the user's finger points at the first identifier and points at the first side of the first virtual sub-object and the target sides of the T second virtual sub-objects, and then the first information is sent to the first contact and the T second contacts.
Optionally, after sending the first information to the first contact and the T second contacts associated with the first side of the first virtual sub-object, the first identifier is still displayed on the second virtual object.
Optionally, after the first information is sent to the first contact and the T second contacts associated with the first side of the first virtual sub-object, a multi-party call connection is established with the first contact and the T second contacts.
Optionally, after sending the first information to the first contact and the T second contacts associated with the first side of the first virtual sub-object, displaying information content of the first information.
In the embodiment of the invention, the user can send the first information to a plurality of contacts through a few simple gestures and establish a multi-party call connection with them, so the operation is simple and rapid.
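A sketch of this one-gesture multi-recipient flow follows, reusing the hypothetical Contact and sendTo placeholders from the earlier data-model sketch; startConference is likewise an invented stand-in for whatever call-setup mechanism the device actually uses.

```kotlin
// Send first information to the first contact plus T second contacts,
// then open a multi-party call with all of them.
fun shareWithGroup(first: Contact, seconds: List<Contact>, payload: ByteArray) {
    val recipients = listOf(first) + seconds   // 1 + T recipients
    recipients.forEach { sendTo(it, payload) } // one copy per target face
    startConference(recipients)                // multi-party call connection
}

fun startConference(participants: List<Contact>) {
    println("conference with ${participants.joinToString { it.name }}") // placeholder
}
```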
Optionally, the first side of the first virtual sub-object is oriented in the same direction as the virtual screen.
Specifically, the virtual screen faces the user, and the first side of the first virtual sub-object faces the user.
The method further comprises the following steps:
and step 107, receiving a fifth input of the user to the first virtual sub-object.
Optionally, the fifth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fifth input may also be a fifth operation. When the fifth input is executed, the fifth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step 108, in response to the fifth input, rotating the S virtual sub-objects such that second faces of the S virtual sub-objects face the user;
wherein S is a positive integer and is less than or equal to N.
Illustratively, as shown in fig. 6(a), each virtual sub-object has six faces, and the user can rotate any virtual sub-object up, down, left, or right through gestures. When a fifth input to the first virtual sub-object is received from the user, such as a gesture of rotating the first virtual sub-object to the left, the first virtual sub-object rotates to the left so that its second face 20112 faces the user.
Optionally, in response to the fifth input, all virtual sub-objects are rotated with their second faces facing the user.
Optionally, in response to the fifth input, the virtual sub-objects in the same row as the first virtual sub-object are both rotated with their second faces facing the user.
Optionally, in response to the fifth input, the virtual sub-objects in the same column as the first virtual sub-object are both rotated with their second faces facing the user.
Optionally, different faces of a virtual sub-object represent different categories of contacts. Illustratively, as shown in fig. 6(b), the category of the contacts corresponding to the first face 20131 of the virtual sub-object 2013 is colleagues, the category corresponding to the second face 20132 is friends, and the category corresponding to the third face 20133 is family.
Optionally, in response to the fifth input, all of the virtual sub-objects are rotated with the second sides of all of the virtual sub-objects facing the user, the contacts corresponding to the second sides of all of the virtual sub-objects being of the same type.
In the embodiment of the invention, different surfaces of the virtual sub-object represent different types of contacts, and a user can rotate the virtual sub-object through some simple gestures, so that the different surfaces of the virtual sub-object face the user, and the user can see different types of contact information.
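A minimal sketch of the category-per-face idea and the rotation behavior, assuming the colleague/friend/family categories from fig. 6(b); CubeSubObject and the other names are invented for this illustration.

```kotlin
enum class Category { COLLEAGUE, FRIEND, FAMILY }

// Each face of a sub-object lists the contacts of one category; a rotate
// gesture turns the cube so one category's face points at the user.
class CubeSubObject(private val faceByCategory: Map<Category, List<String>>) {
    var facing: Category = Category.COLLEAGUE
        private set

    fun rotateTo(category: Category) { facing = category }   // second face toward user
    fun visibleContacts(): List<String> = faceByCategory[facing].orEmpty()
}

// Rotating all sub-objects (or one row/column) shows the same category everywhere.
fun rotateAll(subObjects: List<CubeSubObject>, category: Category) =
    subObjects.forEach { it.rotateTo(category) }
```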
Optionally, the second virtual object comprises at least one face, and the identifiers displayed on different faces indicate different types of information.
Optionally, the second virtual object may be rotated so that a different face of the second virtual object faces the user, allowing the user to see different types of information. Illustratively, as shown in fig. 6(a), the second virtual object 202 is rotated downward, another face of the second virtual object faces the user, and new identifiers can be seen.
Optionally, the head mounted device comprises a camera.
Before step 101, the method further comprises:
and step 1002, acquiring an image acquired by a camera.
The camera captures images of the real environment within the user's field of view.
Step 1003, in a case that the image includes the target real object, displaying a virtual object in a first area of a virtual screen, where the first area is an area corresponding to an area where the target real object is located, and the virtual object includes the first virtual object and the second virtual object.
Optionally, in a case that the image acquired by the camera in real time does not include the target real object, that is, the user's sight line is away from the target real object, the display of the virtual object is cancelled on the virtual screen, and when the user's sight line returns to the target real object again, the virtual object is displayed on the virtual screen.
Optionally, the first area is the same as the area where the target object is located, or the first area is a part of the area where the target object is located, or the first area includes the area where the target object is located, or the first area is adjacent to the area where the target object is located, for example, the first area is located in front of, above, or the like the area where the target object is located.
Optionally, the case in which the image includes the target real object comprises: the target real object appears in the image, and the environment around the target real object is the target environment. For example, the target real object is a sofa, and the target environment has a tea table 0.5 m in front of the sofa, a television 1 m in front of the tea table, and a water dispenser 0.3 m to the left of the sofa.
When the image of the real environment captured by the camera includes the target real object, the virtual object is displayed in the first area of the virtual screen. Illustratively, if the image of the real environment captured by the camera includes a table, the virtual object is displayed in the first area of the virtual screen, where the first area is located on the upper surface of the table or directly above it.
In the embodiment of the invention, by acquiring the image captured by the camera and displaying the virtual object in the first area of the virtual screen when the image includes the target real object, the virtual object can be displayed whenever the user's view returns to the target area.
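The show/hide behavior of steps 1002/1003 reduces to a simple per-frame loop, sketched below. The detectTarget recognizer is an assumed stand-in for whatever computer-vision pipeline the device actually uses; nothing here is the patent's implementation.

```kotlin
// Show the virtual object while the camera frame contains the target real
// object; hide it when the user's sight line leaves the target.
class AnchoredDisplay(private val detectTarget: (frame: ByteArray) -> Boolean) {
    var visible = false
        private set

    fun onCameraFrame(frame: ByteArray) {
        visible = detectTarget(frame) // target in view -> display;
                                      // target out of view -> cancel display
    }
}
```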
Optionally, the second area of the virtual screen includes a second identifier.
Optionally, the second identifier is used to indicate the virtual object.
Before step 1002, the method further includes:
step 1004, receiving a sixth input of the user to the second identifier and the third spatial region.
Optionally, the sixth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The sixth input may also be a sixth operation. When the sixth input is executed, the sixth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step 1005, responding to the sixth input, and displaying a virtual object in a third area corresponding to the third space area on the virtual screen, where the third space area is an area where the target real object is located.
Optionally, the third area is the same as the third spatial area, or the third area is a part of the third spatial area, or the third area includes the third spatial area, or the third area is adjacent to the third spatial area, for example, the third area is located in front of, above, or the like the third spatial area.
For example, as shown in fig. 7(a), the second identifier 701 is located in the second area 702 of the virtual screen. A sixth input of the user to the second identifier 701 and the third space area 703 is received, for example dragging the second identifier to the third space area, where the target real object is a wall. As shown in fig. 7(b), the virtual object 704 is displayed in the third area, which is a part of the third space area 703, and the user may continue to resize the virtual object with a finger.
The head-mounted device stores information about the virtual object whose area has been set and whose size has been adjusted. For example, it stores the spatial coordinates of the virtual object together with information about the surrounding environment: the virtual object is on a wall, the right half of the wall contains a door, and the left side of the wall is perpendicular to another wall containing a window. Image information of the environment around the target real object may also be stored. When the user's view returns to the area where the target real object is located, the virtual object is displayed. Further illustratively, when the user's view falls within the third space area, the camera captures images of the real environment and compares them with the previously stored images of the environment around the target real object; if the target real object and the position information and image information of the surrounding environment all match, the virtual object is displayed in the third area.
Optionally, the second identifier is always displayed on the virtual screen of the head-mounted device, that is, the user can see the second identifier at any time. The user may drag the second identifier to any one or more spatial regions; the head-mounted device records the spatial coordinates of each placed virtual object, so the user can see the virtual object in each of those spatial regions.
In the embodiment of the present invention, by receiving a sixth input of the user to the second identifier and the third space area and, in response to the sixth input, displaying the virtual object in the third area of the virtual screen corresponding to the third space area, the user can place the virtual object in multiple spatial regions with a simple gesture and subsequently see the virtual object in each of them.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen, and, in response to the first input, sends first information to a first contact associated with the first face of the first virtual sub-object. The first virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. In this way, rapid information sharing can be realized with simple and convenient operation.
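The core association between faces and contacts can be illustrated with the following minimal sketch (Python; all class and function names are hypothetical, and the `send` stub stands in for whatever transport the device actually uses):

```python
# Purely illustrative sketch (not the disclosed implementation): a first
# virtual object made of N virtual sub-objects, each face of a sub-object
# associated with a different contact; a first input on a face sends the
# first information to that face's contact.
from dataclasses import dataclass, field

@dataclass
class VirtualSubObject:
    face_contacts: dict  # face name -> contact, e.g. {"front": "Alice"}

@dataclass
class VirtualObject:
    sub_objects: list = field(default_factory=list)  # the N virtual sub-objects

def send(contact: str, info: str) -> None:
    print(f"sending {info!r} to {contact}")  # stand-in for the real channel

def on_first_input(obj: VirtualObject, sub_index: int, face: str, info: str) -> None:
    """Handle a first input on one face of one virtual sub-object."""
    contact = obj.sub_objects[sub_index].face_contacts[face]
    send(contact, info)

cube = VirtualObject([VirtualSubObject({"front": "Alice", "top": "Bob"})])
on_first_input(cube, sub_index=0, face="front", info="photo.jpg")  # -> Alice
```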
As shown in fig. 8, an embodiment of the present invention provides a head-mounted device 800, where the head-mounted device 800 includes:
a first receiving module 801, configured to receive a first input by a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen; a first sending module 802, configured to send, in response to the first input, first information to a first contact associated with a first side of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
Optionally, a second virtual object is included on the virtual screen; the head-mounted device further comprises: a first display module for displaying M identifiers on the second virtual object, each identifier indicating different information; the M identifiers comprise a first identifier, the first identifier indicates the first information, and M is a positive integer.
Optionally, the second virtual object comprises at least one face, and the identifiers displayed on different faces are used to indicate different types of information.
Optionally, the first input is used to display the first identifier in the area where the first face of the first virtual sub-object is located.
Optionally, the first receiving module is specifically configured to: receiving a first sub-input of the first identifier and a second sub-input of the first surface of the first virtual sub-object from a user, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first surface of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture; or, receiving a first input that a user drags the first identifier to the first surface of the first virtual sub-object.
Optionally, the first receiving module is specifically configured to: receive a third sub-input of the first identifier from the user, where the third sub-input is used to control the first identifier to follow the user's hand to the position where the hand is located, the position where the hand is located being the area where the first face of the first virtual sub-object is located, and the third sub-input includes a third gesture. The first sending module includes: a first sending unit, configured to send the first information to the first contact associated with the first face of the first virtual sub-object in the case that a first preset condition is met; where meeting the first preset condition includes: the user's hand staying in the area where the first face of the first virtual sub-object is located for a first preset duration, or receiving a second input of the user, the second input including a fourth gesture.
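A minimal sketch of the dwell-time variant of the first preset condition might look as follows (the preset duration, class, and helper names are editorial assumptions):

```python
# Purely illustrative sketch: checking whether the user's hand has stayed in
# the region of the first face for a first preset duration.
import time
from typing import Optional

FIRST_PRESET_DURATION = 1.5  # seconds (assumed value)

class DwellDetector:
    def __init__(self) -> None:
        self.entered_at: Optional[float] = None

    def update(self, hand_in_face_region: bool, now: Optional[float] = None) -> bool:
        """Return True once the hand has dwelled long enough to trigger sending."""
        now = time.monotonic() if now is None else now
        if not hand_in_face_region:
            self.entered_at = None   # hand left the region: reset the timer
            return False
        if self.entered_at is None:
            self.entered_at = now    # hand just entered the region
        return now - self.entered_at >= FIRST_PRESET_DURATION

detector = DwellDetector()
print(detector.update(True, now=0.0))  # False: just entered the face region
print(detector.update(True, now=2.0))  # True: dwelled past the preset duration
```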
Optionally, the head-mounted device further comprises: the first communication module is used for establishing communication connection with the first contact person; and the second display module is used for displaying the information content of the first information in the first space area on the virtual screen.
Optionally, the second display module is specifically configured to: displaying the information content of the first information in a first space area, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
Optionally, the head-mounted device further comprises: the second receiving module is used for receiving third input of the user under the condition of receiving second information sent by the first contact person; the second communication module is used for responding to the third input and establishing communication connection with the first contact person; and the third display module is used for displaying the information content of the second information in a second space area.
Optionally, the head-mounted device further comprises: the third receiving module is used for receiving fourth input of the user to the target surfaces of the T second virtual sub-objects; the first sending module specifically includes: and the second sending unit is used for sending the first information to a first contact and T second contacts which are associated with the first surface of the first virtual sub-object, wherein the T second contacts are associated with target surfaces of the T second virtual sub-objects, and T is a positive integer.
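A minimal sketch of this one-to-many sending, reusing a hypothetical `send` stub as before:

```python
# Purely illustrative sketch: fanning the first information out to the first
# contact plus the T second contacts selected by the fourth input.
def send(contact: str, info: str) -> None:
    print(f"sending {info!r} to {contact}")  # stand-in for the real channel

def send_to_all(first_contact: str, second_contacts: list, info: str) -> None:
    """Send the first information to 1 + T associated contacts."""
    for contact in [first_contact, *second_contacts]:
        send(contact, info)

send_to_all("Alice", ["Bob", "Carol"], "meeting.ics")  # here T = 2
```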
Optionally, the first side of the first virtual sub-object is oriented in the same direction as the virtual screen; the head-mounted device further comprises: a fourth receiving module, configured to receive a fifth input to the first virtual sub-object from the user; a rotation module for rotating, in response to the fifth input, the S virtual sub-objects such that second faces of the S virtual sub-objects face the user; wherein S is a positive integer and is less than or equal to N.
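As a purely illustrative sketch, the rotation can be modeled by treating each sub-object's faces as an ordered ring that the fifth input advances (the structure and names are assumptions):

```python
# Purely illustrative sketch: the fifth input rotates S sub-objects so that
# their second face, rather than the first, faces the user.
from dataclasses import dataclass

@dataclass
class SubObject:
    faces: list           # contacts in face order, e.g. ["Alice", "Bob"]
    front_index: int = 0  # which face currently faces the user

    def rotate(self, steps: int = 1) -> None:
        self.front_index = (self.front_index + steps) % len(self.faces)

    @property
    def facing(self) -> str:
        return self.faces[self.front_index]

def on_fifth_input(sub_objects: list, s: int) -> None:
    """Rotate the first S sub-objects to expose their second face."""
    for sub in sub_objects[:s]:
        sub.rotate(1)

row = [SubObject(["Alice", "Bob"]), SubObject(["Carol", "Dave"])]
on_fifth_input(row, s=2)
print([sub.facing for sub in row])  # ['Bob', 'Dave']
```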
Optionally, the head mounted device comprises a camera; the head-mounted device further comprises: the acquisition module is used for acquiring images acquired by the camera; and the fourth display module is used for displaying a first virtual object in a first area of a virtual screen under the condition that the image comprises the target object, wherein the first area is an area corresponding to the area where the target object is located.
Optionally, the second area of the virtual screen comprises a second identifier; the head-mounted device further comprises: a fifth receiving module, configured to receive a sixth input of the second identifier and the third spatial region from the user; and the fifth display module is used for responding to the sixth input, displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where the target real object is located.
Optionally, the N virtual sub-objects are separated by a separation identifier.
The head-mounted device provided by the embodiment of the present invention can implement each process implemented by the head-mounted device in the above method embodiments, and is not described herein again to avoid repetition.
In an embodiment of the present invention, a head-mounted device receives a first input from a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen, and, in response to the first input, sends first information to a first contact associated with the first face of the first virtual sub-object. The first virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. In this way, rapid information sharing can be realized with simple and convenient operation.
Fig. 9 is a schematic diagram of a hardware structure of a head-mounted device for implementing various embodiments of the present invention, and as shown in fig. 9, the head-mounted device 900 includes but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. Those skilled in the art will appreciate that the configuration of the head-mounted device shown in fig. 9 does not constitute a limitation of the head-mounted device, and that the head-mounted device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In embodiments of the present invention, the head-mounted device includes, but is not limited to, VR glasses, AR glasses, MR glasses, or VR helmets, AR helmets, MR helmets, and the like.
The user input unit 907 is configured to receive a first input from a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen; a processor 910, configured to send, in response to the first input, first information to a first contact associated with a first side of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
The embodiment of the present invention provides a head-mounted device, which can receive a first input of a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen and, in response to the first input, send first information to a first contact associated with the first face of the first virtual sub-object. The first virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. In this way, rapid information sharing can be realized with simple and convenient operation.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, downlink data received from a base station is passed to the processor 910 for processing, and uplink data is transmitted to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
The head-mounted device provides wireless broadband internet access to the user through the network module 902, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may also provide audio output related to a specific function performed by the headset 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042; the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 901.
The head-mounted device 900 also includes at least one sensor 905, such as a gesture sensor, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 9061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 9061 and/or the backlight when the head-mounted device 900 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used to recognize the attitude of the head-mounted device (for landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as pedometer and tap detection). The sensor 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The display unit 906 may include a display panel 9061, which may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The display unit 906 may also include a hologram device and a projector (not shown in the drawings); the hologram device may form a three-dimensional (3D) image (hologram) in the air by using light interference, and the projector may display an image by projecting light onto a screen. The screen may be located inside or outside the head-mounted device.
The user input unit 907 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the head-mounted device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations by a user on or near it (for example, operations performed on or near the touch panel 9071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 910, and receives and executes commands from the processor 910. In addition, the touch panel 9071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. Besides the touch panel 9071, the user input unit 907 may include other input devices 9072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 9071 may be overlaid on the display panel 9061. When the touch panel 9071 detects a touch operation on or near it, the operation is transmitted to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the head-mounted device, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the head-mounted device, which is not limited herein.
The interface unit 908 is an interface for connecting an external device to the head-mounted apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the headset 900 or may be used to transmit data between the headset 900 and an external device.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the head-mounted device (such as audio data and a phonebook). Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 910 is the control center of the head-mounted device; it connects the various parts of the whole head-mounted device by using various interfaces and lines, and performs the various functions of the head-mounted device and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby monitoring the head-mounted device as a whole. The processor 910 may include one or more processing units; optionally, the processor 910 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may alternatively not be integrated into the processor 910. In the embodiment of the present invention, the processor 910 may detect a gesture of the user and determine the control command corresponding to that gesture.
The head-mounted device 900 may also include a power supply 911 (e.g., a battery) for powering the various components, and optionally, the power supply 911 may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the head-mounted device 900 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a head-mounted device, including a processor 910, a memory 909, and a computer program that is stored in the memory 909 and can be run on the processor 910, and when the computer program is executed by the processor 910, the processes of the information sharing method embodiment are implemented, and the same technical effect can be achieved, and details are not described here to avoid repetition.
Optionally, in this embodiment of the present invention, the head-mounted device in the above embodiment may be an AR device. Specifically, when the head-mounted device in the above embodiment is an AR device, the AR device may include all or part of the functional modules in the head-mounted device. Of course, the AR device may also include functional modules not included in the head mounted device described above.
It is to be understood that, in the embodiment of the present invention, when the head-mounted device in the above-described embodiment is an AR device, the head-mounted device may be a head-mounted device integrated with AR technology. AR technology combines a real scene with a virtual scene. By adopting AR technology, human visual perception can be reproduced, so that the user experiences the combination of a real scene and a virtual scene and thereby gains a more immersive, on-the-scene experience.
Taking the AR device being AR glasses as an example, when the user wears the AR glasses, the scene the user views is generated by AR processing, that is, a virtual scene can be overlaid on the real scene through AR technology. When the user operates the content displayed by the AR glasses, the AR glasses can appear to peel back the real scene, thereby revealing more of its underlying detail to the user. For example, when a user looks at a carton with the naked eye, only the outside of the carton can be observed, but when the user wears AR glasses, the internal structure of the carton can be observed directly through the glasses.
The AR device may include a camera, so that the AR device can display and interact with virtual pictures overlaid on the pictures captured by the camera. For example, in the embodiment of the present invention, the AR device may synchronize the virtual screen information generated when the user uses the AR device for an entertainment activity to the display screens of other AR devices, so that virtual screen sharing can be implemented between AR devices.
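As a purely illustrative sketch of such virtual screen sharing (the payload format and `broadcast` transport are editorial assumptions, not part of the disclosure):

```python
# Purely illustrative sketch: packaging virtual screen state so one AR device
# can share it with others.
import json

def snapshot_virtual_screen(objects: list) -> str:
    """Pack the currently displayed virtual objects into a shareable payload."""
    return json.dumps({"type": "virtual_screen_sync", "objects": objects})

def broadcast(payload: str, peers: list) -> None:
    for peer in peers:
        print(f"-> {peer}: {payload}")  # stand-in for a real network send

state = [{"id": "note", "pos": [1.0, 2.0, 0.5], "face": "front"}]
broadcast(snapshot_virtual_screen(state), ["glasses-B", "glasses-C"])
```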
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the information sharing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a head-mounted device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (30)

1. An information sharing method is applied to a head-mounted device, and is characterized by comprising the following steps:
receiving a first input of a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen;
in response to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object;
the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
2. The method of claim 1, wherein the virtual screen includes a second virtual object thereon;
before the receiving of the first input of the user to the first side of the first virtual sub-object of the first virtual object displayed on the virtual screen, the method further includes:
displaying M identifiers on the second virtual object, wherein each identifier indicates different information;
the M identifiers comprise a first identifier, the first identifier indicates the first information, and M is a positive integer.
3. The method of claim 2, wherein the second virtual object comprises at least one surface, and wherein the identifiers displayed by different surfaces are used to indicate different types of information.
4. The method of claim 2, wherein the first input is configured to display the first identifier in the area where the first side of the first virtual sub-object is located.
5. The method of claim 4, wherein receiving a first input from a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen comprises:
receiving a first sub-input of the first identifier and a second sub-input of the first surface of the first virtual sub-object from a user, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first surface of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture;
or, receiving a first input that a user drags the first identifier to the first surface of the first virtual sub-object.
6. The method of claim 4, wherein receiving a first input from a user to a first side of a first virtual sub-object of a first virtual object displayed on a virtual screen comprises:
receiving a third sub-input of the first identifier by a user, wherein the third sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, the position where the hand is located is an area where a first surface of the first virtual sub-object is located, and the third sub-input comprises a third gesture;
the sending the first information to the first contact associated with the first side of the first virtual sub-object includes:
under the condition that a first preset condition is met, first information is sent to a first contact person associated with a first face of the first virtual sub-object;
wherein, the meeting of the first preset condition comprises: the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time, or receives a second input of the user, wherein the second input comprises a fourth gesture.
7. The method of claim 1, wherein after sending the first information to the first contact associated with the first side of the first virtual sub-object, further comprising:
and establishing communication connection with the first contact person, and displaying the information content of the first information in a first space area on a virtual screen.
8. The method of claim 7, wherein the displaying the information content of the first information in the first spatial region on the virtual screen comprises:
displaying the information content of the first information in a first space area, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
9. The method of claim 1, further comprising:
receiving a third input of the user under the condition of receiving second information sent by the first contact;
and responding to the third input, establishing a call connection with the first contact person, and displaying the information content of the second information in a second space area.
10. The method of claim 1, further comprising:
receiving fourth input of the user on target surfaces of the T second virtual sub-objects;
the sending the first information to the first contact associated with the first side of the first virtual sub-object includes:
and sending first information to a first contact and T second contacts which are associated with the first surface of the first virtual sub-object, wherein the T second contacts are associated with target surfaces of the T second virtual sub-objects, and T is a positive integer.
11. The method of claim 1, wherein the first side of the first virtual sub-object is oriented in the same direction as the virtual screen;
the method further comprises the following steps:
receiving a fifth input of the first virtual sub-object from the user;
in response to the fifth input, rotating S virtual sub-objects such that second faces of the S virtual sub-objects face the user;
wherein S is a positive integer and is less than or equal to N.
12. The method of claim 2, wherein the head-mounted device comprises a camera;
before the receiving of the first input of the user to the first side of the first virtual sub-object of the first virtual object displayed on the virtual screen, the method further includes:
acquiring an image acquired by a camera;
and under the condition that the image comprises the target real object, displaying a virtual object in a first area of a virtual screen, wherein the first area is an area corresponding to the area where the target real object is located, and the virtual object comprises the first virtual object and the second virtual object.
13. The method of claim 12, wherein the second area of the virtual screen includes a second identifier;
before the acquiring of the image captured by the camera, the method further includes:
receiving a sixth input of the user to the second identifier and the third spatial region;
responding to the sixth input, and displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where the target real object is located.
14. The method of claim 1, wherein the N virtual sub-objects are separated by a separation identifier.
15. A head-mounted device, comprising:
the first receiving module is used for receiving a first input of a user to a first surface of a first virtual sub-object of a first virtual object displayed on a virtual screen;
a first sending module, configured to send, in response to the first input, first information to a first contact associated with a first side of the first virtual sub-object;
the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-objects comprise at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
16. The head-mounted device of claim 15, wherein the virtual screen includes a second virtual object thereon;
the head-mounted device further comprises:
a first display module for displaying M identifiers on the second virtual object, each identifier indicating different information;
the M identifiers comprise a first identifier, the first identifier indicates the first information, and M is a positive integer.
17. The head-mounted device of claim 16, wherein the second virtual object comprises at least one face, and wherein the identifiers displayed on different faces are used to indicate different types of information.
18. The head-mounted device of claim 16, wherein the first input is configured to display the first identifier in the area where the first face of the first virtual sub-object is located.
19. The head-mounted device of claim 18, wherein the first receiving module is specifically configured to:
receiving a first sub-input of the first identifier and a second sub-input of the first surface of the first virtual sub-object from a user, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first surface of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture;
or, receiving a first input that a user drags the first identifier to the first surface of the first virtual sub-object.
20. The head-mounted device of claim 18, wherein the first receiving module is specifically configured to:
receiving a third sub-input of the first identifier by a user, wherein the third sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, the position where the hand is located is an area where a first surface of the first virtual sub-object is located, and the third sub-input comprises a third gesture;
the first sending module includes:
the first sending unit is used for sending first information to a first contact person associated with the first surface of the first virtual sub-object under the condition that a first preset condition is met;
wherein, the meeting of the first preset condition comprises: the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time, or receives a second input of the user, wherein the second input comprises a fourth gesture.
21. The head-mounted apparatus of claim 15, further comprising:
the first communication module is used for establishing communication connection with the first contact person;
and the second display module is used for displaying the information content of the first information in the first space area on the virtual screen.
22. The head-mounted device of claim 21, wherein the second display module is specifically configured to:
displaying the information content of the first information in a first space area, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
23. The head-mounted apparatus of claim 15, further comprising:
the second receiving module is used for receiving third input of the user under the condition of receiving second information sent by the first contact person;
the second communication module is used for responding to the third input and establishing communication connection with the first contact person;
and the third display module is used for displaying the information content of the second information in a second space area.
24. The head-mounted apparatus of claim 15, further comprising:
the third receiving module is used for receiving fourth input of the user to the target surfaces of the T second virtual sub-objects;
the first sending module specifically includes:
and the second sending unit is used for sending the first information to a first contact and T second contacts which are associated with the first surface of the first virtual sub-object, wherein the T second contacts are associated with target surfaces of the T second virtual sub-objects, and T is a positive integer.
25. The head-mounted device of claim 15, wherein the first face of the first virtual sub-object is oriented the same as the virtual screen;
the head-mounted device further comprises:
a fourth receiving module, configured to receive a fifth input to the first virtual sub-object from the user;
a rotation module for rotating, in response to the fifth input, the S virtual sub-objects such that second faces of the S virtual sub-objects face the user;
wherein S is a positive integer and is less than or equal to N.
26. The head-mounted apparatus of claim 15, wherein the head-mounted apparatus comprises a camera;
the head-mounted device further comprises:
the acquisition module is used for acquiring images acquired by the camera;
and the fourth display module is used for displaying a first virtual object in a first area of a virtual screen under the condition that the image comprises the target object, wherein the first area is an area corresponding to the area where the target object is located.
27. The head-mounted device of claim 26, wherein the second area of the virtual screen comprises a second identification;
the head-mounted device further comprises:
a fifth receiving module, configured to receive a sixth input of the second identifier and the third spatial region from the user;
and the fifth display module is used for responding to the sixth input, displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where the target real object is located.
28. The head-mounted device of claim 15, wherein the N virtual sub-objects are separated by a separation indicator.
29. A head-mounted device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information sharing method according to any one of claims 1 to 14.
30. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the information sharing method according to any one of claims 1 to 14.
CN202010031689.XA 2020-01-13 2020-01-13 Information sharing method, head-mounted device, and medium Pending CN111258482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031689.XA CN111258482A (en) 2020-01-13 2020-01-13 Information sharing method, head-mounted device, and medium

Publications (1)

Publication Number Publication Date
CN111258482A true CN111258482A (en) 2020-06-09

Family

ID=70946853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031689.XA Pending CN111258482A (en) 2020-01-13 2020-01-13 Information sharing method, head-mounted device, and medium

Country Status (1)

Country Link
CN (1) CN111258482A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100093400A1 (en) * 2008-10-10 2010-04-15 Lg Electronics Inc. Mobile terminal and display method thereof
US20100169836A1 (en) * 2008-12-29 2010-07-01 Verizon Data Services Llc Interface cube for mobile device
CN102325215A (en) * 2011-05-31 2012-01-18 宇龙计算机通信科技(深圳)有限公司 Message sending method and mobile terminal
US20130263059A1 (en) * 2012-03-28 2013-10-03 Innovative Icroms, S.L. Method and system for managing and displaying mutlimedia contents
CN104168351A (en) * 2013-05-20 2014-11-26 北京三星通信技术研究有限公司 Method and device for processing contact information
CN108604119A (en) * 2016-05-05 2018-09-28 谷歌有限责任公司 Virtual item in enhancing and/or reality environment it is shared
CN109189288A (en) * 2017-09-05 2019-01-11 南京知行新能源汽车技术开发有限公司 Data processing system, computer implemented method and non-transitory machine-readable media
CN108834083A (en) * 2018-05-22 2018-11-16 朱小军 A kind of multi-function telephones communication system
CN109471742A (en) * 2018-11-07 2019-03-15 Oppo广东移动通信有限公司 Information processing method, device, electronic equipment and readable storage medium storing program for executing

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347526A (en) * 2021-07-08 2021-09-03 歌尔科技有限公司 Sound effect adjusting method and device of earphone and readable storage medium
CN113347526B (en) * 2021-07-08 2022-11-22 歌尔科技有限公司 Sound effect adjusting method and device of earphone and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination