CN111258482B - Information sharing method, head-mounted device and medium


Info

Publication number
CN111258482B
Authority
CN
China
Prior art keywords
virtual, input, sub, information, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010031689.XA
Other languages
Chinese (zh)
Other versions
CN111258482A (en)
Inventor
陈喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010031689.XA priority Critical patent/CN111258482B/en
Publication of CN111258482A publication Critical patent/CN111258482A/en
Application granted granted Critical
Publication of CN111258482B publication Critical patent/CN111258482B/en


Classifications

    • G06F3/04842: Interaction techniques based on graphical user interfaces [GUI]; selection of displayed objects or displayed text elements
    • G06F1/163: Constructional details or arrangements for portable computers; wearable computers, e.g. on a belt
    • G06F3/04847: Interaction techniques based on graphical user interfaces [GUI]; interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention disclose an information sharing method, a head-mounted device and a medium, relate to the field of communication technology, and can solve the problems of a cumbersome information sharing process and inconvenient operation in the prior art. The method comprises: receiving a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; and, in response to the first input, sending first information to a first contact associated with the first face of the first virtual sub-object. The first virtual object is a three-dimensional virtual object and comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. Information can thus be shared quickly, and the operation is simple and convenient.

Description

Information sharing method, head-mounted device and medium
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to an information sharing method, a head-mounted device, and a medium.
Background
In the interaction logic of existing instant messaging software, transmitting and sharing text, pictures, audio-video information and other content between users requires a large number of finger operations on the screen, such as tapping and swiping; the process is cumbersome and the operation is inconvenient.
Disclosure of Invention
Embodiments of the present invention provide an information sharing method that can solve the problems of a cumbersome information sharing process and inconvenient operation in the prior art.
To solve the above technical problems, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an information sharing method, including:
receiving a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; and
in response to the first input, sending first information to a first contact associated with the first face of the first virtual sub-object;
wherein the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a head-mounted device, including:
a first receiving module, configured to receive a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen;
a first sending module, configured to send first information to a first contact associated with the first face of the first virtual sub-object in response to the first input;
wherein the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a head-mounted device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program when executed by the processor implements the steps of the information sharing method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the information sharing method according to the first aspect.
In the embodiments of the present invention, the head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen, and in response to the first input sends first information to a first contact associated with the first face of the first virtual sub-object, wherein the first virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. Information can thus be shared quickly, and the operation is simple and convenient.
Drawings
FIG. 1 is a flowchart of an information sharing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a virtual object of an information sharing method according to an embodiment of the present invention;
fig. 3 (a) is one of schematic diagrams of sending information to a contact according to the information sharing method provided by the embodiment of the present invention;
fig. 3 (b) is a second schematic diagram of sending information to a contact in the information sharing method according to the embodiment of the present invention;
fig. 4 is a schematic diagram of displaying information content of an information sharing method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of sending information to multiple contacts in the information sharing method provided by the embodiment of the present invention;
fig. 6 (a) is a schematic diagram of a rotating first virtual sub-object of an information sharing method according to an embodiment of the present invention;
FIG. 6 (b) is a schematic diagram illustrating that each face of a virtual sub-object of the information sharing method according to the embodiment of the present invention represents different types of contacts;
Fig. 7 (a) is a schematic diagram illustrating setting a display position of a virtual object in the information sharing method according to the embodiment of the present invention;
Fig. 7 (b) is a schematic diagram of a virtual object displayed in a target area according to an embodiment of the information sharing method of the present invention;
Fig. 8 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
fig. 9 is a schematic hardware diagram of a head-mounted device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, and are not used to describe a particular order of inputs.
In embodiments of the invention, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, the meaning of "a plurality of" means two or more, for example, the meaning of a plurality of processing units means two or more; the plurality of elements means two or more elements and the like.
The embodiments of the present invention provide an information sharing method: receiving a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; and, in response to the first input, sending first information to a first contact associated with the first face of the first virtual sub-object; wherein the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. Information can thus be shared quickly, and the operation is simple and convenient, which solves the problems of a cumbersome information sharing process and inconvenient operation in the prior art.
Virtual Reality (VR) technology is a computer simulation technology for creating and experiencing a virtual world. A computer generates a simulated environment, and the user is immersed in that environment through system simulation of interactive three-dimensional dynamic views and physical behaviors based on multi-source information fusion.
Augmented Reality (AR) technology integrates real-world information and virtual-world information. By superimposing virtual information content onto the real world through various sensing devices, real-world content and virtual information content can be presented simultaneously in the same picture and the same space, enabling natural interaction between the user and the virtual environment.
AR glasses move the imaging system away from the lenses by means of optical imaging elements such as optical waveguides, so that the imaging system does not block the external view. The optical waveguide is a high-transmittance medium, similar to an optical fiber, that guides light waves propagating within it; it combines the light output by the imaging system with the light reflected from the real scene and delivers the combined light to the human eye. Hand images captured by the camera are processed and analyzed with computer vision algorithms to achieve hand tracking and recognition.
Mixed Reality (MR) technology combines virtual information with a view of the real world, or adds a virtual representation of a real-world object to a virtual environment.
The head-mounted device in the embodiment of the invention can be VR glasses, AR glasses, MR glasses, VR helmets, AR helmets, MR helmets and the like.
According to the related art, various head-mounted devices may sense an acceleration, an angular acceleration, or a direction of tilting, and display a screen corresponding to the sensed information. The head mounted device may change and display the screen based on the movement of the user.
It should be noted that, in the embodiment of the present invention, the first head-mounted device and the second head-mounted device may be the same head-mounted device (for example, AR glasses), or may be different head-mounted devices (for example, the first head-mounted device is AR glasses, and the second head-mounted device is a VR helmet), which is not limited in the embodiment of the present invention.
The virtual screen in the embodiment of the invention is a virtual reality screen, an augmented reality screen or a mixed reality screen of the head-mounted device.
The virtual screen in the embodiment of the invention can be any carrier which can be used for displaying the content projected by the projection equipment when the content is displayed by adopting the AR technology. The projection device may be a projection device using AR technology, for example, a head-mounted device or an AR device in the embodiment of the present invention.
When the AR technology is adopted to display the content on the virtual screen, the projection device may project the virtual scene acquired (or internally integrated) by the projection device, or the virtual scene and the real scene onto the virtual screen, so that the virtual screen may display the content, thereby displaying the effect of overlapping the real scene and the virtual scene to the user.
In connection with different scenarios where AR technology is applied, the virtual screen may generally be any possible carrier, such as a display screen of an electronic device (e.g. a mobile phone), a lens of AR glasses, a windshield of a car, a wall of a room, etc.
The following describes an exemplary process of displaying contents on a virtual screen using AR technology, taking a virtual screen as a display screen of an electronic device, lenses of AR glasses, and a windshield of an automobile, respectively, as examples.
In one example, when the virtual screen is a display screen of an electronic device, the projection device may be the electronic device. The electronic equipment can acquire the real scene in the area where the electronic equipment is located through the camera of the electronic equipment, the real scene is displayed on the display screen of the electronic equipment, and then the electronic equipment can project the virtual scene acquired (or internally integrated) by the electronic equipment onto the display screen of the electronic equipment, so that the virtual scene can be displayed in the real scene in a superimposed manner, and further, a user can see the effect of the superimposed real scene and virtual scene through the display screen of the electronic equipment.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the virtual scene acquired (or integrated inside) onto the lenses of the AR glasses, so that the user can see the display effect after the real scene and the virtual scene are overlapped through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When a user is located in an automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the virtual scene acquired (or integrated inside) by the projection device onto the windshield of the automobile, so that the user can see the display effect of the overlapped real scene and virtual scene through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, and may be, for example, a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the virtual scene that it acquires (or integrates internally) into the real space, so that the user can see the display effect of the real scene and the virtual scene superimposed in the real space.
The virtual object in the embodiment of the present invention is an object in virtual information, and optionally, the virtual object is content that is displayed on a screen or a lens of the head-mounted device, corresponds to a surrounding environment that the user is watching, but does not exist as a physical embodiment outside the display.
The virtual object may be an AR object. It should be noted that, the AR object may be understood as: the AR device analyzes the real object to obtain characteristic information of the real object (e.g., type information of the real object, appearance information (e.g., structure, color, shape, etc.) of the real object, and position information of the real object in space, etc.), and constructs an AR model in the AR device according to the characteristic information.
Optionally, in the embodiment of the present invention, the target virtual object may be a virtual image, a virtual pattern, a virtual character, or a virtual picture.
The head-mounted device in the embodiment of the invention can be a head-mounted device with an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present invention is not limited specifically.
The execution body of the information sharing method provided by the embodiment of the invention can be the head-mounted device, or can be a functional module and/or a functional entity capable of realizing the method in the head-mounted device, and specifically can be determined according to actual use requirements, and the embodiment of the invention is not limited. The information sharing method provided by the embodiment of the invention is exemplified by a head-mounted device.
Referring to fig. 1, an embodiment of the present invention provides an information sharing method, which is applied to a head-mounted device, and the method may include steps 101 to 102 described below.
Step 101, receiving a first input of a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen by a user.
Optionally, the first input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The first input may also be a first operation. When the first input is executed, the first input may be a single-point input, such as a sliding input, a clicking input, etc. by using a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
Optionally, the head-mounted device comprises a camera, wherein the camera is used for collecting hand images of a user, and gesture actions of the user are obtained through a gesture recognition technology.
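As a non-limiting illustration of how camera-based gesture recognition can drive the inputs described below, the following Kotlin sketch maps recognized hand poses to abstract grab, move and release events. The HandPose type, the pinch criterion and the event names are assumptions made for illustration only; they are not the recognition technology used by the head-mounted device.

// Hypothetical hand-pose result produced by a gesture-recognition pipeline.
data class HandPose(val isPinching: Boolean, val x: Float, val y: Float, val z: Float)

// Abstract input events consumed by the information sharing logic.
sealed class InputEvent {
    data class Grab(val x: Float, val y: Float, val z: Float) : InputEvent()
    data class Move(val x: Float, val y: Float, val z: Float) : InputEvent()
    object Release : InputEvent()
}

// Classifies successive hand poses into grab / move / release events.
class GestureClassifier {
    private var grabbing = false
    fun onPose(pose: HandPose): InputEvent? = when {
        pose.isPinching && !grabbing -> { grabbing = true; InputEvent.Grab(pose.x, pose.y, pose.z) }
        pose.isPinching && grabbing -> InputEvent.Move(pose.x, pose.y, pose.z)
        else -> if (grabbing) { grabbing = false; InputEvent.Release } else null
    }
}

fun main() {
    val classifier = GestureClassifier()
    listOf(HandPose(true, 0.1f, 0.2f, 0.5f), HandPose(true, 0.3f, 0.2f, 0.5f), HandPose(false, 0.3f, 0.2f, 0.5f))
        .forEach { println(classifier.onPose(it)) }
}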
Step 102, in response to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object;
wherein the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer.
Optionally, the camera may acquire image information and depth information, model a space by using a three-dimensional reconstruction algorithm to obtain three-dimensional space information, and the virtual object is a three-dimensional model designed by a three-dimensional modeling method.
Optionally, the first information includes, but is not limited to, files, text, etc., such as video, pictures, audio, text, etc.
Optionally, the user places the first information onto the first face of the first virtual sub-object; after the first face of the first virtual sub-object is highlighted, the user releases the first information by opening the hand, and the first information can then be sent to the first contact.
In the embodiments of the present invention, the head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen, and in response to the first input sends first information to a first contact associated with the first face of the first virtual sub-object, wherein the first virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. Information can thus be shared quickly, and the operation is simple and convenient.
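A minimal Kotlin sketch of the data arrangement described above and of the step 101 to step 102 flow follows. The class names, the sendInformation helper and the use of println are illustrative assumptions and not the actual implementation of the head-mounted device.

// One face of a virtual sub-object; a face may be associated with a contact.
data class Face(val id: Int, val contact: String?)

// A virtual sub-object (for example, a small cube) with at least one face.
data class VirtualSubObject(val faces: List<Face>)

// The first virtual object: a three-dimensional object made up of N virtual sub-objects.
data class VirtualObject(val subObjects: List<VirtualSubObject>)

// Hypothetical sending helper; a real device would call its own messaging service.
fun sendInformation(contact: String, information: String) =
    println("sending \"$information\" to $contact")

// Steps 101-102: a first input on a face triggers sending the first information
// to the contact associated with that face.
fun onFirstInput(obj: VirtualObject, subIndex: Int, faceIndex: Int, firstInformation: String) {
    val face = obj.subObjects[subIndex].faces[faceIndex]
    face.contact?.let { sendInformation(it, firstInformation) }
}

fun main() {
    val cube = VirtualSubObject(List(6) { Face(it, if (it == 0) "first contact" else null) })
    val firstVirtualObject = VirtualObject(listOf(cube))
    onFirstInput(firstVirtualObject, subIndex = 0, faceIndex = 0, firstInformation = "photo.jpg")
}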
Optionally, the method further comprises: the N virtual sub-objects are separated by a separation identifier.
Optionally, the separation identifier is a non-transparent line, where non-transparent means the transparency of the line is less than 100%; alternatively, the separation identifier is a gap of a certain width, or the like.
Optionally, a second virtual object is included on the virtual screen;
Prior to step 101, further comprising:
step 1001, displaying M identifiers on the second virtual object, where each identifier indicates different information;
the M identifiers comprise first identifiers, the first identifiers indicate the first information, and M is a positive integer.
Optionally, the second virtual object includes different information, such as text, audio and video files, and the like, and these information are displayed on the second virtual object through corresponding identifiers, where the identifiers may be icons, symbols, and the like, and each identifier indicates different information.
Illustratively, as shown in fig. 2, the first virtual object 201 includes N virtual sub-objects, each virtual sub-object includes at least one face, and different faces are associated with different contacts; the first face 20111 of the first virtual sub-object 2011 is associated with a first contact. The second virtual object 202 displays M identifiers, each identifier indicates different information, and the first identifier 2021 indicates the first information.
Optionally, after the first information is sent to the first contact associated with the first face of the first virtual sub-object, the first identifier is still displayed on the second virtual object.
Optionally, in step 101, the first input is configured to display the first identifier to an area where the first face of the first virtual sub-object is located.
Optionally, step 101 specifically includes step 1011 or step 1012:
Step 1011, receiving a first sub-input of a user on the first identifier and a second sub-input on the first face of the first virtual sub-object, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first face of the first virtual sub-object; the first sub-input includes a first gesture and the second sub-input includes a second gesture.
Optionally, the first sub-input or the second sub-input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, etc., which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The first sub-input or the second sub-input may also be the first sub-operation or the second sub-operation. When the first sub-input or the second sub-input is executed, the first sub-input or the second sub-input can be a single-point input, such as sliding input, clicking input and the like by adopting a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
Illustratively, the first sub-input is an input to click on the first identity and the second sub-input is an input to click on the first face of the first virtual sub-object. For example, the user clicks on the first identifier and then clicks on the first face of the first virtual child object.
Step 1012, receiving a first input from a user dragging the first identifier to a first side of the first virtual sub-object.
Illustratively, the user points a finger at the area of the virtual screen where the first identifier is located and drags the first identifier to the first face of the first virtual sub-object.
Optionally, pointing the finger at the area where the first identifier is located on the virtual screen may include, but is not limited to, the user placing the finger within the area where the first identifier is located on the virtual screen, or the user pointing the finger toward that area from a distance, that is, the finger is not inside the area where the first identifier is located but is aimed at it from a certain distance away.
Optionally, dragging the first identifier to the first face of the first virtual sub-object may include, but is not limited to, the user dragging the first identifier onto the first face of the first virtual sub-object on the virtual screen, or the user dragging the first identifier to an area corresponding to the first face of the first virtual sub-object on the virtual screen; that is, when the first identifier is dragged to the area corresponding to the first face of the first virtual sub-object, the projection of the first identifier onto the plane where the first virtual object is located falls within the first face of the first virtual sub-object. For example, the first identifier is dragged to an area directly in front of the first face of the first virtual sub-object, where directly in front refers to the direction closer to the user.
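The "projection falls within the first face" condition above can be checked with a simple geometric test. The sketch below assumes, purely for illustration, that each face occupies an axis-aligned rectangle in a plane parallel to the virtual screen; the names are hypothetical.

// Axis-aligned rectangular region occupied by a face in the plane of the first virtual object.
data class FaceRegion(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Returns true if the dragged identifier, projected onto the plane of the first
// virtual object, falls within the region of the first face.
fun projectionHitsFace(identifierX: Float, identifierY: Float, region: FaceRegion): Boolean =
    identifierX in region.left..region.right && identifierY in region.top..region.bottom

fun main() {
    val firstFace = FaceRegion(left = 0.2f, top = 0.2f, right = 0.4f, bottom = 0.4f)
    println(projectionHitsFace(0.3f, 0.3f, firstFace))  // true: projection falls within the face
}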
Optionally, step 101 specifically includes:
Step 1013, receiving a third sub-input of the user on the first identifier, wherein the third sub-input is used to control the first identifier to follow the user's hand and move to the position of the hand, the position of the hand being the area where the first face of the first virtual sub-object is located, and the third sub-input includes a third gesture.
Optionally, the third sub-input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, etc., which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The third sub-input may also be a third sub-operation. When the third sub-input is executed, the third sub-input may be a single-point input, such as a sliding input, a clicking input, etc. by using a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
For example, in the third gesture the user's hand points to the area where the first identifier is located and makes a gesture of picking up the first identifier; the first identifier then follows the movement of the user's hand, moving wherever the hand moves. When the user's hand moves to the first face of the first virtual sub-object, the first identifier moves to the first face of the first virtual sub-object.
Optionally, the movement of the user's hand to the first face of the first virtual sub-object may include, but is not limited to, the user's hand moving onto the first face of the first virtual sub-object on the virtual screen, or the user's hand moving to an area corresponding to the first face of the first virtual sub-object on the virtual screen; that is, when the user's hand moves to the area corresponding to the first face of the first virtual sub-object, the projection of the user's hand onto the plane where the virtual object is located falls within the first face of the first virtual sub-object. For example, the user's hand moves to an area directly in front of the first face of the first virtual sub-object, where directly in front refers to the direction closer to the user.
Optionally, the movement of the first identifier to the first face of the first virtual sub-object may include, but is not limited to, the first identifier moving onto the first face of the first virtual sub-object on the virtual screen, or the first identifier moving to an area corresponding to the first face of the first virtual sub-object on the virtual screen; that is, when the first identifier moves to the area corresponding to the first face of the first virtual sub-object, the projection of the first identifier onto the plane where the virtual object is located falls within the first face of the first virtual sub-object. For example, the first identifier moves to an area directly in front of the first face of the first virtual sub-object, where directly in front refers to the direction closer to the user.
In step 102, the sending the first information to the first contact associated with the first surface of the first virtual sub-object specifically includes:
Step 1021, sending first information to a first contact associated with a first face of the first virtual sub-object when a first preset condition is met;
Wherein meeting the first preset condition includes: the user's hand stays in the area where the first face of the first virtual sub-object is located for a first preset time period, or a second input of the user is received, wherein the second input comprises a fourth gesture.
Optionally, the second input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The second input may also be a second operation. When the second input is executed, the second input may be a single-point input, such as a sliding input, a clicking input, etc. with a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
Illustratively, the fourth gesture is the user throwing the first identifier toward the first face of the first virtual sub-object, or the user placing the first identifier on the first face of the first virtual sub-object, or the user stretching out a finger, or the like.
Optionally, the user placing the first identifier on the first face of the first virtual sub-object may include, but is not limited to, placing the first identifier on the first face of the first virtual sub-object on the virtual screen, or placing the first identifier inside the first virtual sub-object on the virtual screen in the case that the first face of the first virtual sub-object faces the user, or placing the first identifier on an area corresponding to the first face of the first virtual sub-object on the virtual screen; that is, when the first identifier is placed in an area corresponding to the first face of the first virtual sub-object, the projection of the first identifier onto the plane where the virtual object is located falls within the first face of the first virtual sub-object. Placing means that the user releases the first identifier by opening the hand. For example, the first identifier is placed in an area directly in front of the first face of the first virtual sub-object, where directly in front refers to the direction closer to the user.
For example, the user's hand points to the area where the first identifier is located and makes a gesture of picking up the first identifier; the first identifier then follows the movement of the user's hand. The user's hand moves to the area directly in front of the first face of the first virtual sub-object, so the first identifier moves to that area; the user's hand then stays in the area directly in front of the first face of the first virtual sub-object for a first preset period of time, for example 3 seconds, after which the first information is sent to the first contact associated with the first face of the first virtual sub-object.
Illustratively, as shown in fig. 3 (a), the user's hand is placed in the area where the first identifier 2021 is located and makes a gesture of grabbing the first identifier, so the first identifier is taken out of the second virtual object. The user's hand, holding the first identifier, moves to the first face 20111 of the first virtual sub-object 2011, and the first identifier moves along with the user's hand. As shown in fig. 3 (b), in the case that the first face of the first virtual sub-object faces the user, the user places the first identifier 2021 in the first virtual sub-object 2011 and releases the first identifier 2021, so that the first information can be sent to the first contact.
In the embodiment of the invention, the user can send the first information to the first contact with a few simple gestures, and the operation is simple and quick.
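The first preset condition in step 1021, the hand dwelling over the first face for a first preset period or a fourth gesture being received, can be sketched as follows. The 3-second value mirrors the example above; the class name, the timing source and the callback are assumptions for illustration only.

// Fires the send action once the user's hand has stayed over the first face for the
// first preset period, or immediately when a fourth gesture (second input) arrives.
class DwellTrigger(private val presetMillis: Long = 3_000, private val onTrigger: () -> Unit) {
    private var enterTime: Long? = null

    // Called for every frame in which the hand is over the first face.
    fun onHandOverFace(nowMillis: Long) {
        val start = enterTime ?: nowMillis.also { enterTime = it }
        if (nowMillis - start >= presetMillis) { onTrigger(); enterTime = null }
    }

    // Called when the hand leaves the area of the first face.
    fun onHandLeftFace() { enterTime = null }

    // Called when the fourth gesture is recognized.
    fun onFourthGesture() { onTrigger(); enterTime = null }
}

fun main() {
    val trigger = DwellTrigger { println("send first information to first contact") }
    trigger.onHandOverFace(nowMillis = 0)
    trigger.onHandOverFace(nowMillis = 3_000)  // 3 seconds elapsed: triggers the send
}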
Optionally, after step 102, the method further includes:
Step 103, establishing a call connection with the first contact, and displaying the information content of the first information in a first space area on the virtual screen.
Optionally, the plane in which the first spatial area is located may be the same plane as the plane in which the first surface of the first virtual sub-object is located, or may be a plane different from the plane in which the first surface of the first virtual sub-object is located, for example, the plane in which the first spatial area is located is parallel to the plane in which the first surface of the first virtual sub-object is located, and the plane in which the first spatial area is located directly in front of the plane in which the first surface of the first virtual sub-object is located, where the directly in front refers to a direction closer to the user.
Illustratively, the user takes the first identifier out of the second virtual object; when the first face of the first virtual sub-object faces the user, the user places the first identifier in the first virtual sub-object and releases the first identifier by opening the hand, so that the first information can be sent to the first contact, a call connection is established with the first contact, and the information content of the first information is displayed in a first space area on the virtual screen.
Optionally, after the first contact agrees to receive the first information, a call connection is established with the first contact, and information content of the first information is displayed in a first space area on the virtual screen.
Optionally, in step 103, displaying the information content of the first information in a first space area on the virtual screen specifically includes:
Step 1031, displaying the information content of the first information in a first space area, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on the information content display interface of the first information.
Optionally, the head-mounted device is a first head-mounted device. The camera captures images of the user's hand and obtains motion information of the hand, and a virtual identifier representing the motion of the user's hand is displayed on the display interface of the first information. As shown in fig. 4, the virtual identifier 401 is a model of the user's hand, and its display state is updated in real time as the motion or position of the user's hand changes. The display picture of the first information and the virtual identifier is synchronized to the virtual screen of the second head-mounted device of the first contact, and the synchronized picture displayed on that virtual screen is updated in real time as the information content of the first information or the display state of the virtual identifier changes. That is, the first contact can see the motion of the user's hand and the displayed content of the first information, and the user can explain and demonstrate the content of the first information to the first contact by combining voice with hand movements.
Optionally, the position and the form of the virtual identifier are updated in real time according to the change of the hand motion of the user or the change of the position.
In the embodiment of the invention, after the first information is sent to the first contact, the information content of the first information and a virtual identifier indicating the operation position and gesture information of the user's hand are displayed on the virtual screen, and the display picture of the first information and the virtual identifier is synchronized to the virtual screen of the second head-mounted device of the first contact, so that the user can explain and demonstrate the content of the first information to the first contact by combining voice with hand movements.
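The real-time synchronization described above can be sketched as a stream of display-state updates pushed to the first contact's device. The DisplayState fields, the PeerChannel interface and the transport are assumptions made for illustration, not the device's actual communication protocol.

// Snapshot of what the sharing user currently sees: the displayed first information
// and the position and gesture of the virtual identifier that represents the user's hand.
data class DisplayState(val contentId: String, val handX: Float, val handY: Float, val gesture: String)

// Hypothetical channel to the first contact's (second) head-mounted device.
interface PeerChannel { fun send(state: DisplayState) }

// Pushes every local change so the picture on the peer's virtual screen stays
// synchronized in real time, as described in step 1031.
class ScreenSynchronizer(private val channel: PeerChannel) {
    fun onLocalUpdate(state: DisplayState) = channel.send(state)
}

fun main() {
    val channel = object : PeerChannel {
        override fun send(state: DisplayState) = println("sync to second head-mounted device -> $state")
    }
    val sync = ScreenSynchronizer(channel)
    sync.onLocalUpdate(DisplayState(contentId = "first information", handX = 0.4f, handY = 0.6f, gesture = "point"))
}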
Optionally, the method further comprises:
step 104, receiving a third input of the user under the condition that the second information sent by the first contact is received.
Optionally, the third input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The third input may also be a third operation. When the third input is executed, the third input may be a single-point input, such as a sliding input, a clicking input, etc. with a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
Optionally, the third input may be a gesture, illustratively a finger of the user pointing at the second information.
Step 105, in response to the third input, establishing a call connection with the first contact, and displaying the information content of the second information in a second space area.
Optionally, displaying the information content of the second information in a second space area, and displaying a target virtual identifier, wherein the target virtual identifier is used for indicating the operation position and gesture information of the hand of the first contact on the information content display interface of the second information.
Optionally, the plane of the second spatial area may be the same plane as the plane of the first surface of the first virtual sub-object, or may be a plane different from the plane of the first surface of the first virtual sub-object, for example, the plane of the second spatial area is parallel to the plane of the first surface of the first virtual sub-object, and the plane of the second spatial area is located right in front of the plane of the first surface of the first virtual sub-object, where the right front refers to a direction closer to the user.
Illustratively, the user receives second information sent by the first contact, receives a third input of the user, such as clicking on the second information, establishes a call connection with the first contact, and displays information content of the second information in a second spatial region.
Optionally, the method further comprises:
Step 106, receiving a fourth input of the user to the target surfaces of the T second virtual sub-objects.
Optionally, the fourth input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The fourth input may also be a fourth operation. When the fourth input is executed, the fourth input may be a single-point input, such as a sliding input, a clicking input, etc. with a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
In an exemplary embodiment, the user makes a gesture of grabbing the first identifier, and the first identifier moves along with the user's hand. As shown in fig. 5, when the target faces of the T second virtual sub-objects face the user, the user places the first identifier 2021 in the first of the T second virtual sub-objects, sub-object 2012, without opening the hand to put it down; the first second virtual sub-object 2012 then copies and retains the first identifier as identifier 20211. The user continues to hold the grabbing gesture, takes the first identifier in hand out of the first second virtual sub-object 2012 and places it in the second of the T second virtual sub-objects, again without opening the hand; that sub-object likewise copies and retains the first identifier. The user repeats this operation until the first identifier has been placed in each of the T second virtual sub-objects, and finally releases the first identifier.
Optionally, after the first identifier is placed in the T-th second virtual sub-object of the T second virtual sub-objects and the first identifier in the hand is released, the first identifier is still displayed on the second virtual object.
Illustratively, the user clicks on the first identifier and clicks on the target surface of the T second virtual sub-objects.
Illustratively, the user's finger points to the first identifier and to the target surface of the T second virtual sub-objects.
Illustratively, the user drags the first identifier to the target surfaces of the T second virtual sub-objects, respectively.
For example, the user clicks the first identifier, and in a case that the target surface of the T second virtual sub-objects faces the user, a gesture is made to pull out the T second virtual sub-objects.
In step 102, the sending the first information to the first contact associated with the first surface of the first virtual sub-object specifically includes:
Step 1022, sending the first information to the first contact associated with the first face of the first virtual sub-object and to T second contacts associated with the target faces of the T second virtual sub-objects, where T is a positive integer.
Illustratively, if the user's finger points to the first identifier and points to the first face of the first virtual sub-object and the target faces of the T second virtual sub-objects, then the first information is sent to the first contact and the T second contacts.
Optionally, after the first information is sent to the first contact and T second contacts associated with the first face of the first virtual sub-object, the first identifier is still displayed on the second virtual object.
Optionally, after the first information is sent to the first contact and the T second contacts associated with the first face of the first virtual sub-object, a multiparty call connection is established with the first contact and the T second contacts.
Optionally, after sending the first information to the first contact and T second contacts associated with the first face of the first virtual sub-object, information content of the first information is displayed.
In the embodiment of the invention, the user can send the first information to the contacts through some simple gestures, and establish multiparty call connection with the contacts, so that the operation is simple and quick.
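Step 1022 extends the single-contact case to the first contact plus T second contacts. The hedged Kotlin sketch below uses a println placeholder where a real device would call its own messaging and call services; the function names are assumptions.

// Step 1022: send the first information to the first contact and to the T second
// contacts, then establish a multiparty call connection with all of them.
fun sendToAll(firstContact: String, secondContacts: List<String>, firstInformation: String) {
    val recipients = listOf(firstContact) + secondContacts
    recipients.forEach { contact ->
        // Hypothetical messaging call; a real device would use its own service.
        println("sending \"$firstInformation\" to $contact")
    }
    println("establishing multiparty call with ${recipients.joinToString()}")
}

fun main() = sendToAll("first contact", listOf("second contact A", "second contact B"), "report.pdf")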
Optionally, the first face of the first virtual sub-object is oriented the same as the virtual screen.
Specifically, the virtual screen faces the user, and the first face of the first virtual sub-object faces the user.
The method further comprises the steps of:
step 107, receiving a fifth input of the user to the first virtual sub-object.
Optionally, the fifth input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, etc., which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The fifth input may also be a fifth operation. When the fifth input is executed, the fifth input may be a single-point input, such as a sliding input, a clicking input, etc. with a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
Step 108, responding to the fifth input, rotating S virtual sub-objects so that the second faces of the S virtual sub-objects face the user;
Wherein S is a positive integer, and S is less than or equal to N.
Illustratively, as shown in fig. 6 (a), each virtual sub-object has 6 faces, and a user can rotate any one virtual sub-object up and down, left and right through gestures. Receiving a fifth input from the user to the first virtual sub-object, such as a gesture to rotate the first virtual sub-object to the left, the first virtual sub-object rotates to the left, and the second surface 20112 of the first virtual sub-object faces the user.
Optionally, in response to the fifth input, all virtual sub-objects are rotated, the second faces of all virtual sub-objects facing the user.
Optionally, in response to the fifth input, virtual sub-objects in the same row as the first virtual sub-object are rotated with their second faces towards the user.
Optionally, in response to the fifth input, virtual sub-objects in the same column as the first virtual sub-object are rotated with their second faces towards the user.
Optionally, different faces of the virtual sub-object represent contacts of different categories. Illustratively, as shown in fig. 6 (b), the category of the contact corresponding to the first face 20131 of the virtual sub-object 2013 is colleague, the category of the contact corresponding to the second face 20132 of the virtual sub-object 2013 is friend, and the category of the contact corresponding to the third face 20133 of the virtual sub-object 2013 is family.
Optionally, in response to the fifth input, all virtual sub-objects rotate, the second faces of all virtual sub-objects face the user, and the types of contacts corresponding to the second faces of all virtual sub-objects are the same.
In the embodiment of the invention, different surfaces of the virtual sub-object represent different kinds of contacts, and a user can rotate the virtual sub-object through a plurality of simple gestures, so that the different surfaces of the virtual sub-object face the user, and the user can see the information of the different kinds of contacts.
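A sketch of the rotation behaviour in steps 107 and 108: each face of a sub-object carries a contact category, and a fifth input rotates S sub-objects so that faces of the requested category face the user. The category names mirror the example of fig. 6 (b); the class and function names are illustrative assumptions.

// Contact categories represented by different faces of a virtual sub-object (fig. 6 (b)).
enum class ContactCategory { COLLEAGUE, FRIEND, FAMILY }

// A sub-object whose faces map to contact categories; facingUser records which
// face is currently turned toward the user.
class RotatableSubObject(private val faceCategories: List<ContactCategory>) {
    var facingUser: Int = 0
        private set

    // Rotates this sub-object until a face of the requested category faces the user.
    fun rotateTo(category: ContactCategory) {
        val target = faceCategories.indexOf(category)
        if (target >= 0) facingUser = target
    }
}

// Fifth input handler (steps 107-108): rotate S sub-objects (here, all of them)
// so that faces of the same category face the user.
fun onFifthInput(subObjects: List<RotatableSubObject>, category: ContactCategory) =
    subObjects.forEach { it.rotateTo(category) }

fun main() {
    val faces = listOf(ContactCategory.COLLEAGUE, ContactCategory.FRIEND, ContactCategory.FAMILY)
    val subObjects = List(4) { RotatableSubObject(faces) }
    onFifthInput(subObjects, ContactCategory.FRIEND)
    println(subObjects.map { it.facingUser })  // every sub-object now shows face index 1 (friend)
}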
Optionally, the second virtual object includes at least one face, and the identifiers displayed by different faces are used for indicating different types of information.
Optionally, the second virtual object may be rotated so that a different face of the second virtual object faces the user, and the user may then see a different type of information. Illustratively, as shown in fig. 6 (a), rotating the second virtual object 202 downward turns another face of the second virtual object toward the user, and a new identifier can be seen.
Optionally, the head-mounted device comprises a camera.
Prior to step 101, further comprising:
Step 1002, acquiring an image acquired by a camera.
The camera captures images of the real environment, where the real environment is the real environment within the user's viewing angle range.
In step 1003, when the image includes a target object, a virtual object is displayed in a first area of a virtual screen, where the first area is an area corresponding to an area where the target object is located, and the virtual object includes the first virtual object and the second virtual object.
Optionally, in the case that the image acquired by the camera in real time does not include the target object, that is, the line of sight of the user leaves the target object, the display of the virtual object is canceled on the virtual screen, and when the line of sight of the user returns to the target object again, the virtual object is displayed on the virtual screen.
Optionally, the first area is the same as the area where the target object is located, or the first area is a part of the area where the target object is located, or the first area includes the area where the target object is located, or the first area is adjacent to the area where the target object is located, for example, the first area is located in front of, above, and the like the area where the target object is located.
Optionally, the case that the image includes a target object includes: the target object appears in the image, or the image includes the target object and the environment around the target object is the target environment. For example, the target real object is a sofa, and the target environment is that a tea table is arranged 0.5 m in front of the sofa, a television is arranged 1 m in front of the tea table, and a water dispenser is arranged 0.3 m to the left of the sofa.
In the case that the image in the real environment collected by the camera includes a target object, a virtual object is displayed in a first area of the virtual screen, and, illustratively, a table is included in the image in the real environment collected by the camera, the virtual object is displayed in the first area of the virtual screen, the first area is located on the table upper surface, or the first area is located directly above the table upper surface.
In the embodiment of the invention, by acquiring the image captured by the camera and displaying the virtual object in the first area of the virtual screen when the image includes the target object, the virtual object can be displayed whenever the user's viewing angle returns to the target area.
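Steps 1002 and 1003 can be sketched as a small per-frame decision: when a camera frame contains the target object, the virtual object is shown in the corresponding first area, otherwise its display is cancelled. The Detection type is a hypothetical placeholder standing in for whatever recognition pipeline the device uses.

// Result of analysing one camera frame: whether the target real object is visible
// and, if so, where the corresponding first area lies on the virtual screen.
data class Detection(val targetVisible: Boolean, val regionX: Float = 0f, val regionY: Float = 0f)

// Shows the virtual object in the first area while the target object is in view
// and cancels the display when the user's line of sight leaves the target object.
class AnchoredDisplay {
    var visible = false
        private set

    fun onFrame(detection: Detection) {
        visible = detection.targetVisible
        if (visible) println("display virtual object at (${detection.regionX}, ${detection.regionY})")
        else println("cancel display of virtual object")
    }
}

fun main() {
    val display = AnchoredDisplay()
    display.onFrame(Detection(targetVisible = true, regionX = 0.5f, regionY = 0.3f))
    display.onFrame(Detection(targetVisible = false))  // line of sight left the target object
}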
Optionally, the second area of the virtual screen includes a second identifier.
Optionally, the second identifier is used to indicate the virtual object.
Prior to step 1002, further comprising:
step 1004, receiving a sixth input of the user to the second identifier and the third spatial region.
Optionally, the sixth input includes, but is not limited to, at least one of a sliding input, a clicking input, a dragging input, a long-press input, a floating touch input, a voice input, and the like, which is specifically set according to actual needs, and the embodiment of the present invention is not limited. The sixth input may also be a sixth operation. When the sixth input is executed, the sixth input may be a single-point input, such as a sliding input, a clicking input, or the like, using a single finger; the input may be a multi-point input, such as a slide input using two fingers, a click input, or the like.
In step 1005, in response to the sixth input, a virtual object is displayed in a third area corresponding to the third spatial area on the virtual screen, where the third spatial area is an area where the target object is located.
Optionally, the third region is the same as the third spatial region, or the third region is part of the third spatial region, or the third region includes the third spatial region, or the third region is adjacent to the third spatial region, such as the third region is located in front of, above, or the like the third spatial region.
Illustratively, as shown in fig. 7 (a), the second identifier 701 is located in a second area 702 of the virtual screen, and a sixth input of the user on the second identifier 701 and the third spatial area 703 is received, for example, the second identifier is dragged to the third spatial area, and the target object is a wall, then as shown in fig. 7 (b), the virtual object 704 is displayed in the third area, which is a part of the third spatial area 703, and the user may continuously adjust the size of the virtual object by using a finger.
The head-mounted device stores the set area and the information of the resized virtual object, for example the spatial coordinates of the virtual object and information about the surrounding environment: the virtual object is arranged on a wall, the right half of the wall includes a door, and the left side of the wall is perpendicular to another wall that includes a window. Image information of the environment surrounding the target object may also be stored. When the user's viewing angle returns to the area where the target object is located, the virtual object is displayed. Further, when the user's viewing angle falls within the third spatial area, the camera captures images of the real environment, the captured images are compared with the previously stored images of the environment surrounding the target object, and when the position information and image information of the target object and its surroundings match, the virtual object is displayed in the third area.
Optionally, the second identifier is always displayed on the virtual screen of the head-mounted device, that is, the user can see the second identifier at any time. The user can drag the second identifier to any one or more spatial areas, the head-mounted device records the space coordinates of the virtual object, and the user can then see the virtual object in all of these spatial areas.
In the embodiment of the invention, by receiving the sixth input of the user on the second identifier and the third spatial region and, in response to the sixth input, displaying the virtual object in the third region corresponding to the third spatial region on the virtual screen, the virtual object can be placed in a plurality of spatial regions by a simple gesture, and the user can see the virtual object in all of these spatial regions.
In the embodiment of the invention, the head-mounted device receives a first input of a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; responsive to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer, so that information can be shared rapidly, and the operation is simple and convenient.
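As a minimal, non-authoritative illustration of the face-to-contact association summarized above, the following Kotlin sketch models a first virtual object whose sub-object faces are each mapped to a contact; the Contact type, the face names, and the sendMessage() helper are hypothetical placeholders.

```kotlin
// Minimal sketch of the face-to-contact association: a first input on the first face of the
// first virtual sub-object sends first information to the contact associated with that face.
data class Contact(val name: String)

// A virtual sub-object has at least one face; different faces are associated with different contacts.
class VirtualSubObject(private val faceContacts: Map<String, Contact>) {
    fun contactOf(face: String): Contact? = faceContacts[face]
}

// The first virtual object is a three-dimensional object made of N virtual sub-objects.
class FirstVirtualObject(val subObjects: List<VirtualSubObject>)

fun sendMessage(contact: Contact, information: String) =
    println("Sending \"$information\" to ${contact.name}")

fun onFirstInput(obj: FirstVirtualObject, subObjectIndex: Int, face: String, firstInformation: String) {
    val contact = obj.subObjects[subObjectIndex].contactOf(face) ?: return
    sendMessage(contact, firstInformation)
}

fun main() {
    val cube = FirstVirtualObject(
        listOf(VirtualSubObject(mapOf("front" to Contact("Alice"), "top" to Contact("Bob"))))
    )
    onFirstInput(cube, subObjectIndex = 0, face = "front", firstInformation = "a photo")
}
```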
As shown in fig. 8, an embodiment of the present invention provides a head-mounted device 800, the head-mounted device 800 including:
A first receiving module 801, configured to receive a first input from a user on a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; a first sending module 802, configured to send, in response to the first input, first information to a first contact associated with a first face of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
Optionally, a second virtual object is included on the virtual screen; the head-mounted device further comprises: the first display module is used for displaying M marks on the second virtual object, wherein each mark indicates different information; the M identifiers comprise first identifiers, the first identifiers indicate the first information, and M is a positive integer.
Optionally, the second virtual object includes at least one face, and the identifiers displayed by different faces are used for indicating different types of information.
Optionally, the first input is configured to display the first identifier to an area where the first face of the first virtual sub-object is located.
Optionally, the first receiving module is specifically configured to: receiving a first sub-input of a user for the first identifier and a second sub-input of a first face of the first virtual sub-object, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first face of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture; or receiving a first input that a user drags the first identifier to the first face of the first virtual sub-object.
Optionally, the first receiving module is specifically configured to: receiving a third sub-input of the user to the first identifier, wherein the third sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first face of the first virtual sub-object is located, and the third sub-input comprises a third gesture; the first sending module includes: the first sending unit is used for sending first information to a first contact person associated with the first face of the first virtual sub-object under the condition that a first preset condition is met; wherein, the meeting the first preset condition includes: and the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time period, or receives a second input of the user, wherein the second input comprises a fourth gesture.
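The first preset condition described above (the hand staying over the first face for a first preset duration, or a second input comprising a fourth gesture) could be tracked roughly as in the sketch below; the 1500 ms duration and the gesture names are assumptions.

```kotlin
// Hedged sketch of the first preset condition: the send is triggered either when the user's
// hand dwells over the first face long enough, or when the second input (fourth gesture) arrives.
enum class Gesture { THIRD_GESTURE, FOURTH_GESTURE }

class SendCondition(private val firstPresetDurationMs: Long = 1500) {
    private var dwellStartMs: Long? = null

    fun onHandOverFirstFace(nowMs: Long): Boolean {
        val start = dwellStartMs ?: nowMs.also { dwellStartMs = it }
        return nowMs - start >= firstPresetDurationMs   // dwell long enough -> send
    }

    fun onHandLeftFirstFace() { dwellStartMs = null }

    fun onSecondInput(gesture: Gesture): Boolean = gesture == Gesture.FOURTH_GESTURE
}

fun main() {
    val condition = SendCondition()
    println(condition.onHandOverFirstFace(nowMs = 0))         // false, dwell just started
    println(condition.onHandOverFirstFace(nowMs = 2000))      // true, preset duration elapsed
    println(condition.onSecondInput(Gesture.FOURTH_GESTURE))  // true, explicit second input
}
```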
Optionally, the headset further comprises: the first call module is used for establishing call connection with the first contact person; and the second display module is used for displaying the information content of the first information in the first space area on the virtual screen.
Optionally, the second display module is specifically configured to: and displaying the information content of the first information in a first space area, and displaying a virtual identifier, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
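A rough sketch of this display behaviour follows; the HandState type, the rendered frame format, and the syncToContactDevice() call (modelled on the frame synchronisation to the first contact's head-mounted device described in the claims) are assumptions for illustration only.

```kotlin
// Sketch of the second display module: the information content is shown in the first spatial
// area together with a virtual identifier tracking the operation position and gesture of the
// user's hand; the resulting frame may then be synchronised to the contact's device.
data class HandState(val x: Float, val y: Float, val gesture: String)

class InformationDisplay(private val firstSpaceArea: String) {
    fun render(content: String, hand: HandState): String {
        val frame = "[$firstSpaceArea] $content | hand at (${hand.x}, ${hand.y}) doing ${hand.gesture}"
        println(frame)
        return frame
    }
}

// Hypothetical synchronisation of the display frame to the first contact's head-mounted device.
fun syncToContactDevice(frame: String) = println("Syncing frame to second head-mounted device: $frame")

fun main() {
    val display = InformationDisplay(firstSpaceArea = "first space area")
    val frame = display.render("first information: vacation photo", HandState(0.4f, 0.7f, "point"))
    syncToContactDevice(frame)
}
```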
Optionally, the headset further comprises: the second receiving module is used for receiving a third input of the user under the condition that second information sent by the first contact person is received; a second call module for establishing a call connection with the first contact in response to the third input; and the third display module is used for displaying the information content of the second information in the second space area.
Optionally, the headset further comprises: the third receiving module is used for receiving fourth input of a user to the target surfaces of the T second virtual sub-objects; the first sending module specifically includes: and the second sending unit is used for sending the first information to the first contact person and T second contact persons associated with the first surface of the first virtual sub-object, the T second contact persons are associated with the target surfaces of the T second virtual sub-objects, and T is a positive integer.
Optionally, the first face of the first virtual sub-object and the virtual screen face in the same direction; the head-mounted device further comprises: a fourth receiving module, configured to receive a fifth input from a user to the first virtual sub-object; a rotation module for rotating S virtual sub-objects such that second faces of the S virtual sub-objects face the user in response to the fifth input; wherein S is a positive integer, and S is less than or equal to N.
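A minimal sketch of this rotation step is given below; the 90-degree yaw and the face naming are assumptions, the point being only that S of the N sub-objects are rotated so that their second faces face the user.

```kotlin
// Sketch of the fifth-input rotation: S of the N virtual sub-objects are rotated so that
// their second faces, rather than their first faces, face the user.
data class SubObject(val id: Int, var facingUser: String, var yawDegrees: Float = 0f)

fun rotateToSecondFace(subObjects: List<SubObject>, s: Int): List<SubObject> {
    require(s in 1..subObjects.size) { "S must be a positive integer no greater than N" }
    subObjects.take(s).forEach {
        it.yawDegrees = (it.yawDegrees + 90f) % 360f   // turn the second face toward the user
        it.facingUser = "second face"
    }
    return subObjects
}

fun main() {
    val n = 4
    val cube = List(n) { SubObject(id = it, facingUser = "first face") }
    rotateToSecondFace(cube, s = 2).forEach(::println)
}
```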
Optionally, the head-mounted device comprises a camera; the head-mounted device further comprises: the acquisition module is used for acquiring images acquired by the camera; and the fourth display module is used for displaying a virtual object in a first area of a virtual screen under the condition that the image comprises a target object, wherein the first area is an area corresponding to the area where the target object is located, and the virtual object comprises the first virtual object and the second virtual object.
Optionally, the second area of the virtual screen includes a second identifier; the head-mounted device further comprises: a fifth receiving module, configured to receive a sixth input from a user on the second identifier and the third spatial region; and the fifth display module is used for responding to the sixth input, displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where a target object is located.
Optionally, the N virtual sub-objects are separated by a separation identifier.
The headset device provided by the embodiment of the invention can realize each process realized by the headset device in the embodiment of the method, and in order to avoid repetition, the description is omitted here.
In the embodiment of the invention, the head-mounted device receives a first input of a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; responsive to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer, so that information can be shared rapidly, and the operation is simple and convenient.
Fig. 9 is a schematic diagram of a hardware structure of a head-mounted device implementing various embodiments of the present invention. As shown in fig. 9, the head-mounted device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. It will be appreciated by those skilled in the art that the structure shown in fig. 9 does not constitute a limitation of the head-mounted device, and that the head-mounted device may include more or fewer components than shown, combine some components, or adopt a different arrangement of components. In embodiments of the invention, the head-mounted device includes, but is not limited to, VR glasses, AR glasses, MR glasses, a VR helmet, an AR helmet, an MR helmet, or the like.
Wherein, the user input unit 907 is configured to receive a first input of a first face of a first virtual sub-object of a first virtual object displayed on the virtual screen by a user; a processor 910 configured to send, in response to the first input, first information to a first contact associated with a first face of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
The embodiment of the invention provides a head-mounted device, which can be used for receiving a first input of a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen; responsive to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object; the first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, and N is a positive integer, so that information can be shared rapidly, and the operation is simple and convenient.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and transmitting signals during information transmission and reception or during a call; specifically, downlink data from a base station is received and then processed by the processor 910, and uplink data is transmitted to the base station. Typically, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 may also communicate with networks and other devices via a wireless communication system.
The head-mounted device provides wireless broadband internet access to the user via the network module 902, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may also provide audio output (e.g., call signal reception sound, message reception sound, etc.) related to a specific function performed by the head-mounted device 900. The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive an audio or video signal. The input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, and the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processor 9041 may be stored in the memory 909 (or other storage medium) or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 may receive sound and may be capable of processing such sound into audio data. In the case of a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 901 and output.
The head-mounted device 900 also includes at least one sensor 905, such as a gesture sensor, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 9061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 9061 and/or the backlight when the head-mounted device 900 is moved close to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the head-mounted device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and vibration-recognition-related functions (such as a pedometer and tapping), and the like; the sensor 905 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 906 is used to display information input by a user or information provided to the user. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The display unit 906 may also include a hologram device or a projector (not shown in the drawing); the hologram device may form a three-dimensional (3D) image (hologram) in the air by using light interference, and the projector may display an image by projecting light onto a screen. The screen may be located inside or outside the head-mounted device.
The user input unit 907 is operable to receive input numeric or character information, and to generate key signal inputs related to user settings and function controls of the head-mounted device. In particular, the user input unit 907 includes a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (such as operations of the user on touch panel 9071 or thereabout using any suitable object or accessory such as a finger, stylus, or the like). The touch panel 9071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 910, and receives and executes commands sent by the processor 910. In addition, the touch panel 9071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 907 may also include other input devices 9072 in addition to the touch panel 9071. In particular, other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 9071 may be overlaid on the display panel 9061, and when the touch panel 9071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 910 to determine a type of touch event, and then the processor 910 provides a corresponding visual output on the display panel 9061 according to the type of touch event. Although in fig. 9, the touch panel 9071 and the display panel 9061 are two separate components to implement the input and output functions of the head-mounted device, in some embodiments, the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the head-mounted device, which is not limited herein.
The interface unit 908 is an interface to which an external device is connected with the head-mounted apparatus 900. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the headset 900 or may be used to transmit data between the headset 900 and an external device.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the head-mounted device (such as audio data, a phonebook, etc.), and the like. In addition, the memory 909 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Processor 910 is a control center of the head-mounted device, connects the various parts of the entire head-mounted device using various interfaces and wires, and performs various functions and processes of the head-mounted device by running or executing software programs and/or modules stored in memory 909, and invoking data stored in memory 909, thereby performing overall monitoring of the head-mounted device. Processor 910 may include one or more processing units; alternatively, the processor 910 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 910, and that the processor 910 may detect a gesture of a user and determine a control command corresponding to the gesture in accordance with an embodiment of the present invention.
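Purely as an illustration of this gesture-to-command mapping (the actual gesture set and commands are not specified here), a minimal sketch might look as follows.

```kotlin
// Small sketch of the gesture handling mentioned for processor 910: a detected gesture is
// mapped to a control command. The gesture set and command names are assumptions.
enum class DetectedGesture { PINCH, DRAG, HOLD }
enum class ControlCommand { SELECT_FACE, MOVE_IDENTIFIER, SEND_INFORMATION }

fun commandFor(gesture: DetectedGesture): ControlCommand = when (gesture) {
    DetectedGesture.PINCH -> ControlCommand.SELECT_FACE
    DetectedGesture.DRAG -> ControlCommand.MOVE_IDENTIFIER
    DetectedGesture.HOLD -> ControlCommand.SEND_INFORMATION
}

fun main() = DetectedGesture.values().forEach { println("$it -> ${commandFor(it)}") }
```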
The head-mounted device 900 may also include a power supply 911 (e.g., a battery) for powering the various components, and alternatively, the power supply 911 may be logically connected to the processor 910 by a power management system, thereby performing functions such as managing charging, discharging, and power consumption by the power management system.
In addition, the head-mounted device 900 includes some functional modules, which are not shown, and are not described herein.
Optionally, the embodiment of the present invention further provides a head-mounted device, which includes a processor 910, a memory 909, and a computer program stored in the memory 909 and capable of running on the processor 910, where the computer program when executed by the processor 910 implements the respective processes of the above embodiment of the information sharing method, and the same technical effects can be achieved, so that repetition is avoided and redundant description is omitted here.
Alternatively, in the embodiment of the present invention, the head-mounted device in the above embodiment may be an AR device. Specifically, when the head-mounted device in the above embodiment is an AR device, the AR device may include all or part of the functional modules in the above head-mounted device. Of course, the AR device may also include functional modules not included in the head-mounted device described above.
It may be appreciated that in the embodiment of the present invention, when the head-mounted device in the above embodiment is an AR device, the head-mounted device may be a head-mounted device integrated with AR technology. The AR technology refers to a technology for combining a real scene and a virtual scene. The visual function of the human can be restored by adopting the AR technology, so that the human can experience the sense of combining the real scene and the virtual scene through the AR technology, and further, the human can better experience the sense of being in the scene.
Taking AR glasses as an example of the AR device, when a user wears the AR glasses, the scene viewed by the user is generated by processing through AR technology, that is, a virtual scene can be displayed in a superimposed manner on the real scene through the AR technology. When the user operates on the content displayed by the AR glasses, the AR glasses appear to "strip away" the real scene, presenting a more realistic picture to the user. For example, when visually observing a case, a user can only observe the outside of the case, but after wearing the AR glasses, the user can directly observe the internal structure of the case through the AR glasses.
The AR device can comprise a camera, so that the AR device can display and interact with a virtual picture on the basis of the picture shot by the camera. For example, in the embodiment of the present invention, the AR device may synchronize virtual picture information generated when a user uses the AR device to perform entertainment activities to a display screen of another AR device, so that virtual picture sharing between AR devices can be achieved.
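A minimal sketch of such virtual-picture sharing between AR devices is given below; the callback-based transport and the frame representation are assumptions, since the actual synchronisation mechanism is not specified in this disclosure.

```kotlin
// Minimal sketch of virtual-picture sharing: frames rendered on one AR device are pushed
// to the display of another connected AR device.
class ArDisplay(val deviceName: String) {
    fun show(frame: String) = println("$deviceName displays: $frame")
}

class ArDevice(private val display: ArDisplay, private val peers: MutableList<ArDisplay> = mutableListOf()) {
    fun connect(peer: ArDisplay) = peers.add(peer)

    fun renderEntertainmentFrame(frame: String) {
        display.show(frame)                 // local virtual picture over the camera view
        peers.forEach { it.show(frame) }    // synchronise the same picture to connected AR devices
    }
}

fun main() {
    val glassesA = ArDevice(ArDisplay("AR glasses A"))
    glassesA.connect(ArDisplay("AR glasses B"))
    glassesA.renderEntertainmentFrame("virtual game scene #1")
}
```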
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the processes of the above-described information sharing method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here. The computer readable storage medium is, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a head-mounted device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (24)

1. An information sharing method applied to a head-mounted device is characterized by comprising the following steps:
Receiving a first input of a user to a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen;
responsive to the first input, sending first information to a first contact associated with a first face of the first virtual sub-object;
The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, the first surface of the first virtual sub-object faces the user, and N is a positive integer;
The virtual screen comprises a second virtual object;
Before receiving the first input of the user to the first face of the first virtual sub-object of the first virtual object displayed on the virtual screen, the method further includes:
displaying M marks on the second virtual object, wherein each mark indicates different information;
the M identifications comprise first identifications, the first identifications indicate the first information, and M is a positive integer;
after the first information is sent to the first contact associated with the first face of the first virtual sub-object, the method further includes:
establishing call connection with the first contact person, and displaying the information content of the first information in a first space area on a virtual screen;
the head-mounted device is a first head-mounted device; the first space area on the virtual screen displays the information content of the first information, including:
displaying information content of the first information in a first space area, displaying a virtual identifier, and synchronizing the display frames of the first information and the virtual identifier to a virtual screen of a second head-mounted device of the first contact, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
2. The method of claim 1, wherein the second virtual object includes at least one face, and the identifiers displayed by different faces are used to indicate different types of information.
3. The method of claim 1, wherein the first input is to display the first identification to an area where the first face of the first virtual sub-object is located.
4. A method according to claim 3, wherein receiving a first input by a user of a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen comprises:
Receiving a first sub-input of a user for the first identifier and a second sub-input of a first face of the first virtual sub-object, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first face of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture;
Or receiving a first input that a user drags the first identifier to the first face of the first virtual sub-object.
5. A method according to claim 3, wherein receiving a first input by a user of a first face of a first virtual sub-object of a first virtual object displayed on a virtual screen comprises:
Receiving a third sub-input of the user to the first identifier, wherein the third sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first face of the first virtual sub-object is located, and the third sub-input comprises a third gesture;
The sending the first information to the first contact associated with the first face of the first virtual sub-object includes:
under the condition that a first preset condition is met, first information is sent to a first contact person associated with a first face of the first virtual sub-object;
Wherein, the meeting the first preset condition includes: and the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time period, or receives a second input of the user, wherein the second input comprises a fourth gesture.
6. The method according to claim 1, wherein the method further comprises:
receiving a third input of a user under the condition that second information sent by the first contact person is received;
and responding to the third input, establishing a call connection with the first contact person, and displaying the information content of the second information in a second space area.
7. The method according to claim 1, wherein the method further comprises:
receiving fourth input of a user to the target surfaces of the T second virtual sub-objects;
The sending the first information to the first contact associated with the first face of the first virtual sub-object includes:
And sending the first information to a first contact person and T second contact persons associated with the first surface of the first virtual sub-object, wherein the T second contact persons are associated with the target surfaces of the T second virtual sub-objects, and T is a positive integer.
8. The method of claim 1, wherein the first face of the first virtual sub-object is oriented the same as the virtual screen;
the method further comprises the steps of:
receiving a fifth input of a user to the first virtual sub-object;
In response to the fifth input, rotating S virtual sub-objects such that second faces of the S virtual sub-objects face the user;
Wherein S is a positive integer, and S is less than or equal to N.
9. The method of claim 1, wherein the headset comprises a camera;
Before receiving the first input of the user to the first face of the first virtual sub-object of the first virtual object displayed on the virtual screen, the method further includes:
acquiring an image acquired by a camera;
And displaying a virtual object in a first area of a virtual screen under the condition that the image comprises a target object, wherein the first area is an area corresponding to an area where the target object is located, and the virtual object comprises the first virtual object and the second virtual object.
10. The method of claim 9, wherein the second area of the virtual screen comprises a second identifier;
Before the image acquired by the camera is acquired, the method further comprises the following steps:
Receiving a sixth input of a user to the second identifier and the third spatial region;
And responding to the sixth input, displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where a target object is located.
11. The method of claim 1, wherein the N virtual sub-objects are separated by a separation identifier.
12. A head-mounted device, comprising:
The first receiving module is used for receiving a first input of a first face of a first virtual sub-object of a first virtual object displayed on the virtual screen by a user;
a first sending module, configured to send first information to a first contact associated with a first face of the first virtual sub-object in response to the first input;
The first virtual object is a three-dimensional virtual object, the first virtual object comprises N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, the virtual sub-object comprises at least one surface, different surfaces are associated with different contacts, the first surface of the first virtual sub-object faces the user, and N is a positive integer;
The virtual screen comprises a second virtual object;
The head-mounted device further comprises:
the first display module is used for displaying M marks on the second virtual object, wherein each mark indicates different information;
the M identifications comprise first identifications, the first identifications indicate the first information, and M is a positive integer;
Further comprises:
The first call module is used for establishing call connection with the first contact person;
The second display module is used for displaying the information content of the first information in a first space area on the virtual screen;
the head-mounted device is a first head-mounted device; the second display module is specifically configured to:
displaying information content of the first information in a first space area, displaying a virtual identifier, and synchronizing the display frames of the first information and the virtual identifier to a virtual screen of a second head-mounted device of the first contact, wherein the virtual identifier is used for indicating the operation position and gesture information of the hand of the user on an information content display interface of the first information.
13. The head-mounted device of claim 12, wherein the second virtual object includes at least one face, and the identifiers displayed by different faces are used to indicate different types of information.
14. The head-mounted device of claim 12, wherein the first input is to display the first identification to an area where the first face of the first virtual sub-object is located.
15. The head-mounted device according to claim 14, wherein the first receiving module is specifically configured to:
Receiving a first sub-input of a user for the first identifier and a second sub-input of a first face of the first virtual sub-object, wherein the first sub-input is used for selecting the first identifier, and the second sub-input is used for selecting the first face of the first virtual sub-object; the first sub-input comprises a first gesture and the second sub-input comprises a second gesture;
Or receiving a first input that a user drags the first identifier to the first face of the first virtual sub-object.
16. The head-mounted device according to claim 14, wherein the first receiving module is specifically configured to:
Receiving a third sub-input of the user to the first identifier, wherein the third sub-input is used for controlling the first identifier to move to a position where the hand is located along with the hand of the user, and the position where the hand is located is an area where the first face of the first virtual sub-object is located, and the third sub-input comprises a third gesture;
the first sending module includes:
the first sending unit is used for sending first information to a first contact person associated with the first face of the first virtual sub-object under the condition that a first preset condition is met;
Wherein, the meeting the first preset condition includes: and the hand of the user stays in the area where the first face of the first virtual sub-object is located for a first preset time period, or receives a second input of the user, wherein the second input comprises a fourth gesture.
17. The head-mounted device of claim 12, further comprising:
The second receiving module is used for receiving a third input of the user under the condition that second information sent by the first contact person is received;
a second call module for establishing a call connection with the first contact in response to the third input;
and the third display module is used for displaying the information content of the second information in the second space area.
18. The head-mounted device of claim 12, further comprising:
The third receiving module is used for receiving fourth input of a user to the target surfaces of the T second virtual sub-objects;
The first sending module specifically includes:
And the second sending unit is used for sending the first information to the first contact person and T second contact persons associated with the first surface of the first virtual sub-object, the T second contact persons are associated with the target surfaces of the T second virtual sub-objects, and T is a positive integer.
19. The head-mounted device of claim 12, wherein the first face of the first virtual sub-object is oriented the same as the virtual screen;
The head-mounted device further comprises:
A fourth receiving module, configured to receive a fifth input from a user to the first virtual sub-object;
a rotation module for rotating S virtual sub-objects such that second faces of the S virtual sub-objects face the user in response to the fifth input;
Wherein S is a positive integer, and S is less than or equal to N.
20. The head-mounted device of claim 12, wherein the head-mounted device comprises a camera;
The head-mounted device further comprises:
the acquisition module is used for acquiring images acquired by the camera;
And the fourth display module is used for displaying a virtual object in a first area of a virtual screen under the condition that the image comprises a target object, wherein the first area is an area corresponding to the area where the target object is located, and the virtual object comprises the first virtual object and the second virtual object.
21. The head-mounted device of claim 20, wherein the second area of the virtual screen comprises a second identifier;
The head-mounted device further comprises:
A fifth receiving module, configured to receive a sixth input from a user on the second identifier and the third spatial region;
And the fifth display module is used for responding to the sixth input, displaying a virtual object in a third area corresponding to the third space area on the virtual screen, wherein the third space area is an area where a target object is located.
22. The head-mounted device of claim 12, wherein the N virtual sub-objects are separated by a separation identifier.
23. A head-mounted device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor performs the steps of the information sharing method according to any one of claims 1 to 11.
24. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the information sharing method according to any one of claims 1 to 11.
CN202010031689.XA 2020-01-13 2020-01-13 Information sharing method, head-mounted device and medium Active CN111258482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031689.XA CN111258482B (en) 2020-01-13 2020-01-13 Information sharing method, head-mounted device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010031689.XA CN111258482B (en) 2020-01-13 2020-01-13 Information sharing method, head-mounted device and medium

Publications (2)

Publication Number Publication Date
CN111258482A CN111258482A (en) 2020-06-09
CN111258482B true CN111258482B (en) 2024-05-10

Family

ID=70946853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031689.XA Active CN111258482B (en) 2020-01-13 2020-01-13 Information sharing method, head-mounted device and medium

Country Status (1)

Country Link
CN (1) CN111258482B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113347526B (en) * 2021-07-08 2022-11-22 歌尔科技有限公司 Sound effect adjusting method and device of earphone and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102325215A (en) * 2011-05-31 2012-01-18 宇龙计算机通信科技(深圳)有限公司 Message sending method and mobile terminal
KR20130069187A (en) * 2011-12-16 2013-06-26 엘지전자 주식회사 Mobile terminal and operating method thereof
CN104168351A (en) * 2013-05-20 2014-11-26 北京三星通信技术研究有限公司 Method and device for processing contact information
CN107357416A (en) * 2016-12-30 2017-11-17 长春市睿鑫博冠科技发展有限公司 A kind of human-computer interaction device and exchange method
CN108604119A (en) * 2016-05-05 2018-09-28 谷歌有限责任公司 Virtual item in enhancing and/or reality environment it is shared
CN108834083A (en) * 2018-05-22 2018-11-16 朱小军 A kind of multi-function telephones communication system
CN109189288A (en) * 2017-09-05 2019-01-11 南京知行新能源汽车技术开发有限公司 Data processing system, computer implemented method and non-transitory machine-readable media
CN109471742A (en) * 2018-11-07 2019-03-15 Oppo广东移动通信有限公司 Information processing method, device, electronic equipment and readable storage medium storing program for executing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101555055B1 (en) * 2008-10-10 2015-09-22 엘지전자 주식회사 Mobile terminal and display method thereof
US8132120B2 (en) * 2008-12-29 2012-03-06 Verizon Patent And Licensing Inc. Interface cube for mobile device
US20130263059A1 (en) * 2012-03-28 2013-10-03 Innovative Icroms, S.L. Method and system for managing and displaying mutlimedia contents
US20150379770A1 (en) * 2014-06-27 2015-12-31 David C. Haley, JR. Digital action in response to object interaction

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102325215A (en) * 2011-05-31 2012-01-18 宇龙计算机通信科技(深圳)有限公司 Message sending method and mobile terminal
KR20130069187A (en) * 2011-12-16 2013-06-26 엘지전자 주식회사 Mobile terminal and operating method thereof
CN104168351A (en) * 2013-05-20 2014-11-26 北京三星通信技术研究有限公司 Method and device for processing contact information
CN108604119A (en) * 2016-05-05 2018-09-28 谷歌有限责任公司 Virtual item in enhancing and/or reality environment it is shared
CN107357416A (en) * 2016-12-30 2017-11-17 长春市睿鑫博冠科技发展有限公司 A kind of human-computer interaction device and exchange method
CN109189288A (en) * 2017-09-05 2019-01-11 南京知行新能源汽车技术开发有限公司 Data processing system, computer implemented method and non-transitory machine-readable media
CN108834083A (en) * 2018-05-22 2018-11-16 朱小军 A kind of multi-function telephones communication system
CN109471742A (en) * 2018-11-07 2019-03-15 Oppo广东移动通信有限公司 Information processing method, device, electronic equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN111258482A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US10356398B2 (en) Method for capturing virtual space and electronic device using the same
US11995774B2 (en) Augmented reality experiences using speech and text captions
US10733781B2 (en) Virtual reality
CN111258420B (en) Information interaction method, head-mounted device and medium
US20160004320A1 (en) Tracking display system, tracking display program, tracking display method, wearable device using these, tracking display program for wearable device, and manipulation method for wearable device
CN102779000B (en) User interaction system and method
EP3676745B1 (en) Privacy screen for computer simulated reality
KR20220032059A (en) Touch free interface for augmented reality systems
US11573632B2 (en) Eyewear including shared object manipulation AR experiences
US11195341B1 (en) Augmented reality eyewear with 3D costumes
CN112817453A (en) Virtual reality equipment and sight following method of object in virtual reality scene
US11803239B2 (en) Eyewear with shared gaze-responsive viewing
CN111352505B (en) Operation control method, head-mounted device, and medium
CN111240483B (en) Operation control method, head-mounted device, and medium
CN111258482B (en) Information sharing method, head-mounted device and medium
KR20180113115A (en) Mobile terminal and method for controlling the same
CN110717993A (en) Interaction method, system and medium of split type AR glasses system
CN111093033A (en) Information processing method and device
CN111246014B (en) Communication method, head-mounted device, and medium
CN111143799A (en) Unlocking method and electronic equipment
CN111104656A (en) Unlocking method and electronic equipment
CN111208903B (en) Information transmission method, wearable device and medium
WO2024064230A1 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant