CN111246014B - Communication method, head-mounted device, and medium - Google Patents

Communication method, head-mounted device, and medium

Info

Publication number
CN111246014B
Authority
CN
China
Prior art keywords
virtual
input
virtual sub
contact
target
Prior art date
Legal status
Active
Application number
CN202010031847.1A
Other languages
Chinese (zh)
Other versions
CN111246014A
Inventor
陈喆
杨其豪
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010031847.1A
Publication of CN111246014A
Application granted
Publication of CN111246014B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72469: User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons

Abstract

The embodiment of the invention discloses a communication method, a head-mounted device, and a medium, relates to the technical field of communications, and can solve the problems in the prior art that the call process is complicated and the operation is inconvenient. The method comprises the following steps: receiving a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sending a call request message to a first contact associated with the first face of the first virtual sub-object; and establishing a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.

Description

Communication method, head-mounted device, and medium
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a communication method, a head-mounted device and a medium.
Background
In the prior art, electronic devices offer more and more functions and carry a very large number of applications. To call a contact, the user must reach the call page through finger operations on a screen, such as tapping and swiping, before the call can be completed; the process is complicated and the operation is inconvenient.
Disclosure of Invention
The embodiment of the invention provides a call method, which can solve the problems in the prior art that the call process is complicated and the operation is inconvenient.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a call method, including:
receiving a first input of a user to a first side of a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, sending a call request message to a first contact associated with a first face of the first virtual sub-object;
establishing a call connection with a target contact, wherein the target contact comprises the first contact;
the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a head-mounted device, including:
the first receiving module is used for receiving a first input of a user to a first surface of a first virtual sub-object of a virtual object displayed on a virtual screen;
a first sending module, configured to send, in response to the first input, a call request message to a first contact associated with a first side of the first virtual sub-object;
the first call module is used for establishing call connection with a target contact person, and the target contact person comprises the first contact person;
the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a head-mounted device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the call method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the call method according to the first aspect.
In an embodiment of the present invention, a head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sends a call request message to a first contact associated with the first face of the first virtual sub-object; and establishes a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.
Drawings
Fig. 1 is a flowchart of a call method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of establishing a call connection with a first contact in a call method according to an embodiment of the present invention;
fig. 3(a) is a schematic diagram of a rotating first virtual sub-object of a call method according to an embodiment of the present invention;
fig. 3(b) is a schematic diagram illustrating that each face of a virtual sub-object of the call method represents different types of contacts according to the embodiment of the present invention;
fig. 4 is a schematic diagram of establishing a multi-party call connection with a first contact and a second contact in a call method according to an embodiment of the present invention;
fig. 5(a) is a schematic diagram of setting a display position of a virtual object in a call method according to an embodiment of the present invention;
fig. 5(b) is a schematic diagram of a virtual object displayed in a target area in the call method according to the embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a call connection established between a first contact and a fourth contact according to the call method provided in the embodiment of the present invention;
fig. 7(a) is one of the schematic diagrams of switching the viewing angle of a head-mounted device in a call method according to an embodiment of the present invention;
fig. 7(b) is a second schematic diagram of switching the viewing angle of a head-mounted device in a call method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
fig. 9 is a hardware schematic diagram of a head-mounted device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the invention provides a communication method, which comprises: receiving a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sending a call request message to a first contact associated with the first face of the first virtual sub-object; and establishing a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. This can solve the problems in the prior art that the call process is complicated and the operation is inconvenient.
Virtual Reality (VR) technology is a computer simulation technology for creating and experiencing a virtual world. It uses a computer to generate a simulated environment, a systematic simulation of interactive three-dimensional dynamic views and physical behaviors with multi-source information fusion, into which the user is immersed.
Augmented Reality (AR) technology is a technology that integrates real world information and virtual world information, and virtual information content is superimposed in the real world through various sensing devices, so that real world content and virtual information content can be simultaneously embodied in the same picture and space, and natural interaction between a user and a virtual environment is realized.
AR glasses move the imaging system away from the lens through optical imaging elements such as optical waveguides, so that the imaging system does not block the external line of sight. An optical waveguide is a high-transmittance medium, similar to an optical fiber, that guides light waves propagating within it; light output by the imaging system and light reflected from the real scene are combined and transmitted to the human eye. Hand images captured by a camera are processed and analyzed with computer vision algorithms, so that hand tracking and recognition can be realized.
Mixed Reality (MR) technology combines virtual information with a view of the real world, or adds a virtual representation of a real-world object to a virtual environment.
The head-mounted device in the embodiment of the invention can be VR glasses, AR glasses, MR glasses, or VR helmet, AR helmet, MR helmet, etc.
According to the related art, various head-mounted devices may sense a direction of acceleration, angular acceleration, or inclination, and display a screen corresponding to the sensed information. The head mounted device may change and display the screen based on the user's movement.
It should be noted that, in the embodiment of the present invention, the first head-mounted device and the second head-mounted device may be the same head-mounted device (e.g., both AR glasses), or may be different head-mounted devices (e.g., the first head-mounted device is AR glasses, and the second head-mounted device is a VR helmet), which is not limited in this embodiment of the present invention.
The virtual screen in the embodiment of the invention is a virtual reality screen, an augmented reality screen or a mixed reality screen of the head-mounted equipment.
The virtual screen in the embodiment of the present invention may be any carrier that can be used to display content projected by a projection device when content is displayed by using AR technology. The projection device may be a projection device using AR technology, such as a head-mounted device or an AR device in the embodiment of the present invention.
When displaying content on the virtual screen by using the AR technology, the projection device may project a virtual scene acquired by (or internally integrated with) the projection device, or a virtual scene and a real scene onto the virtual screen, so that the virtual screen may display the content, thereby showing an effect of superimposing the real scene and the virtual scene to a user.
Depending on the application scenario of the AR technology, the virtual screen may generally be any possible carrier, such as the display screen of an electronic device (e.g., a mobile phone), a lens of AR glasses, the windshield of a car, or the wall of a room.
The following describes an exemplary process of displaying content on a virtual screen by using AR technology, by taking the virtual screen as a display screen of an electronic device, a lens of AR glasses, and a windshield of an automobile as examples.
In one example, when the virtual screen is the display screen of an electronic device, the projection device may be the electronic device itself. The electronic device can capture the real scene in its area through its camera and display that real scene on its display screen. The electronic device can then project the virtual scene it has acquired (or internally integrated) onto the display screen, so that the virtual scene is displayed superimposed on the real scene, and the user sees the superimposed effect of the real scene and the virtual scene through the display screen.
In another example, when the virtual screen is a lens of AR glasses, the projection device may be the AR glasses. When the user wears the glasses, the user can see the real scene in the area where the user is located through the lenses of the AR glasses, and the AR glasses can project the acquired (or internally integrated) virtual scene onto the lenses of the AR glasses, so that the user can see the display effect of the real scene and the virtual scene after superposition through the lenses of the AR glasses.
In yet another example, when the virtual screen is a windshield of an automobile, the projection device may be any electronic device. When the user is located in the automobile, the user can see the real scene in the area where the user is located through the windshield of the automobile, and the projection device can project the acquired (or internally integrated) virtual scene onto the windshield of the automobile, so that the user can see the display effect of the real scene and the virtual scene after superposition through the windshield of the automobile.
Of course, in the embodiment of the present invention, the specific form of the virtual screen may not be limited, for example, it may be a non-carrier real space. In this case, when the user is located in the real space, the user can directly see the real scene in the real space, and the projection device can project the acquired (or internally integrated) virtual scene into the real space, so that the user can see the display effect of the real scene and the virtual scene after superposition in the real space.
The virtual object in the embodiment of the present invention is an object in virtual information, and optionally, the virtual object is content displayed on a screen or a lens of the head-mounted device, which corresponds to the surrounding environment the user is viewing, but is not present as a physical embodiment outside the display.
The virtual object may be an AR object. It should be noted that the AR object may be understood as: the AR device analyzes the real object to obtain feature information of the real object (e.g., type information of the real object, appearance information of the real object (e.g., structure, color, shape, etc.), position information of the real object in space, etc.), and constructs an AR model in the AR device according to the feature information.
Optionally, in this embodiment of the present invention, the target virtual object may specifically be a virtual image, a virtual pattern, a virtual character, a virtual picture, or the like.
The head-mounted device in the embodiment of the invention can be a head-mounted device with an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The execution subject of the communication method provided by the embodiment of the present invention may be the above-mentioned head-mounted device, or a functional module and/or functional entity in the head-mounted device capable of implementing the method; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto. The following takes a head-mounted device as an example to describe the call method provided by the embodiment of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a call method applied to a head-mounted device, and the call method may include steps 101 to 103 described below.
Step 101, receiving a first input of a user to a first side of a first virtual sub-object of a virtual object displayed on a virtual screen.
Optionally, the first input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The first input may also be a first operation. When the first input is executed, the first input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Optionally, the head-mounted device includes a camera, and the camera is configured to collect a hand image of the user and obtain a gesture action of the user through a gesture recognition technology.
Step 102, in response to the first input, sending a call request message to a first contact associated with the first face of the first virtual sub-object.
Step 103, establishing a call connection with a target contact, wherein the target contact comprises the first contact;
the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
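To make the flow of steps 101 to 103 concrete, the following is a minimal Kotlin sketch. All types and names (Contact, Face, CallSession, and so on) are hypothetical illustrations, not part of the patent.

```kotlin
// Hypothetical sketch of steps 101-103; all names are illustrative assumptions.
data class Contact(val name: String, val number: String)

// A face of a virtual sub-object; different faces are associated with different contacts.
data class Face(val id: Int, val contact: Contact?)

// One of the N virtual sub-objects of the three-dimensional virtual object.
data class VirtualSubObject(val faces: List<Face>)

class CallSession {
    val participants = mutableListOf<Contact>()
    fun connect(contact: Contact) {
        participants += contact
        println("Call connection established with ${contact.name}")
    }
}

fun sendCallRequest(contact: Contact) = println("Call request sent to ${contact.number}")

// Step 101: the first input selects a face; step 102: send the call request;
// step 103: establish the call connection (assuming the request is accepted).
fun onFirstInput(face: Face, session: CallSession) {
    val contact = face.contact ?: return   // face not associated with a contact
    sendCallRequest(contact)
    session.connect(contact)
}

fun main() {
    val first = Contact("first contact", "123-4567")
    val subObject = VirtualSubObject(listOf(Face(id = 0, contact = first)))
    onFirstInput(subObject.faces[0], CallSession())
}
```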
Optionally, the camera may acquire image information and depth information, and a three-dimensional reconstruction algorithm is used to model a space to obtain three-dimensional space information, where the virtual object is a three-dimensional model designed by a three-dimensional modeling method.
In an embodiment of the present invention, a head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sends a call request message to a first contact associated with the first face of the first virtual sub-object; and establishes a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.
Optionally, the N virtual sub-objects are separated by separation identifiers.
Optionally, a separation identifier is a non-transparent line, where non-transparent means that the transparency of the line is less than 100%; alternatively, the separation identifiers are gaps of a certain width, or the like.
Optionally, information associated with the first contact is displayed on the first side.
Optionally, step 103 specifically includes:
Step 1031, establishing a single-party call connection with the first contact.
Illustratively, as shown in fig. 2, the virtual object 201 includes N virtual sub-objects, each virtual sub-object includes at least one face, and different faces are associated with different contacts. A first input of the user on the first face 20111 of the first virtual sub-object 2011 of the virtual object 201 displayed on the virtual screen is received. For example, in a case where the first face of the first virtual sub-object faces the user, the user may pull the first virtual sub-object out with a gesture; a call request message is then sent to the first contact associated with the first face of the first virtual sub-object, and a single-party call connection with the first contact is established, that is, a call connection between the user and the first contact.
Optionally, the head-mounted device recognizes a gesture operation of the user in a first direction on the first face of the first virtual sub-object and sends a call request message to the first contact. Illustratively, in a case where the first face of the first virtual sub-object faces the user, when the user's finger points at the first virtual sub-object and the hand moves backward, backward being the direction away from the first virtual sub-object, the call request message is sent to the first contact.
Optionally, the head-mounted device recognizes a gesture operation of the user in a first direction on the first face of the first virtual sub-object, the first virtual sub-object moves in the first direction, that is, the first virtual sub-object is pulled out, and the call request message is sent to the first contact.
Optionally, the distance that the first virtual sub-object moves in the first direction is associated with the distance that the user's hand moves in the first direction.
Optionally, when the distance that the first virtual sub-object moves in the first direction is greater than or equal to a preset distance, a call request message is sent to the first contact.
In the embodiment of the invention, the distance that the first virtual sub-object moves in the first direction is associated with the distance that the user's hand moves in the first direction, and the call request message is sent to the first contact only when the distance that the first virtual sub-object has moved in the first direction is greater than or equal to the preset distance; this prevents false triggering by the user.
In the embodiment of the invention, the user can send the call request message to the first contact through some simple gestures, and the operation is simple and quick.
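As an illustration of the pull-out gesture with the preset-distance guard described above, here is a minimal Kotlin sketch; the threshold value and all names are assumptions made for illustration.

```kotlin
// Hypothetical pull-out gesture tracker; names and threshold are assumptions.
const val PRESET_DISTANCE = 0.15f  // meters the sub-object must travel before dialing

class PullOutTracker(private val onTriggered: () -> Unit) {
    private var pulledDistance = 0f
    private var triggered = false

    // Called each frame with how far the hand moved along the pull direction;
    // the sub-object's displacement follows the hand's displacement.
    fun onHandMoved(deltaAlongPullDirection: Float) {
        pulledDistance += deltaAlongPullDirection
        if (!triggered && pulledDistance >= PRESET_DISTANCE) {
            triggered = true   // guard against repeated triggering
            onTriggered()      // send the call request only past the threshold
        }
    }
}

fun main() {
    val tracker = PullOutTracker { println("Call request sent to the first contact") }
    tracker.onHandMoved(0.05f) // small movement: nothing happens (no false trigger)
    tracker.onHandMoved(0.12f) // cumulative 0.17 m >= threshold: request sent
}
```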
Optionally, the first side of the first virtual sub-object is oriented in the same direction as the virtual screen.
Specifically, the virtual screen faces the user, and the first side of the first virtual sub-object faces the user.
The method further comprises the following steps:
and 104, receiving a second input of the user to the first virtual sub-object.
Optionally, the second input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The second input may also be a second operation. When the second input is executed, the second input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step 105, responding to the second input, rotating the M virtual sub-objects to enable second faces of the M virtual sub-objects to face the user; wherein M is a positive integer and M is less than or equal to N.
Illustratively, as shown in fig. 3(a), each virtual sub-object has six faces, and the user can rotate any virtual sub-object up, down, left, or right through gestures. When a second input to the first virtual sub-object is received from the user, such as a gesture of rotating it to the left, the first virtual sub-object rotates to the left so that its second face 20112 faces the user.
Optionally, in response to the second input, all virtual sub-objects are rotated with their second faces facing the user.
Optionally, in response to the second input, the virtual sub-objects in the same row as the first virtual sub-object are both rotated with their second faces facing the user.
Optionally, in response to the second input, the virtual sub-objects in the same column as the first virtual sub-object are both rotated with their second faces facing the user.
Optionally, different faces of a virtual sub-object represent different categories of contacts. Illustratively, as shown in fig. 3(b), the contact category corresponding to the first face 20121 of the virtual sub-object 2012 is colleagues, the contact category corresponding to the second face 20122 is friends, and the contact category corresponding to the third face 20123 is family.
Optionally, in response to the second input, all of the virtual sub-objects are rotated with their second faces facing the user, and the contacts corresponding to the second faces of all of the virtual sub-objects belong to the same category.
In the embodiment of the invention, different faces of a virtual sub-object represent different categories of contacts. The user can rotate the virtual sub-objects through simple gestures so that different faces of the virtual sub-objects face the user, allowing the user to see contact information of different categories.
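The all/row/column rotation variants above can be sketched as follows in Kotlin; the grid layout and all names are illustrative assumptions.

```kotlin
// Hypothetical rotation of sub-objects arranged in a grid; names are assumptions.
enum class Scope { ALL, SAME_ROW, SAME_COLUMN }

// Each sub-object tracks which of its faces currently faces the user (0..5 for a cube).
class SubObject(var facingFace: Int = 0)

class VirtualObjectGrid(private val rows: Int, private val cols: Int) {
    val cells = Array(rows) { Array(cols) { SubObject() } }

    // In response to the second input on the sub-object at (row, col), rotate
    // M sub-objects so that the target face (e.g. the second face) faces the user.
    fun rotate(row: Int, col: Int, targetFace: Int, scope: Scope) {
        for (r in 0 until rows) for (c in 0 until cols) {
            val affected = when (scope) {
                Scope.ALL -> true
                Scope.SAME_ROW -> r == row
                Scope.SAME_COLUMN -> c == col
            }
            if (affected) cells[r][c].facingFace = targetFace
        }
    }
}

fun main() {
    val grid = VirtualObjectGrid(rows = 3, cols = 3)
    grid.rotate(row = 1, col = 0, targetFace = 1, scope = Scope.SAME_ROW)
    println(grid.cells.map { row -> row.map { it.facingFace } }) // only row 1 rotated
}
```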
Optionally, after step 105, further comprising:
Step 106, displaying target information on the second faces of the M virtual sub-objects, wherein the target information comprises information of M contacts associated with the second faces of the M virtual sub-objects.
Optionally, the information of a contact may include, but is not limited to: the contact's name, address, avatar, personal signature, email address, and the like.
Optionally, the method further comprises:
Step 107, receiving a third input of the user on the target face of the second virtual sub-object.
Optionally, the third input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The third input may also be a third operation. When the third input is executed, the third input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
And step 108, responding to the third input, and sending a call request message to a second contact associated with the target surface of the second virtual sub-object.
Step 103 specifically comprises:
step 1032, establishing a multi-party call connection with the first contact and the second contact.
For example, after the user has established a single-party call connection with the first contact, a third input of the user on the target face of the second virtual sub-object is received. As shown in fig. 4, in a case where the first face 20131 of the second virtual sub-object 2013 faces the user, the user pulls the second virtual sub-object out with a gesture; a call request message is then sent to the second contact associated with the first face of the second virtual sub-object, and a call connection with the second contact is established. At this point, the user has established a multi-party call connection with the first contact and the second contact.
Optionally, establishing the multi-party call connection with the first contact and the second contact may include, but is not limited to: establishing a call connection among the user, the first contact, and the second contact; or establishing one call connection between the user and the first contact and another call connection between the user and the second contact.
Illustratively, in a case where the first faces of the first virtual sub-object and the second virtual sub-object both face the user, the user pulls out the two sub-objects simultaneously through gestures; call request messages are then sent to the first contact and the second contact simultaneously, and a multi-party call connection is established with both.
Optionally, after the user has established a single-party call connection with the first contact, the second virtual sub-object is rotated so that its second face faces the user. The user then pulls the second virtual sub-object out with a gesture; the call request message is sent to the third contact associated with the second face of the second virtual sub-object, and a call connection with the third contact is established. At this point, the user has established a multi-party call connection with the first contact and the third contact.
In the embodiment of the invention, a user can establish multi-party call connection with a plurality of contacts through some simple gestures.
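To make the multi-party session management concrete, here is a minimal Kotlin sketch; the MultiPartyCall class and its behavior are assumptions for illustration, not the patent's implementation.

```kotlin
// Hypothetical multi-party call session; all names are illustrative assumptions.
class MultiPartyCall {
    private val participants = mutableSetOf<String>()

    // Pull-out gesture on a face: send the request and add the contact.
    fun add(contact: String) {
        println("Call request sent to $contact")
        participants += contact
        println("Participants: $participants")
    }

    // Push gesture on a face: end the call with that contact only.
    fun remove(contact: String) {
        participants -= contact
        println("Ended call with $contact; still connected: $participants")
    }
}

fun main() {
    val call = MultiPartyCall()
    call.add("first contact")  // pull out the first virtual sub-object
    call.add("second contact") // pull out the second virtual sub-object
}
```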
Optionally, after step 1032, the method further includes:
Step 109, receiving a fourth input of the user on the first face of the first virtual sub-object.
Optionally, the fourth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fourth input may also be a fourth operation. When the fourth input is executed, the fourth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
And step 110, responding to the fourth input, ending the call with the first contact person, and keeping the call with the second contact person.
Illustratively, receiving a fourth input from the user to the first side of the first virtual sub-object, such as the user pushing the first virtual sub-object by a gesture, ends the call with the first contact and maintains the call with the second contact.
Illustratively, the user pushes both the first virtual sub-object and the second virtual sub-object by a gesture, and then the call with the first contact and the second contact is ended.
In the embodiment of the invention, the user can end the call with part or all of the contacts in the multi-party call through some simple gestures.
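A matching Kotlin sketch of ending one leg of the multi-party call, under the same illustrative assumptions (all names hypothetical):

```kotlin
// Hypothetical sketch: the fourth input ends one leg of the multi-party call.
fun endOneLeg(participants: MutableSet<String>, toEnd: String) {
    participants.remove(toEnd)
    println("Ended call with $toEnd; remaining participants: $participants")
}

fun main() {
    val participants = mutableSetOf("first contact", "second contact")
    endOneLeg(participants, "first contact") // call with the second contact is kept
}
```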
Optionally, the head mounted device comprises a camera;
before step 101, the method further comprises:
Step 1001, acquiring an image captured by the camera.
The camera captures images of the real environment, the real environment being the environment within the user's viewing angle range.
Step 1002, in a case that the image includes the target object, displaying a virtual object in a first area of a virtual screen, where the first area is an area corresponding to an area where the target object is located.
Optionally, in a case where the image captured by the camera in real time does not include the target real object, that is, the user's line of sight has left the target real object, the virtual object is no longer displayed on the virtual screen; when the user's line of sight returns to the target real object, the virtual object is displayed on the virtual screen again.
Optionally, the first area is the same as the area where the target real object is located; or the first area is part of that area; or the first area includes that area; or the first area is adjacent to that area, for example, located in front of or above it.
Optionally, the case where the image includes the target real object means that the target real object appears in the image and the environment around the target real object is the target environment. For example, the target real object is a sofa, and the target environment is that a tea table is 0.5 m in front of the sofa, a television is 1 m in front of the tea table, and a water dispenser is 0.3 m to the left of the sofa.
In a case where the image of the real environment captured by the camera includes the target real object, the virtual object is displayed in the first area of the virtual screen. Illustratively, if the image captured by the camera includes a table, the virtual object is displayed in a first area located on the upper surface of the table or directly above it.
In the embodiment of the invention, the image captured by the camera is acquired, and the virtual object is displayed in the first area of the virtual screen in a case where the image includes the target real object, so that the virtual object is displayed whenever the user's viewing angle returns to the target area.
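A minimal Kotlin sketch of this show/hide behavior driven by whether the target real object is detected in the camera frame; the label-based detector is a placeholder assumption, not the patent's recognition algorithm.

```kotlin
// Hypothetical sketch: toggle the virtual object based on camera-frame detection.
data class Frame(val detectedLabels: Set<String>) // stand-in for a processed camera image

class VirtualObjectView {
    var visible = false
        private set
    fun show() { visible = true;  println("Virtual object displayed in the first area") }
    fun hide() { visible = false; println("Virtual object hidden") }
}

// Placeholder detector; a real system would match stored images of the target
// real object and its surroundings (assumption, not specified by the patent).
fun containsTargetObject(frame: Frame) = "table" in frame.detectedLabels

fun onCameraFrame(frame: Frame, view: VirtualObjectView) {
    if (containsTargetObject(frame)) {
        if (!view.visible) view.show() // line of sight returned to the target
    } else {
        if (view.visible) view.hide()  // line of sight left the target real object
    }
}

fun main() {
    val view = VirtualObjectView()
    onCameraFrame(Frame(setOf("table", "chair")), view) // target in view: show
    onCameraFrame(Frame(setOf("window")), view)         // target out of view: hide
}
```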
Optionally, the second area of the virtual screen comprises a target identification.
Optionally, a target identification is used to indicate the virtual object.
Before step 1001, the method further includes:
Step 1003, receiving a fifth input of the user on the target identifier and the target space region.
Optionally, the fifth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The fifth input may also be a fifth operation. When the fifth input is executed, the fifth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Step 1004, responding to the fifth input, and displaying a virtual object in a third area corresponding to the target space area on the virtual screen, wherein the target space area is an area where the target real object is located.
Optionally, the third area is the same as the target space region; or the third area is part of the target space region; or the third area includes the target space region; or the third area is adjacent to the target space region, for example, located in front of or above it.
Illustratively, as shown in fig. 5(a), the target identifier 501 is located in the second area 502 of the virtual screen. A fifth input of the user on the target identifier 501 and the target space region 503 is received, for example, dragging the target identifier to the target space region, where the target real object is a wall. As shown in fig. 5(b), the virtual object 201 is displayed in the third area, which is part of the target space region 503; the user may then continue to resize the virtual object with a finger.
The head-mounted device stores information of the virtual object whose display area has been set and whose size has been adjusted, for example, the space coordinates of the virtual object and information about its surroundings: the virtual object is on one wall, the right half of that wall includes a door, and the left side of that wall meets, at a right angle, another wall that includes a window. Image information of the surroundings of the target real object may also be stored. When the user's viewing angle returns to the area where the target real object is located, the virtual object is displayed. Further illustratively, when the user's viewing angle falls within the target space region, the camera captures images of the real environment, the captured images are compared with the previously stored images of the surroundings of the target real object, and the virtual object is displayed in the third area in a case where the position information and image information of the target real object and its surroundings match.
Optionally, the target identifier is always displayed on the virtual screen of the head-mounted device, that is, the user can see the target identifier at any time. The user may drag the target identifier to any one or more space regions; the head-mounted device records the space coordinates of the virtual object, and the user can then see the virtual object in those space regions.
In the embodiment of the invention, a fifth input of the user on the target identifier and the target space region is received, and in response to the fifth input, the virtual object is displayed in the third area corresponding to the target space region on the virtual screen. The virtual object can thus be placed in multiple space regions with simple gestures, and the user can see the virtual object in all of those regions.
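The place-and-persist behavior can be sketched as follows in Kotlin; the anchor store and the signature-based matching step are assumptions made for illustration.

```kotlin
// Hypothetical sketch: persist placed virtual objects and re-display them when
// the stored surroundings match the current view. All names are assumptions.
data class Vec3(val x: Float, val y: Float, val z: Float)
data class Anchor(val position: Vec3, val surroundingsSignature: String)

class AnchorStore {
    private val anchors = mutableListOf<Anchor>()

    // Fifth input: drag the target identifier to a space region and store it.
    fun place(position: Vec3, signature: String) {
        anchors += Anchor(position, signature)
    }

    // When the viewing angle returns, compare the current surroundings with the
    // stored signatures and return the anchors that match.
    fun visibleAnchors(currentSignature: String): List<Anchor> =
        anchors.filter { it.surroundingsSignature == currentSignature }
}

fun main() {
    val store = AnchorStore()
    store.place(Vec3(1f, 2f, 0f), "wall-with-door")   // first placement
    store.place(Vec3(0f, 1f, 3f), "wall-with-window") // a second space region
    println("Show ${store.visibleAnchors("wall-with-door").size} virtual object(s)")
}
```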
Optionally, after step 103, further comprising:
Step 111, receiving a sixth input of the user on the first face of the first virtual sub-object and the first face of the third virtual sub-object.
Optionally, the sixth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The sixth input may also be a sixth operation. When the sixth input is executed, the sixth input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, the user's finger points to the first side of the first virtual sub-object and then to the first side of the third virtual sub-object.
Illustratively, with the first faces of the first virtual sub-object and the third virtual sub-object facing the user, the user makes a gesture of pulling the first virtual sub-object, such as pointing a finger at the first virtual sub-object and moving the finger backward, backward being the direction away from the first virtual sub-object; the first virtual sub-object is pulled out and follows the user's finger toward the third virtual sub-object.
Step 112, responding to the sixth input, and establishing a call connection between the first contact and a fourth contact;
wherein the first face of the third virtual sub-object is associated with the fourth contact.
Illustratively, as shown in fig. 6(a), the virtual screen further includes a target virtual object 801. In a case where the first face 20111 of the first virtual sub-object 2011 faces the user, the user pulls the first virtual sub-object out and places it within the target virtual object 801 with a gesture. Then, in a case where the first face of the third virtual sub-object 802 faces the user, as shown in fig. 6(b), the user makes a gesture of grabbing the first virtual sub-object, the first virtual sub-object follows the user's hand until it is within the third virtual sub-object 802, and a call connection is established between the first contact and the fourth contact.
Optionally, during a call with a first contact, receiving a sixth input of the user to the first side of the first virtual sub-object and the first side of the third virtual sub-object, and in response to the sixth input, establishing a call connection between the first contact and a fourth contact; wherein the first face of the third virtual sub-object is associated with the fourth contact.
Optionally, after establishing a call connection between the first contact and a fourth contact, the user ends the call with the first contact.
Optionally, during a call with a first contact, receiving a sixth input of the user to the first surface of the first virtual sub-object and the first surface of the third virtual sub-object, and in response to the sixth input, establishing a call connection among the user, the first contact, and a fourth contact; wherein the first face of the third virtual sub-object is associated with the fourth contact.
In the embodiment of the invention, during a call with the first contact, the user can establish a call connection between the first contact and the fourth contact through simple gestures, realizing a call transfer function.
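A minimal Kotlin sketch of this transfer step; the call-leg model and all names are hypothetical illustrations, not the patent's implementation.

```kotlin
// Hypothetical call-transfer sketch (steps 111-112); names are assumptions.
data class Leg(val a: String, val b: String)

class Switchboard {
    val legs = mutableListOf<Leg>()

    // Sixth input: drag the first sub-object onto the third sub-object's face,
    // connecting the first contact with the fourth contact. Optionally the
    // user's own leg to the first contact is then ended.
    fun transfer(user: String, first: String, fourth: String, dropOwnLeg: Boolean) {
        legs += Leg(first, fourth)
        if (dropOwnLeg) legs.removeAll { it == Leg(user, first) }
        println("Active legs: $legs")
    }
}

fun main() {
    val board = Switchboard()
    board.legs += Leg("user", "first contact") // ongoing call
    board.transfer("user", "first contact", "fourth contact", dropOwnLeg = true)
}
```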
Optionally, after step 103, further comprising:
Step 113, receiving a seventh input of the user on the first face of the first virtual sub-object.
Optionally, the seventh input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The seventh input may also be a seventh operation. When the seventh input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
Illustratively, during a call with the first contact, as shown in fig. 7(a), in a case where the first face 20111 of the first virtual sub-object 2011 faces the user, the user continues to pull the first virtual sub-object out beyond a target distance with a gesture, for example, until the first virtual sub-object occupies the majority of the user's field of view, as shown in fig. 7(b).
Step 114, responding to the seventh input, displaying target information on the virtual screen;
the target information is display content of a virtual screen of the head-mounted device worn by the first contact person.
Optionally, the head mounted device is a first head mounted device, and the head mounted device worn by the first contact is a second head mounted device.
Illustratively, in response to the seventh input, display content of a virtual screen of the second head mounted device of the first contact is displayed on the virtual screen and updated in real-time.
In the embodiment of the invention, during a call with the first contact, the user can switch the display content of the head-mounted device to the display content of the first contact's head-mounted device through a simple gesture operation, with the content updated in real time. The viewing angle of the head-mounted device can thus be shared, which facilitates communication.
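A Kotlin sketch of the real-time view sharing, assuming a hypothetical frame stream between the two head-mounted devices; none of these APIs come from the patent.

```kotlin
// Hypothetical view-sharing sketch; all names are illustrative assumptions.
data class ScreenFrame(val seq: Int)

class VirtualScreen {
    fun render(frame: ScreenFrame) = println("Mirroring remote frame #${frame.seq}")
}

// Seventh input recognized: register for the remote headset's frames and render
// each one locally so the first contact's view is mirrored in real time.
class ViewShareSession(private val local: VirtualScreen) {
    fun onRemoteFrame(frame: ScreenFrame) = local.render(frame)
}

fun main() {
    val session = ViewShareSession(VirtualScreen())
    // Simulate frames arriving from the first contact's head-mounted device.
    listOf(ScreenFrame(1), ScreenFrame(2), ScreenFrame(3)).forEach(session::onRemoteFrame)
}
```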
Optionally, in a case where a call request from the first contact is received, an eighth input of the user on the first face of the first virtual sub-object is received, and in response to the eighth input, a call connection is established with the first contact.
Optionally, the eighth input includes, but is not limited to, at least one of a slide input, a click input, a drag input, a long-press input, a hover touch input, a voice input, and the like, which is specifically set according to an actual need, and the embodiment of the present invention is not limited. The eighth input may also be an eighth operation. When the eighth input is executed, the input may be a single-point input, such as a sliding input using a single finger, a click input, or the like; or multi-point input, such as sliding input and clicking input by using two fingers simultaneously.
For example, in a case where the first face of the first virtual sub-object faces the user, the user makes a gesture of pulling out the first virtual sub-object, whereby a call connection is established with the first contact.
Optionally, in a case where a call request from the first contact is received, the display state of the first virtual sub-object is updated; for example, the first virtual sub-object flickers or shakes.
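A small Kotlin sketch of this incoming-call handling (highlight the associated sub-object, then answer on the pull-out gesture); the animation calls and names are assumptions.

```kotlin
// Hypothetical incoming-call handling; names and animations are assumptions.
class SubObjectView(val contact: String) {
    fun startShaking() = println("Sub-object for $contact flickers/shakes")
    fun stopShaking()  = println("Sub-object for $contact stops shaking")
}

class IncomingCall(private val view: SubObjectView) {
    fun onRequestReceived() = view.startShaking() // update the display state

    // Eighth input: the pull-out gesture on the first face answers the call.
    fun onPullOutGesture() {
        view.stopShaking()
        println("Call connection established with ${view.contact}")
    }
}

fun main() {
    val call = IncomingCall(SubObjectView("first contact"))
    call.onRequestReceived()
    call.onPullOutGesture()
}
```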
In an embodiment of the present invention, a head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sends a call request message to a first contact associated with the first face of the first virtual sub-object; and establishes a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.
As shown in fig. 8, an embodiment of the present invention provides a head-mounted device 600, where the head-mounted device 600 includes:
a first receiving module 601, configured to receive a first input of a user to a first side of a first virtual sub-object of a virtual object displayed on a virtual screen; a first sending module 602, configured to send, in response to the first input, a call request message to a first contact associated with a first side of the first virtual sub-object; a first call module 603, configured to establish a call connection with a target contact, where the target contact includes the first contact; the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
Optionally, the first call module includes: a first call unit, configured to establish a single-party call connection with the first contact.
Optionally, the first side of the first virtual sub-object is oriented in the same direction as the virtual screen; the head-mounted device further comprises: the second receiving module is used for receiving second input of the user to the first virtual sub-object; a rotation module for rotating the M virtual sub-objects such that second faces of the M virtual sub-objects face the user in response to the second input; wherein M is a positive integer and M is less than or equal to N.
Optionally, the head-mounted device further comprises: a first display module for displaying target information on the second side of the M virtual sub-objects, the target information including information of M contacts associated with the second side of the M virtual sub-objects.
Optionally, the head-mounted device further comprises: a third receiving module, configured to receive a third input of the user on the target face of the second virtual sub-object; and a second sending module, configured to send, in response to the third input, a call request message to a second contact associated with the target face of the second virtual sub-object. The first call module includes: a second call unit, configured to establish the multi-party call connection with the first contact and the second contact.
Optionally, the head-mounted device further comprises: a fourth receiving module, configured to receive a fourth input of the user on the first surface of the first virtual sub-object; and the second communication module is used for responding to the fourth input, ending the communication with the first contact person and keeping the communication with the second contact person.
Optionally, the head mounted device comprises a camera; the head-mounted device further comprises: the first acquisition module is used for acquiring an image acquired by the camera; and the second display module is used for displaying a virtual object in a first area of a virtual screen under the condition that the image comprises the target object, wherein the first area is an area corresponding to the area where the target object is located.
Optionally, the second area of the virtual screen comprises a target identification; the head-mounted device further comprises: the fifth receiving module is used for receiving fifth input of the target identification and the target space region from the user; and the third display module is used for responding to the fifth input, displaying a virtual object in a third area corresponding to the target space area on the virtual screen, wherein the target space area is an area where the target object is located.
Optionally, the N virtual sub-objects are separated by a separation identifier.
Optionally, the head-mounted device further comprises: a sixth receiving module, configured to receive a sixth input from the user to the first surface of the first virtual sub-object and the first surface of the third virtual sub-object; the third communication module is used for responding to the sixth input and establishing communication connection between the first contact and a fourth contact; wherein the first face of the third virtual sub-object is associated with the fourth contact.
Optionally, the head-mounted device further comprises: a seventh receiving module, configured to receive a seventh input of the user to the first surface of the first virtual sub-object; a fourth display module for displaying target information on the virtual screen in response to the seventh input; the target information is display content of a virtual screen of the head-mounted device worn by the first contact person.
The head-mounted device provided by the embodiment of the present invention can implement each process implemented by the head-mounted device in the above method embodiments, and is not described herein again to avoid repetition.
In an embodiment of the present invention, a head-mounted device receives a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, sends a call request message to a first contact associated with the first face of the first virtual sub-object; and establishes a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.
Fig. 9 is a schematic diagram of a hardware structure of a head-mounted device for implementing various embodiments of the present invention, and as shown in fig. 9, the head-mounted device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the configuration of the head-mounted device shown in fig. 9 does not constitute a limitation of the head-mounted device, and that the head-mounted device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In embodiments of the present invention, the head-mounted device includes, but is not limited to, VR glasses, AR glasses, MR glasses, or VR helmets, AR helmets, MR helmets, and the like.
Wherein the user input unit 707 is configured to receive a first input by a user to a first side of a first virtual sub-object of a virtual object displayed on the virtual screen; a processor 710 configured to send a call request message to a first contact associated with a first side of the first virtual sub-object in response to the first input; establishing a call connection with a target contact person, wherein the target contact person comprises the first contact person; the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, the virtual sub-object includes at least one surface, different surfaces are associated with different contacts, and N is a positive integer.
The embodiment of the invention provides a head-mounted device, which can receive a first input of a user on a first face of a first virtual sub-object of a virtual object displayed on a virtual screen; in response to the first input, send a call request message to a first contact associated with the first face of the first virtual sub-object; and establish a call connection with a target contact, wherein the target contact comprises the first contact. The virtual object is a three-dimensional virtual object comprising N virtual sub-objects, the N virtual sub-objects comprise the first virtual sub-object, each virtual sub-object comprises at least one face, different faces are associated with different contacts, and N is a positive integer. A rapid call with a target contact can thus be established, and the operation is simple and convenient.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and sends the data to the processor 710 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The head-mounted device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output it as sound. Moreover, the audio output unit 703 may provide audio output related to a specific function performed by the head-mounted device 700 (e.g., a call signal reception sound or a message reception sound). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of a still picture or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701 and output.
The head-mounted device 700 also includes at least one sensor 705, such as a gesture sensor, a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 7061 and/or the backlight when the head-mounted device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to recognize the attitude of the head-mounted device (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and vibration-related functions (such as a pedometer or tapping); the sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
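As an illustration of the attitude recognition mentioned above, the following sketch shows the standard way to estimate tilt (pitch and roll) from a three-axis accelerometer at rest, when the measured acceleration is dominated by gravity; the formulas are a generic, well-known technique and are not taken from the patent.

```kotlin
import kotlin.math.atan2
import kotlin.math.sqrt

// Generic tilt estimation for a 3-axis accelerometer at rest (gravity only);
// ax, ay, az are accelerations along the device axes in m/s^2.
fun pitchDegrees(ax: Double, ay: Double, az: Double): Double =
    Math.toDegrees(atan2(-ax, sqrt(ay * ay + az * az)))

fun rollDegrees(ay: Double, az: Double): Double =
    Math.toDegrees(atan2(ay, az))

fun main() {
    // Device lying flat: gravity entirely on the z axis -> pitch = 0, roll = 0.
    println(pitchDegrees(0.0, 0.0, 9.81)) // 0.0
    println(rollDegrees(0.0, 9.81))       // 0.0
}
```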
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The display unit 706 may also include a hologram device or a projector (not shown in the drawings). The hologram device may form a three-dimensional (3D) image (hologram) in the air by using light interference, and the projector may display an image by projecting light onto a screen. The screen may be located inside or outside the head-mounted device.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the head-mounted device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 710; it also receives commands from the processor 710 and executes them. In addition, the touch panel 7071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 7071, the user input unit 707 may include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 9 as two separate components to implement the input and output functions of the head-mounted device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the head-mounted device, which is not limited herein.
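The touch pipeline just described (detection device, then touch controller, then touch-point coordinates delivered to the processor) can be modeled roughly as below; every name in this sketch is a hypothetical illustration rather than an interface defined by the patent.

```kotlin
// Hypothetical model of the pipeline: detection device -> touch controller ->
// touch-point coordinates -> processor. All names are illustrative.
data class RawTouch(val sensorX: Int, val sensorY: Int)
data class TouchPoint(val x: Float, val y: Float)

class TouchController(
    private val panelWidth: Int,
    private val panelHeight: Int,
    private val dispatchToProcessor: (TouchPoint) -> Unit
) {
    // Converts a raw sensor reading into normalized panel coordinates
    // and forwards them, mirroring the controller role described above.
    fun onRawTouch(raw: RawTouch) {
        val point = TouchPoint(
            raw.sensorX.toFloat() / panelWidth,
            raw.sensorY.toFloat() / panelHeight
        )
        dispatchToProcessor(point)
    }
}

fun main() {
    val controller = TouchController(1920, 1080) { p -> println("touch at $p") }
    controller.onRawTouch(RawTouch(960, 540)) // touch at TouchPoint(x=0.5, y=0.5)
}
```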
The interface unit 708 is an interface through which an external device is connected to the head-mounted apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the headset 700 or may be used to transmit data between the headset 700 and an external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and a phonebook) created according to the use of the head-mounted device, and the like. Further, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 710 is the control center of the head-mounted device; it connects the various parts of the whole head-mounted device by using various interfaces and lines, and performs the various functions of the head-mounted device and processes data by running or executing the software programs and/or modules stored in the memory 709 and calling the data stored in the memory 709, thereby monitoring the head-mounted device as a whole. The processor 710 may include one or more processing units. Optionally, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 710. In the embodiment of the present invention, the processor 710 may detect a gesture of the user and determine a control command corresponding to the gesture.
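A rough sketch of how a processor might map detected gestures to control commands, as described above; the gesture set and command names are invented for illustration and are not defined by the patent.

```kotlin
// Invented gesture set and command names, purely to illustrate mapping
// a detected gesture to a corresponding control command.
enum class Gesture { TAP, DOUBLE_TAP, SWIPE_LEFT, SWIPE_RIGHT }

sealed class Command {
    object SelectFace : Command()
    object EndCall : Command()
    data class RotateSubObject(val clockwise: Boolean) : Command()
}

fun commandFor(gesture: Gesture): Command = when (gesture) {
    Gesture.TAP -> Command.SelectFace
    Gesture.DOUBLE_TAP -> Command.EndCall
    Gesture.SWIPE_LEFT -> Command.RotateSubObject(clockwise = false)
    Gesture.SWIPE_RIGHT -> Command.RotateSubObject(clockwise = true)
}

fun main() {
    println(commandFor(Gesture.SWIPE_RIGHT)) // RotateSubObject(clockwise=true)
}
```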
The head-mounted device 700 may also include a power supply 711 (e.g., a battery) for supplying power to the various components. Optionally, the power supply 711 may be logically connected to the processor 710 via a power management system, so as to implement functions such as charging management, discharging management, and power consumption management.
In addition, the head-mounted device 700 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides a head-mounted device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program, when executed by the processor 710, implements each process of the above-described communication method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
Optionally, in this embodiment of the present invention, the head-mounted device in the above embodiment may be an AR device. Specifically, when the head-mounted device in the above embodiment is an AR device, the AR device may include all or part of the functional modules in the head-mounted device. Of course, the AR device may also include functional modules not included in the head mounted device described above.
It is to be understood that, in the embodiment of the present invention, when the head-mounted device in the above-described embodiment is an AR device, the head-mounted device may be a head-mounted device integrated with AR technology. AR technology realizes the combination of a real scene and a virtual scene. By adopting AR technology, the visual functions of the human eye can be reproduced, so that a user can experience the combination of a real scene and a virtual scene through AR technology and thereby enjoy a more immersive experience.
Taking AR glasses as an example of the AR device: when the user wears the AR glasses, the scene viewed by the user is generated by AR processing, that is, the virtual scene can be displayed overlaid on the real scene through AR technology. When the user operates the content displayed by the AR glasses, the AR glasses can appear to peel away the real scene, thereby revealing a more realistic side to the user. For example, when a user visually observes a carton, only the outside of the carton can be seen; however, when the user wears AR glasses, the user can directly observe the internal structure of the carton through the AR glasses.
The AR device may include a camera, so that the AR device can display and interact with a combined virtual picture on the basis of the picture captured by the camera. For example, in the embodiment of the present invention, the AR device may synchronize the virtual screen information generated when the user uses the AR device for an entertainment activity to the display screens of other AR devices, so that virtual screen sharing can be implemented between AR devices.
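A minimal sketch, assuming a simple byte-oriented transport, of how such virtual-screen information might be synchronized to peer AR devices; the Transport interface, the wire format, and all names here are assumptions for illustration, not the patent's protocol.

```kotlin
// Assumed Transport abstraction and wire format; neither is defined by the patent.
data class VirtualScreenState(val frameId: Long, val payload: ByteArray)

interface Transport {
    fun send(peerId: String, bytes: ByteArray)
}

class ScreenShareSession(
    private val transport: Transport,
    private val peers: List<String>
) {
    // Broadcasts the current virtual-screen state to every connected peer.
    fun sync(state: VirtualScreenState) {
        val header = state.frameId.toString().toByteArray()
        for (peer in peers) transport.send(peer, header + state.payload)
    }
}
```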
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned communication method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a head-mounted device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (24)

1. A call method, applied to a head-mounted device, the method comprising:
receiving a first input of a user to a first face of a first virtual sub-object of a virtual object displayed on a virtual screen;
in response to the first input, sending a call request message to a first contact associated with the first face of the first virtual sub-object;
establishing a call connection with a target contact, wherein the target contact comprises the first contact;
wherein the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, each virtual sub-object includes at least one face, different faces are associated with different contacts, and N is a positive integer.
2. The method of claim 1, wherein establishing the call connection with the target contact comprises:
establishing a unilateral call connection with the first contact.
3. The method of claim 1, wherein the first face of the first virtual sub-object is oriented in the same direction as the virtual screen;
the method further comprises the following steps:
receiving a second input of the user to the first virtual sub-object;
in response to the second input, rotating M virtual sub-objects such that second faces of the M virtual sub-objects face the user;
wherein M is a positive integer and M is less than or equal to N.
4. The method of claim 3, wherein, after the rotating of the M virtual sub-objects, the method further comprises:
displaying, on the second faces of the M virtual sub-objects, target information including information of M contacts associated with the second faces of the M virtual sub-objects.
5. The method of claim 1, further comprising:
receiving a third input of the user to a target surface of a second virtual sub-object;
in response to the third input, sending a call request message to a second contact associated with the target surface of the second virtual sub-object;
wherein the establishing of the call connection with the target contact comprises:
establishing a multi-party call connection with the first contact and the second contact.
6. The method of claim 5, wherein, after the establishing of the multi-party call connection with the first contact and the second contact, the method further comprises:
receiving a fourth input of the user to the first face of the first virtual sub-object;
in response to the fourth input, ending the call with the first contact and maintaining the call with the second contact.
7. The method of claim 1, wherein the head-mounted device comprises a camera;
before the receiving of the first input of the user to the first face of the first virtual sub-object of the virtual object displayed on the virtual screen, the method further comprises:
acquiring an image captured by the camera;
in a case that the image includes a target object, displaying a virtual object in a first area of the virtual screen, wherein the first area is an area corresponding to an area where the target object is located.
8. The method of claim 7, wherein a second area of the virtual screen includes a target identifier;
before the acquiring of the image captured by the camera, the method further comprises:
receiving a fifth input of the user to the target identifier and a target space area;
in response to the fifth input, displaying a virtual object in a third area of the virtual screen corresponding to the target space area, wherein the target space area is an area where the target object is located.
9. The method of claim 1, wherein the N virtual sub-objects are separated by a separation identifier.
10. The method of claim 1, wherein, after the establishing of the call connection with the target contact, the method further comprises:
receiving a sixth input of the user to the first face of the first virtual sub-object and a first face of a third virtual sub-object;
in response to the sixth input, establishing a call connection between the first contact and a fourth contact;
wherein the first face of the third virtual sub-object is associated with the fourth contact.
11. The method of claim 1, wherein, after the establishing of the call connection with the target contact, the method further comprises:
receiving a seventh input of the user to the first face of the first virtual sub-object;
displaying target information on the virtual screen in response to the seventh input;
wherein the target information is display content of a virtual screen of a head-mounted device worn by the first contact.
12. A head-mounted device, comprising:
a first receiving module, configured to receive a first input of a user to a first face of a first virtual sub-object of a virtual object displayed on a virtual screen;
a first sending module, configured to send, in response to the first input, a call request message to a first contact associated with the first face of the first virtual sub-object;
a first call module, configured to establish a call connection with a target contact, wherein the target contact comprises the first contact;
wherein the virtual object is a three-dimensional virtual object, the virtual object includes N virtual sub-objects, the N virtual sub-objects include the first virtual sub-object, each virtual sub-object includes at least one face, different faces are associated with different contacts, and N is a positive integer.
13. The head-mounted device of claim 12, wherein the first call module comprises:
a first call unit, configured to establish a unilateral call connection with the first contact.
14. The head-mounted device of claim 12, wherein the first face of the first virtual sub-object is oriented in the same direction as the virtual screen;
the head-mounted device further comprises:
a second receiving module, configured to receive a second input of the user to the first virtual sub-object;
a rotation module, configured to rotate, in response to the second input, M virtual sub-objects such that second faces of the M virtual sub-objects face the user;
wherein M is a positive integer and M is less than or equal to N.
15. The head-mounted device of claim 14, further comprising:
a first display module, configured to display target information on the second faces of the M virtual sub-objects, the target information including information of M contacts associated with the second faces of the M virtual sub-objects.
16. The head-mounted device of claim 12, further comprising:
a third receiving module, configured to receive a third input of the user to a target surface of a second virtual sub-object;
a second sending module, configured to send, in response to the third input, a call request message to a second contact associated with the target surface of the second virtual sub-object;
wherein the first call module comprises:
a second call unit, configured to establish a multi-party call connection with the first contact and the second contact.
17. The head-mounted device of claim 16, further comprising:
a fourth receiving module, configured to receive a fourth input of the user to the first face of the first virtual sub-object;
a second call module, configured to end, in response to the fourth input, the call with the first contact and maintain the call with the second contact.
18. The head-mounted device of claim 12, wherein the head-mounted device comprises a camera;
the head-mounted device further comprises:
a first acquisition module, configured to acquire an image captured by the camera;
a second display module, configured to display, in a case that the image includes a target object, a virtual object in a first area of the virtual screen, wherein the first area is an area corresponding to an area where the target object is located.
19. The head-mounted device of claim 18, wherein a second area of the virtual screen includes a target identifier;
the head-mounted device further comprises:
a fifth receiving module, configured to receive a fifth input of the user to the target identifier and a target space area;
a third display module, configured to display, in response to the fifth input, a virtual object in a third area of the virtual screen corresponding to the target space area, wherein the target space area is an area where the target object is located.
20. The head-mounted device of claim 12, wherein the N virtual sub-objects are separated by a separation identifier.
21. The head-mounted device of claim 12, further comprising:
a sixth receiving module, configured to receive a sixth input of the user to the first face of the first virtual sub-object and a first face of a third virtual sub-object;
a third call module, configured to establish, in response to the sixth input, a call connection between the first contact and a fourth contact;
wherein the first face of the third virtual sub-object is associated with the fourth contact.
22. The head-mounted device of claim 12, further comprising:
a seventh receiving module, configured to receive a seventh input of the user to the first face of the first virtual sub-object;
a fourth display module, configured to display target information on the virtual screen in response to the seventh input;
wherein the target information is display content of a virtual screen of a head-mounted device worn by the first contact.
23. A head-mounted device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the call method according to any one of claims 1 to 11.
24. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the call method according to any one of claims 1 to 11.
CN202010031847.1A 2020-01-13 2020-01-13 Communication method, head-mounted device, and medium Active CN111246014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031847.1A CN111246014B (en) 2020-01-13 2020-01-13 Communication method, head-mounted device, and medium

Publications (2)

Publication Number Publication Date
CN111246014A CN111246014A (en) 2020-06-05
CN111246014B true CN111246014B (en) 2021-04-06

Family

ID=70876179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031847.1A Active CN111246014B (en) 2020-01-13 2020-01-13 Communication method, head-mounted device, and medium

Country Status (1)

Country Link
CN (1) CN111246014B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892667A (en) * 2016-03-31 2016-08-24 联想(北京)有限公司 Information processing method in virtual reality scene and electronic equipment
CN105974808A (en) * 2016-06-30 2016-09-28 宇龙计算机通信科技(深圳)有限公司 Control method and control device based on virtual reality equipment and virtual reality equipment
JP2017184048A (en) * 2016-03-30 2017-10-05 株式会社バンダイナムコエンターテインメント Program and virtual reality experience provision device
US9836889B2 (en) * 2012-01-27 2017-12-05 Microsoft Technology Licensing, Llc Executable virtual objects associated with real objects

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2145465A2 (en) * 2007-04-14 2010-01-20 Musecom Ltd. Virtual reality-based teleconferencing
WO2013018099A2 (en) * 2011-08-04 2013-02-07 Eyesight Mobile Technologies Ltd. System and method for interfacing with a device via a 3d display
US9310611B2 (en) * 2012-09-18 2016-04-12 Qualcomm Incorporated Methods and systems for making the use of head-mounted displays less obvious to non-users
WO2017070121A1 (en) * 2015-10-20 2017-04-27 Magic Leap, Inc. Selecting virtual objects in a three-dimensional space
US9584653B1 (en) * 2016-04-10 2017-02-28 Philip Scott Lyren Smartphone with user interface to externally localize telephone calls
CN106648075B (en) * 2016-11-29 2020-07-03 维沃移动通信有限公司 Control method of virtual reality equipment and virtual reality equipment
WO2018120127A1 (en) * 2016-12-30 2018-07-05 深圳市柔宇科技有限公司 Virtual reality device and incoming call management method therefor
CN107390875B (en) * 2017-07-28 2020-01-31 腾讯科技(上海)有限公司 Information processing method, device, terminal equipment and computer readable storage medium
US20190096130A1 (en) * 2017-09-26 2019-03-28 Akn Korea Inc. Virtual mobile terminal implementing system in mixed reality and control method thereof
CN108304075B (en) * 2018-02-11 2021-08-06 亮风台(上海)信息科技有限公司 Method and device for performing man-machine interaction on augmented reality device
CN109120800A (en) * 2018-10-18 2019-01-01 维沃移动通信有限公司 A kind of application icon method of adjustment and mobile terminal

Also Published As

Publication number Publication date
CN111246014A (en) 2020-06-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant