CN115348240B - Voice call method, device, electronic equipment and storage medium for sharing document - Google Patents


Info

Publication number
CN115348240B
CN115348240B (application CN202210976170.8A)
Authority
CN
China
Prior art keywords
voice call
document
shared document
option
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210976170.8A
Other languages
Chinese (zh)
Other versions
CN115348240A (en)
Inventor
马秋晨
付硕
朱龙
陈加新
赵伊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210976170.8A
Publication of CN115348240A
Application granted
Publication of CN115348240B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L 65/4015 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, where at least one of the additional parallel sessions is real-time or time-sensitive, e.g. white board sharing, collaboration or spawning of a subconference
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/141 - Systems for two-way working between two video terminals, e.g. videophone

Abstract

The present disclosure relates to a voice call method and apparatus, an electronic device, and a storage medium for a shared document, and belongs to the field of network technologies. In the embodiments of the present disclosure, a call initiation option can be displayed on a shared document, so that, in response to a triggering operation on the call initiation option, a voice call request is initiated to at least one object using the shared document, and a voice call is performed based on the shared document. Correspondingly, a voice call option for joining the voice call can be displayed based on a shared document in which a voice call is in progress, so that an object can conveniently join the voice call of the shared document. Through this technical scheme, a plurality of objects can perform a voice call based on the shared document, so that each object in the voice call can freely use the shared document during the call, without frequently switching between the shared document and the voice call, which effectively improves man-machine interaction efficiency.

Description

Voice call method, device, electronic equipment and storage medium for sharing document
Technical Field
The disclosure relates to the field of network technologies, and in particular, to a method and a device for voice call of a shared document, an electronic device and a storage medium.
Background
With the development of network technology, multiple users can communicate by initiating an audio-video conference, during which a user can also initiate the sharing of a document, so that the other users participating in the conference can see the shared document and learn the corresponding conference content. However, when a document is shared in such an audio-video conference, the other participants can only see the part of the document that the initiating user chooses to display, and cannot perform effective man-machine interaction with it, for example to browse the document themselves.
The above document sharing method based on the audio-video conference therefore has low man-machine interaction efficiency, and a method with higher man-machine interaction efficiency is needed to realize the audio-video conference.
Disclosure of Invention
The present disclosure provides a voice call method and apparatus, an electronic device, and a storage medium for a shared document, so that a plurality of objects of the shared document can perform a voice call based on the shared document, which effectively improves man-machine interaction efficiency. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a voice call method of sharing a document, the method including:
Displaying a call initiation option on a first shared document, the first shared document being used to provide document services for a plurality of objects;
responding to the triggering operation of the call initiation option, and initiating a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document;
and carrying out a first voice call under the condition that any one of the target objects accepts the voice call request.
In one possible implementation, the responding to the triggering operation of the call initiation option initiates a voice call request based on at least one target object of the first shared document, including:
responding to the triggering operation of the call initiation option, and initiating a voice call request to all target objects of the first shared document;
or, alternatively,
and in response to the triggering operation of the call initiation option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and in response to the selection operation of part of target objects in all target objects, initiating the voice call request to the part of target objects.
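As a minimal illustrative sketch (not part of the claimed scheme; all names below, such as `SharedDocumentCall`, are hypothetical), the two initiation modes above, requesting all target objects or only a selected subset, together with the rule that the first voice call starts once any target object accepts, might be modeled as:

```python
# Hypothetical sketch of the call-initiation flow described above.
# Class and method names are illustrative, not taken from the patent.

class SharedDocumentCall:
    def __init__(self, document_id, target_objects):
        self.document_id = document_id             # the first shared document
        self.target_objects = set(target_objects)  # objects using the document
        self.invited = set()
        self.participants = set()
        self.active = False

    def initiate(self, selected=None):
        """Send a voice call request to all target objects, or to a selected subset."""
        recipients = (self.target_objects if selected is None
                      else set(selected) & self.target_objects)
        self.invited = recipients
        return recipients

    def on_accept(self, obj):
        """The first voice call is carried out once any invited object accepts."""
        if obj in self.invited:
            self.participants.add(obj)
            self.active = True
        return self.active
```

For example, `call.initiate(selected=["a"])` invites only object "a", and `call.on_accept("a")` then returns `True`, reflecting that the call becomes active as soon as any one target object accepts the request.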
In one possible embodiment, the method further comprises:
And displaying a voice call toolbar at a target position of the first shared document in the process of the first voice call, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
In one possible embodiment, the method further comprises:
and displaying, at a specified position of the voice call toolbar, an icon of the object that is speaking in the first voice call.
In one possible implementation, the voice call toolbar includes an object display option, and the method further includes:
and displaying object icons of a plurality of participating objects of the first voice call in response to a triggering operation of the object display option.
In one possible implementation, the voice call toolbar includes an invite option, the method further comprising:
responding to triggering operation of the invitation option, and displaying address information of the first shared document and authority setting options of the object to be invited for the first shared document;
and sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
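As a hedged sketch of the invitation flow above (the `build_invitation` helper and its field names are hypothetical, not from the patent), combining the address information of the first shared document with the permission chosen for the object to be invited might look like:

```python
# Hypothetical sketch: the invite option exposes the document's address
# information and a permission setting option for the object to be invited;
# both are combined into the invitation request. Names are illustrative.

def build_invitation(document_url, invitee, permission="read"):
    """Build an invitation request from the address info and chosen permission."""
    if permission not in ("read", "edit"):
        raise ValueError("unsupported permission")
    return {
        "type": "join_voice_call",     # invites the object to join the call
        "document_url": document_url,  # address information of the document
        "invitee": invitee,
        "permission": permission,      # permission set via the setting option
    }
```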
In one possible implementation, the voice call toolbar includes a microphone state setting option, and the method further includes:
in response to a trigger operation of the microphone state setting option, the microphone of the local device is set to a corresponding state.
In one possible implementation, the voice call toolbar includes an audio device setting option, the method further comprising:
and setting the audio equipment adopted by the first voice call on the local equipment in response to the setting operation of the audio equipment setting option.
In one possible implementation, the voice call toolbar includes a call end option, and the method further includes:
and responding to the triggering operation of the call ending option, and ending the first voice call.
In one possible embodiment, the method further comprises:
and displaying view angle following information of at least one participant object of the first voice call in the first shared document, wherein the view angle following information is used for indicating whether the participant object follows a document browsing view angle of an initiating object of the first voice call.
In one possible embodiment, the method further comprises:
in the first shared document, displaying view angle following information and following control options of at least one participant object of the first voice call, wherein the following control options are used for setting the following state of the participant object;
If the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the participating object is controlled to follow the document browsing view angle of the initiating object in response to the triggering operation of the following control option;
and if the view angle following information of the participating object is in a following state, the following control option is displayed as an exiting function, and the participating object is controlled to exit the following of the document browsing view angle of the initiating object in response to the triggering operation of the following control option.
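The toggle behavior above, where the following control option is displayed as an opening function in the non-following state and as an exiting function in the following state, can be sketched as follows (a minimal illustration; the `ViewFollower` class is hypothetical):

```python
# Hypothetical sketch of the view-following toggle described above.
class ViewFollower:
    def __init__(self):
        self.following = False  # view angle following information

    def control_label(self):
        # Shown as an "open" function when not following,
        # and as an "exit" function when following.
        return "exit" if self.following else "open"

    def trigger(self, initiator_view):
        # Triggering the option flips the following state.
        if self.following:
            self.following = False
            return None             # exit following; keep the local view
        self.following = True
        return initiator_view       # adopt the initiator's browsing view
```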
In one possible embodiment, the method further comprises:
and displaying a view angle border in the first shared document based on the document browsing view angle of the initiating object of the first voice call, wherein the view angle border is used for indicating the document area browsed by the initiating object.
In one possible embodiment, the method further comprises:
and displaying a cursor of an initiating object of the first voice call in the first shared document, and displaying a cursor of a following object in the participating objects of the first voice call.
In one possible implementation manner, the object icons of the initiation object and the participation object of the first voice call are displayed in different manners.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice call method of sharing a document, the method including:
based on the second shared document, displaying a voice call option of the second shared document, the voice call option indicating that a plurality of objects of the second shared document are engaged in a second voice call;
and responding to the triggering operation of the voice call option, and joining the second voice call.
In one possible implementation, the displaying, based on the second shared document, a voice call option of the second shared document includes:
and displaying the voice call option of the second shared document on the second shared document.
In one possible implementation, the displaying, based on the second shared document, a voice call option of the second shared document includes:
displaying a voice call identifier on a document label of the second shared document in the shared document list, wherein the voice call identifier indicates that a plurality of objects of the second shared document are in a second voice call;
and displaying the voice call option of the second shared document based on the triggering operation of the document tag.
In one possible implementation, the displaying the voice call option of the second shared document based on the triggering operation on the document tag includes:
Responding to the triggering operation of the document label, displaying a functional interface of the second shared document, wherein the functional interface comprises a voice call icon and a skip icon, the voice call icon is used for providing the voice call option, and the skip icon is used for skipping to the second shared document;
responding to the triggering operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the responding to the triggering operation of the voice call option, adding the second voice call, comprising:
and responding to the triggering operation of the voice call icon, displaying the second shared document and joining the second voice call.
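The two entry points above can be sketched as follows (an illustrative assumption, not the patent's implementation; names are hypothetical): triggering the jump icon only displays the second shared document, while triggering the voice call icon both displays the document and joins the second voice call.

```python
# Hypothetical sketch of the function interface described above.
def handle_function_interface(icon, session):
    if icon == "jump":
        session["document_open"] = True   # display the second shared document
    elif icon == "voice_call":
        session["document_open"] = True   # display the document...
        session["in_call"] = True         # ...and join the second voice call
    return session
```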
In one possible embodiment, the method further comprises:
and displaying a voice call toolbar at a target position of the second shared document in the process of carrying out the second voice call, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
In one possible implementation, the voice call toolbar includes a call end option, and the method further includes:
and responding to the triggering operation of the call ending option, and exiting the second voice call.
In one possible implementation, after the joining the second voice call in response to the triggering operation of the voice call option, the method further includes:
And displaying the second shared document based on the document browsing view of the initiating object of the second voice call.
In one possible embodiment, the method further comprises:
in the second shared document, displaying view angle following information and following control options of the participating object of the local terminal, wherein the following control options are used for setting the following state of the participating object;
if the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the document browsing view angle of the initiating object is followed in response to the triggering operation of the following control option;
and if the view angle following information of the participating object is in a following state, displaying the following control option as an exiting function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view angle of the initiating object.
In one possible implementation manner, the object icons of the initiation object and the participation object of the second voice call are displayed in different manners.
According to a third aspect of embodiments of the present disclosure, there is provided a voice call apparatus for sharing a document, the apparatus including:
a display unit configured to display a call initiation option on a first shared document for providing a document service for a plurality of objects;
An initiating unit configured to perform a trigger operation in response to the call initiation option, initiate a voice call request based on at least one target object of the first shared document, the target object being an object that is using the first shared document;
and a call unit configured to perform a first voice call in a case where any one of the target objects accepts the voice call request.
In a possible implementation, the initiating unit is configured to perform:
responding to the triggering operation of the call initiation option, and initiating a voice call request to all target objects of the first shared document;
or, alternatively,
and in response to the triggering operation of the call initiation option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and in response to the selection operation of part of target objects in all target objects, initiating the voice call request to the part of target objects.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a tool display unit configured to display a voice call toolbar for implementing a plurality of voice call functions at a target position of the first shared document during the progress of the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
and an utterance display unit configured to display, at a specified position of the voice call toolbar, an icon of the object that is speaking in the first voice call.
In one possible embodiment, the voice call toolbar includes an object display option, and the voice call apparatus of the shared document further includes:
and an object display unit configured to perform an object icon displaying a plurality of participation objects of the first voice call in response to a trigger operation of the object display option.
In one possible implementation, the voice call toolbar includes an invite option, and the voice call apparatus of the shared document further includes:
an invitation unit configured to perform a triggering operation in response to the invitation option, displaying address information of the first shared document and a right setting option for the first shared document for an object to be invited;
and sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
In one possible embodiment, the voice call toolbar includes a microphone state setting option, and the voice call apparatus of the shared document further includes:
And a microphone state setting unit configured to perform a trigger operation in response to the microphone state setting option to set the microphone of the local device to a corresponding state.
In one possible implementation, the voice call toolbar includes an audio device setting option, and the voice call apparatus of the shared document further includes:
an audio device setting unit configured to perform, in response to a setting operation on the audio device setting option, setting the audio device employed by the first voice call on the local device.
In one possible implementation, the voice call toolbar includes a call end option, and the voice call device of the shared document further includes:
and an ending unit configured to perform a trigger operation in response to the call ending option to end the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a view angle display unit configured to perform, in the first shared document, displaying view angle following information of at least one participant object of the first voice call, the view angle following information indicating whether the participant object follows a document browsing view angle of an originating object of the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
a view angle control unit configured to perform, in the first shared document, displaying view angle following information of at least one participant object of the first voice call and a following control option for setting a following state of the participant object;
if the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the participating object is controlled to follow the document browsing view angle of the initiating object in response to the triggering operation of the following control option;
and if the view angle following information of the participating object is in a following state, the following control option is displayed as an exiting function, and the participating object is controlled to exit the following of the document browsing view angle of the initiating object in response to the triggering operation of the following control option.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a bezel display unit configured to perform a document browsing view based on an originating object of the first voice call, display a view bezel in the first shared document, the view bezel indicating a document area browsed by the originating object.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a cursor display unit configured to execute a cursor for displaying an originating object of the first voice call in the first shared document, and a cursor for following an object in the participating objects of the first voice call.
In one possible implementation manner, the object icons of the initiation object and the participation object of the first voice call are displayed in different manners.
According to a fourth aspect of embodiments of the present disclosure, there is provided a voice call apparatus for sharing a document, the apparatus including:
a display unit configured to perform displaying, based on a second shared document, a voice call option of the second shared document, the voice call option indicating that a plurality of objects of the second shared document are engaged in a second voice call;
and a joining call unit configured to perform joining of the second voice call in response to a trigger operation for the voice call option.
In one possible embodiment, the display unit includes:
and the first display module is configured to be executed on the second shared document and display voice call options of the second shared document.
In one possible embodiment, the display unit includes:
A second display module configured to perform displaying a voice call identifier on a document tag of the second shared document in the shared document list, the voice call identifier indicating that a plurality of objects of the second shared document are engaged in a second voice call;
and displaying the voice call option of the second shared document based on the triggering operation of the document tag.
In one possible implementation, the second display module is configured to perform:
responding to the triggering operation of the document label, displaying a functional interface of the second shared document, wherein the functional interface comprises a voice call icon and a skip icon, the voice call icon is used for providing the voice call option, and the skip icon is used for skipping to the second shared document;
responding to the triggering operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the joining call unit is configured to execute:
and responding to the triggering operation of the voice call icon, displaying the second shared document and joining the second voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
And a tool display unit configured to display a voice call toolbar for implementing a plurality of voice call functions at a target position of the second shared document during the progress of the second voice call.
In one possible implementation, the voice call toolbar includes a call end option, and the apparatus further includes:
and an exit unit configured to perform an exit from the second voice call in response to a trigger operation to the call end option.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a view angle display unit configured to perform a document browsing view angle based on an initiation object of the second voice call, and display the second shared document.
In one possible implementation manner, the voice call device for sharing a document further includes:
a view angle control unit configured to execute, in the second shared document, display view angle following information of a participating object of a home terminal and a following control option for setting a following state of the participating object;
if the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the document browsing view angle of the initiating object is followed in response to the triggering operation of the following control option;
And if the view angle following information of the participating object is in a following state, displaying the following control option as an exiting function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view angle of the initiating object.
In one possible implementation manner, the object icons of the initiation object and the participation object of the second voice call are displayed in different manners.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the method of voice call of the shared document provided in the first aspect or the second aspect.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein, when program code in the computer-readable storage medium is executed by a processor of an electronic device, the electronic device is enabled to perform the voice call method of a shared document provided in the first aspect or the second aspect above.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising one or more instructions executable by one or more processors of an electronic device to enable the electronic device to perform the method of voice telephony sharing a document provided in the first or second aspect above.
Through the technical scheme, the plurality of objects can carry out voice call based on the shared document, so that each object in the voice call can freely use the shared document under the condition of instant communication, and the man-machine interaction efficiency is effectively improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an implementation environment of a method of voice call sharing a document, according to an example embodiment;
FIG. 2 is a flowchart illustrating a method of voice conversation of a shared document, according to an example embodiment;
FIG. 3 is a flowchart illustrating a method of voice telephony sharing a document according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method of voice telephony sharing a document according to an exemplary embodiment;
FIG. 5 is a schematic diagram of a call initiation option shown in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram of another call initiation option shown in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram of a voice call toolbar according to an exemplary embodiment;
FIG. 8 is a schematic diagram of an object display option shown in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram of an invitation option shown in accordance with an example embodiment;
FIG. 10 is a schematic diagram of a microphone setup option shown in accordance with an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating an audio device setup option according to an example embodiment;
FIG. 12 is a schematic diagram illustrating an end-of-call option in accordance with an exemplary embodiment;
FIG. 13 is a schematic diagram of one view follower information and follower control option shown in accordance with an exemplary embodiment;
FIG. 14 is a schematic diagram showing a display effect of a cursor and view follower information according to an exemplary embodiment;
FIG. 15 is a schematic diagram of a shared document during a voice call, according to an example embodiment;
FIG. 16 is a schematic diagram of a shared document during another voice call, shown in accordance with an exemplary embodiment;
FIG. 17 is a schematic diagram of a voice call toolbar according to an example embodiment;
FIG. 18 is a schematic diagram of an invitation option and object display option in accordance with an example embodiment;
FIG. 19 is a schematic diagram of a microphone setup option shown in accordance with an exemplary embodiment;
FIG. 20 is a schematic diagram of an audio device setup option shown in accordance with an exemplary embodiment;
FIG. 21 is a schematic diagram illustrating an end-of-call option in accordance with an exemplary embodiment;
FIG. 22 is a schematic diagram of a shared document during a voice call, according to an example embodiment;
FIG. 23 is a flowchart illustrating a method of voice telephony sharing a document according to an exemplary embodiment;
FIG. 24 is a schematic diagram of a voice call option shown in accordance with an exemplary embodiment;
FIG. 25 is a schematic diagram of another voice call option shown in accordance with an exemplary embodiment;
FIG. 26 is a schematic diagram of another voice call option shown in accordance with an exemplary embodiment;
FIG. 27 is a block diagram of a voice call device sharing a document, shown in accordance with an exemplary embodiment;
FIG. 28 is a block diagram illustrating a voice call device sharing a document according to an example embodiment;
fig. 29 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the shared documents, group and object names, etc. referred to in this disclosure are all acquired with sufficient authorization.
Fig. 1 is a schematic diagram of an implementation environment of a method for voice communication of a shared document according to an embodiment of the present disclosure, referring to fig. 1, where the implementation environment includes: a plurality of terminals 101 and a server 102.
In the embodiment of the present disclosure, the terminal 101 can display a shared document in an interactable interface and display a call initiation option on the shared document to provide a function of performing a voice call based on the shared document. Accordingly, the terminal 101 can also display a voice call option based on the shared document in which the voice call is being performed, thereby providing a function of immediately joining the voice call. Wherein the shared document is used to provide a document service for a plurality of objects. In some embodiments, objects corresponding to the plurality of terminals 101 can use a document service provided by a shared document through a network. In some embodiments, the document services include an editing service, a browsing service, and a recording service for the shared document, which is not limited by the present disclosure. In some embodiments, an application supporting the use of a shared document, e.g., an online document application, is run in the terminal 101 through which a user can browse or edit the shared document online and collaborate with multiple objects using the shared document.
In some embodiments, the application may be a client application installed on the terminal 101, a web page (web) application accessed through a browser running on the terminal 101, or other types of applications, such as a micro-application running based on web technology within the client application, which is not limited by the present disclosure.
In some embodiments, the terminal 101 may be at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, a wireless terminal, etc., where the terminal 101 has a communication function and can access the internet. The terminal 101 may refer broadly to any one of a plurality of terminals, and the present embodiment is illustrated only with the terminal 101.
Wherein the server 102 is configured to provide a background service related to the shared document to the terminal 101, for example, a service for storing the shared document; consistency maintenance service for shared documents in multi-object collaboration; communication connection services for voice calls, etc.
In some embodiments, the server 102 may be a stand-alone physical server, a server cluster or distributed file system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), and basic cloud computing services such as big data and artificial intelligence platforms. Of course, the server 102 may also include other functional servers to provide more comprehensive and diverse services, which is not limited by the present disclosure.
The server 102 and the plurality of terminals 101 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present disclosure. Alternatively, the number of the terminals 101 and the servers 102 may be greater or less, which is not limited by the embodiment of the present disclosure.
The technical scheme provided by the embodiment of the disclosure is described below based on the implementation environment.
Fig. 2 is a flowchart illustrating a method of voice call for sharing a document according to an exemplary embodiment. The method can be performed by a terminal in the above-described implementation environment and, as shown in fig. 2, includes the following steps 201 to 203.
In step 201, the terminal displays a call initiation option on a first shared document for providing a document service for a plurality of objects.
Wherein the first shared document is used to provide a document service for a plurality of objects. In some embodiments, the first shared document may be in a variety of document formats, such as a PDF (Portable Document Format) document, a text document, a form document, or a presentation document, to which the present disclosure is not limited. In other embodiments, the first shared document may also be derived from a document template, such as a meeting summary template document, which is not limited by the present disclosure.
In some embodiments, the document services may include an editing service, a browsing service, and a recording service for the shared document, which is not limited by the present disclosure. Wherein, the editing service means that the content of the shared document can be modified by using the object of the shared document; the browsing service means that the content of the shared document can be freely browsed using the object of the shared document; the recording service is to record edits occurring in a shared document.
In some embodiments, the plurality of objects can collaborate online over a network using the document service provided by the first shared document.
Wherein the call initiation option is for initiating a voice call to the first shared document. In some embodiments, the terminal displays the first shared document in response to an open operation for the first shared document, and displays the call initiation option on the first shared document. In some embodiments, the terminal can display the call initiation option above the first shared document, and the present disclosure does not limit the location where the call initiation option is displayed.
In step 202, the terminal initiates a voice call request based on at least one target object of the first shared document, the target object being an object that is using the first shared document, in response to a trigger operation for the call initiation option.
In the embodiment of the present disclosure, the triggering operation may take various forms according to the interaction modes provided by the terminal. In some embodiments, the terminal is a PC-side (Personal Computer) device such as a desktop computer or a laptop computer, and the triggering operation may be a click operation on the call initiation option through a mouse or a keyboard, according to the interaction form provided by the input devices of the PC-side device. In other embodiments, the terminal is a mobile device such as a smart phone or a tablet computer, and the triggering operation may be a tap on the call initiation option on the touch screen, according to the interaction form provided by the touch screen of the mobile device. In still other embodiments, the triggering operation may be determined by means of motion capture, voice recognition, or gaze tracking, based on further interaction forms provided by the terminal, which is not limited by the present disclosure.
In some embodiments, the case where the target object is using the first shared document may include: browsing the first shared document; editing the first shared document; the first shared document is being reviewed.
In some embodiments, the voice call request may be displayed in a popup window in the first shared document of any target object; in other embodiments, the voice call request may be pushed to the target object in the form of a request message, which is not limited by the embodiments of the present disclosure.
In step 203, the terminal performs a first voice call when any one of the target objects accepts the voice call request.
In some embodiments, any of the target objects can communicate with the originating object of the first voice call (i.e., the object corresponding to the terminal) and other target objects that receive the voice call request in the first shared document when the target object accepts the voice call request.
Through the above technical scheme, an object can conveniently initiate a voice call based on the shared document, so that a plurality of objects can carry out a voice call based on the shared document. Each object in the voice call can freely use the shared document during the call without frequently switching between the shared document and the voice call, which effectively improves human-computer interaction efficiency.
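The flow of steps 201 to 203 can be sketched in code. This is a minimal illustration only; the names `VoiceCall`, `initiate_voice_call`, and `accept_request` are assumptions for the sketch and not part of the disclosed embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceCall:
    document_id: str
    initiator: str
    pending: set = field(default_factory=set)       # objects that received the request
    participants: set = field(default_factory=set)  # objects that accepted
    active: bool = False

def initiate_voice_call(document_id: str, initiator: str, target_objects) -> VoiceCall:
    """Step 202: send a voice call request to the selected target objects."""
    return VoiceCall(document_id, initiator, pending=set(target_objects))

def accept_request(call: VoiceCall, target_object: str) -> VoiceCall:
    """Step 203: the first voice call starts as soon as any target object accepts."""
    if target_object in call.pending:
        call.pending.discard(target_object)
        call.participants.add(target_object)
        call.active = True
    return call
```

Note that the call becomes active on the first acceptance; later acceptances simply add participants, matching the description in step 203.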
The embodiment corresponding to fig. 2 briefly describes the process of initiating a voice call based on a shared document in the technical solution provided by the embodiments of the present disclosure. Next, the process of joining an ongoing voice call based on a shared document is described based on the above implementation environment and the embodiment corresponding to fig. 2.
Fig. 3 is a flowchart illustrating a method of voice call sharing a document according to an exemplary embodiment, which can be performed by a terminal in the above-described implementation environment, as shown in fig. 3, and includes the following steps 301 to 302.
In step 301, the terminal displays a voice call option of the second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are engaged in a second voice call.
For a description of the second shared document, refer to the description of the first shared document in step 201; details are not repeated here.
In some embodiments, the terminal displays the second shared document in response to an open operation for the second shared document, and displays the voice call option on the second shared document. In some embodiments, the terminal can display the voice call option above the second shared document, and the present disclosure does not limit the location where the voice call option is displayed.
In some embodiments, the terminal displays the voice call option and indicates the plurality of objects in the form of icons that are engaged in the second voice call.
In step 302, the terminal joins the second voice call in response to a trigger operation for the voice call option.
For the triggering operation on the voice call option, refer to the description of the triggering operation in step 202; details are not repeated here.
Through the technical scheme provided by the embodiments of the present disclosure, an object can conveniently join an ongoing voice call based on the shared document, so that a plurality of objects can carry out a voice call based on the shared document. Each object in the voice call can freely use the shared document during the call without frequently switching between the shared document and the voice call, which effectively improves human-computer interaction efficiency.
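Steps 301 and 302 can be sketched similarly. The document is modeled here as a plain dict with a hypothetical `ongoing_call` field; these names are illustrative assumptions, not part of the embodiment.

```python
def has_voice_call_option(document: dict) -> bool:
    """Step 301: the voice call option is displayed only when a second
    voice call is ongoing for the second shared document."""
    return document.get("ongoing_call") is not None

def join_voice_call(document: dict, object_name: str) -> dict:
    """Step 302: the triggering operation adds the object to the ongoing call."""
    call = document.get("ongoing_call")
    if call is None:
        raise ValueError("no ongoing voice call for this document")
    call["participants"].add(object_name)
    return call
```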
The foregoing fig. 2 and 3 are merely basic flow of the disclosure, and the scheme provided in the disclosure is further described below.
First, a process of performing a voice call based on a shared document in the technical solution provided in the present disclosure will be described in detail through some embodiments. Fig. 4 is a flowchart illustrating a voice call method of sharing a document, which is performed by a terminal, as shown in fig. 4, according to an exemplary embodiment, the method including the following steps 401 to 407.
In step 401, the terminal displays a call initiation option on a first shared document for providing a document service for a plurality of objects.
This step is referred to step 201, and will not be described in detail herein.
In the embodiment of the present disclosure, the initiating object refers to an object initiating a first voice call for the first shared document through the terminal, and the participating object refers to an object other than the initiating object among a plurality of objects participating in the first voice call. In some embodiments, the initiating object may be referred to as a moderator of the first voice call and the participating object may be referred to as a participant of the first voice call.
In some embodiments, the terminal displays a number of target objects in a display area surrounding the call initiation option to indicate to an initiating object the objects for which a voice call can currently be initiated, where a target object is an object that is using the first shared document. To facilitate understanding of this display manner, the present disclosure provides a schematic diagram of the call initiation option. Referring to fig. 5, the terminal displays the first shared document 501 (the document title and document content are shown in the figure). In a button bar above the first shared document 501, a call initiation option 502 is displayed, with 4 avatar icons 503 on its right side; the avatar icons 503 indicate 4 target objects that are browsing the first shared document, and the "6" in the number icon 504 at the far right indicates that a total of 6 objects are currently browsing the first shared document. Directory information 505 of the first shared document is displayed in the left part of the button bar, and the right part of the button bar provides more functional options 506 such as sharing, searching, and messaging.
The display manner provided in fig. 5 can be applied to a PC-side device, and in other embodiments, for a case where the terminal is a mobile-side device, this step 401 may be implemented in a manner provided in fig. 6 below. Fig. 6 is a schematic diagram of another call initiation option provided in the present disclosure, referring to fig. 6, in which the terminal displays the first shared document 601 (the document title and the document content are shown in the figure), in the bottom expansion panel of the first shared document 601, a call initiation option 602 is displayed, and below the call initiation option 602, further functional options 603 such as editing, commenting, and copying links are provided. Wherein the bottom expansion panel pops up in response to operation of more buttons 604 in the button bar above the first shared document 601.
It should be noted that, in some embodiments, the display manner provided in fig. 5 may also be applied to a mobile terminal device, and the display manner provided in fig. 6 may also be applied to a PC terminal device, which is not limited in this disclosure.
In step 402, the terminal initiates a voice call request based on at least one target object of the first shared document, the target object being an object that is using the first shared document, in response to a trigger operation for the call initiation option.
This step refers to step 202. In some embodiments, the terminal may implement this step 402 based on the following one or two ways.
In the first mode, the terminal responds to the triggering operation of the call initiation option, and initiates a voice call request to all target objects of the first shared document.
Here, all target objects refers to all objects that are using the first shared document. In some embodiments, taking the display manner of the call initiation option provided in fig. 5 as an example, the terminal can, in response to the triggering operation on the call initiation option 502, initiate voice call requests to the 6 objects indicated in fig. 5 that are browsing the first shared document.
In this way, a voice call can be conveniently initiated with one click for all target objects, providing a method for quickly initiating voice collaboration based on the shared document.
Mode two: the terminal responds to the triggering operation of the call initiation option to display a target object list, wherein the target object list comprises all target objects of the first shared document, and responds to the selection operation of part of target objects in all target objects to initiate the voice call request to the part of target objects.
In some embodiments, the terminal can respond to the triggering operation to display all target objects in browsing the first shared document in a form of a target object list, so that the initiating object can select part of target objects according to the actual requirements of the call.
In some embodiments, the usage status of the first shared document by all target objects is displayed in the target object list, thereby providing a reference for initiating object selection of a portion of the target objects. In some embodiments, the usage status includes: browsing, editing, or commenting, which is not limiting to the present disclosure.
In this way, on top of quickly initiating a voice call based on the shared document, a function for selecting specific objects is further provided, so that the technical scheme of the present disclosure can be flexibly applied to various demand scenarios.
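The two initiation modes above can be contrasted in a short sketch. The field names (`name`, `status`) and function names are assumptions used only for illustration; the usage statuses mirror the browsing/editing/commenting states named in the text.

```python
def initiate_to_all(target_objects):
    """Mode one: one click sends the request to every object using the document."""
    return [obj["name"] for obj in target_objects]

def initiate_to_selection(target_objects, selected):
    """Mode two: the initiator picks part of the target objects from a list
    that also shows each object's usage status as a reference."""
    names = {obj["name"] for obj in target_objects}
    unknown = set(selected) - names
    if unknown:
        raise ValueError(f"not using the first shared document: {sorted(unknown)}")
    chosen = set(selected)
    return [obj["name"] for obj in target_objects if obj["name"] in chosen]
```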
In step 403, the terminal performs a first voice call when any one of the target objects accepts the voice call request.
This step is referred to step 203 and will not be described in detail herein.
The following steps 404 to 407 describe various functions and display manners involved in the first voice call; it should be noted that steps 404 to 407 are not necessarily executed in sequence.
In step 404, during the process of the first voice call, the terminal displays a voice call toolbar at the target location of the first shared document, where the voice call toolbar is used to implement a plurality of voice call functions.
In some embodiments, the target location may be a blank area in the first shared document, and may also be a blank area around the first shared document, which is not limited by the present disclosure.
In some embodiments, the terminal displays an object icon in the first voice call that is speaking at a designated location of the voice call toolbar. In some embodiments, the designated location may be above the voice call toolbar. Optionally, an avatar or name of the object is displayed in the object icon. In other embodiments, the terminal displays a talk indication in the talking object icon to indicate that talking is occurring. Based on the method, the speaking object can be indicated in real time, and the communication efficiency in the voice communication process based on the document is effectively improved.
In some embodiments, the initiating object of the first voice call and the object icon of the participating object are displayed in different manners to distinguish between the initiating object sharing the view of document browsing and the participating object following the view of document browsing of the initiating object.
In some embodiments, the voice call toolbar includes at least one of an object display option, an invite option, a microphone state setting option, an audio device setting option, and a call end option, thereby providing a plurality of voice call functions based on the plurality of function options.
The present disclosure provides a schematic diagram of a voice call toolbar, referring to fig. 7, the voice call toolbar 700 includes an object display option 701, an invite option 702, a microphone state setting option 703, an audio device setting option 704, and a call end option 705; above the voice call toolbar, an object icon 706 that is speaking is displayed, and the speaking logo 707 indicates that an object named "object AA" is speaking.
Next, the principle of implementing the voice call function by the above-mentioned multiple function options and the display manner of the multiple function options will be described, referring to the following function options 1 to 5.
Function option 1, object display option.
In some embodiments, the voice call toolbar includes an object display option, and the terminal is capable of displaying object icons of a plurality of participating objects of the first voice call in response to a trigger operation of the object display option.
In some embodiments, the triggering operation of the object display option refers to the description of the triggering operation in step 202, which is not described herein.
In some embodiments, the object icon of the participant includes information, such as the name and avatar of the participant, for identifying the identity thereof, and in other embodiments, the object icon further includes a microphone identification indicating whether the participant turns on the microphone, i.e., is in a talkable state.
For ease of understanding, the present disclosure provides a schematic diagram of the object display option, see fig. 8, where the description of the voice call toolbar may refer to fig. 7. The display element of the object display option 801 includes the avatar of the object currently speaking and the number of participants in the first voice call, "4". In response to a trigger operation 802 on the object display option 801, object icons 803 of participating object A, participating object B, and participating object C are displayed, along with their microphone identifications 804; participating object A and participating object C are in a speakable state, while participating object B is in a non-speaking state. Above the voice call toolbar, an object icon 805 of the participating object C that is speaking is displayed. It will be appreciated that the icon 805 indicates that participating object C is speaking even when the object display option is not triggered.
Through the technical scheme, the function of checking the information of each participant in real time is provided for the object which carries out voice communication based on the shared document, so that each object can know the change and speaking state of the participant in the voice communication in time, the efficiency of voice communication and collaboration based on the shared document is further improved, and the man-machine interaction efficiency is further greatly improved.
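The participant list of fig. 8 can be modeled as follows. This is a sketch under the assumption that the speaking indicator is only shown for objects whose microphone is on; the function and field names are hypothetical.

```python
def render_participant_icons(participants):
    """Build the icon data shown by the object display option: name,
    microphone identification, and a real-time speaking indicator."""
    return [
        {
            "name": p["name"],
            "mic_on": p["mic_on"],
            # An object is only shown as speaking if its microphone is on.
            "speaking": p["mic_on"] and p.get("speaking", False),
        }
        for p in participants
    ]
```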
Function option 2, invite option.
In some embodiments, the voice call toolbar includes an invite option. In response to a triggering operation on the invite option, the terminal can display address information of the first shared document and a permission setting option for an object to be invited, and further, based on a setting operation on the permission setting option and the address information, send an invite request to the object to be invited, the invite request being used to invite that object to join the first voice call of the first shared document.
In some embodiments, the triggering operation of the invite option refers to the description of the triggering operation in step 202, and is not described herein.
In some embodiments, the address information may be a document link of the first shared document, the document link pointing to a browse page of the first shared document. By sending the document link to the object to be invited, that object can jump directly to the first shared document and then decide on its own whether to join the first voice call based on the first shared document.
In other embodiments, the address information may be a call link of the first voice call, the call link pointing to a page of the first voice call. By sending the call link to the object to be invited, the object to be invited can directly join the first voice call.
In some embodiments, the plurality of rights provided by the rights setting option can be divided based on the document service (i.e., function) provided by the first shared document. In some embodiments, the first shared document provides editing, browsing, and comment functions, and the permission setting option can provide editing permission, browsing permission, comment permission, and the like for the first shared document.
In other embodiments, the rights provided by the rights setting option can be determined based on the rights of the initiating object (or participating object) to the first shared document. For example, if the initiating object has editing rights and comment rights for the first shared document, the initiating object can select the editing rights and/or comment rights in the rights setting option as the rights of the object to be invited to the first shared document.
In some embodiments, the setting operation is a selection operation of at least one of the plurality of rights of the rights setting option, that is, the setting operation may be a multi-choice operation.
In some embodiments, the invite option is provided only to a specified object among the initiating object and the participating objects of the first voice call. In some embodiments, the specified object is an object having editing rights for the first shared document; in other embodiments, the specified object is an object in a specified group, e.g., a group created for the first shared document, which is not limited by the present disclosure.
In other embodiments, the object to be invited may be an object in a target group, e.g., an object in a group of objects that possess target rights for the first shared document. In this example, the right set based on the right setting option can be superimposed on the target right as the right of the object to be invited to the first shared document.
In some embodiments, the invite request indicates a document title of the first shared document to indicate to the object to be invited the shared document in question for the incoming voice call.
To facilitate understanding of the above invitation option and the display manner of the permission setting option, the present disclosure provides a schematic diagram of the invitation option. Referring to fig. 9, in response to a triggering operation on the invitation option 901 in the voice call toolbar, the terminal displays the permission setting option 902. The permission setting option 902 displays the prompt "can join a voice call by sharing the following information" together with the address information "XXX … …" of the first shared document, and provides multiple permissions for the first shared document: "browsable", "commentable", "editable", and "no permission". In response to a setting operation 903 on the permission setting option 902, the currently selected permission is "browsable". A copy button 904 for copying the address information to generate an invitation instruction is also displayed.
Through the technical scheme, a convenient invitation function is provided for the objects which carry out voice communication based on the shared document, so that each object can share the shared document and the corresponding voice communication in real time according to the communication requirement, and a rich permission setting mode is provided, so that the efficiency of carrying out voice communication and collaboration based on the shared document is further improved, and the man-machine interaction efficiency is greatly improved.
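The invite flow can be sketched as below, under the assumption from the text that rights chosen in the permission setting option are superimposed on any rights the invitee already holds (e.g. from a target group). All names here are illustrative, not from the embodiment.

```python
def build_invite_request(document_title, address, granted_rights, existing_rights=()):
    """Assemble an invite request: the document title is included so the
    invitee knows which shared document the incoming voice call concerns,
    and granted rights are merged with any pre-existing rights."""
    return {
        "title": document_title,
        "address": address,  # a document link or a call link
        "rights": set(granted_rights) | set(existing_rights),
    }
```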
Function option 3, microphone state setting option.
In some embodiments, the voice call toolbar includes a microphone state setting option, and the terminal is capable of setting the microphone of the local device to a corresponding state in response to a triggering operation of the microphone state setting option.
Here, the local device refers to the terminal itself.
In some embodiments, the triggering operation of the microphone state setting option refers to the description of the triggering operation in step 202, which is not described herein.
In the embodiment of the present disclosure, when the microphone is in the on state, the terminal switches the microphone to the off state in response to a triggering operation on the microphone state setting option; when the microphone is in the off state, the terminal switches the microphone to the on state in response to a triggering operation on the microphone state setting option. When the microphone is on, the object corresponding to the local device is in a speakable state; when the microphone is off, the object corresponding to the local device is in a non-speaking state.
The embodiments of the present disclosure provide a schematic diagram of the microphone setting option, see fig. 10, in which the terminal sets the microphone to the corresponding state in response to a triggering operation on the microphone state setting option 1001 in the voice call toolbar; referring to (a) in fig. 10, the microphone is in the on state; referring to (b) in fig. 10, the microphone is in the off state.
Through the above technical scheme, a convenient microphone on/off (mute/unmute) function is provided for objects carrying out a voice call based on the shared document, so that each object can adjust the on/off state of its microphone in real time according to communication needs, further improving the efficiency of voice calls and collaboration based on the shared document and greatly improving human-computer interaction efficiency.
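The toggle behavior described above amounts to a tiny state machine; the class and method names in this sketch are assumptions.

```python
class MicrophoneControl:
    """Minimal sketch of the microphone state setting option: each trigger
    flips the on/off state of the local device's microphone."""

    def __init__(self, on: bool = True):
        self.on = on

    def toggle(self) -> str:
        # on -> off puts the object in a non-speaking state; off -> on
        # puts it back in a speakable state.
        self.on = not self.on
        return "speakable" if self.on else "non-speaking"
```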
Function option 4, audio device setup option.
In some embodiments, the voice call toolbar includes an audio device setting option, and the terminal is capable of setting an audio device employed by the first voice call on the local device in response to a setting operation for the audio device setting option.
In some embodiments, the audio device includes an audio input device and an audio output device. The audio input device may be a microphone and the audio output device may be a speaker. In some embodiments, the audio device may function as both an audio input device and an audio output device, such as headphones with a microphone.
In some embodiments, the audio setting option is used to select among a plurality of audio devices provided by the terminal. In some embodiments, the audio device includes an audio input device and an audio output device, the audio setting option providing selection functionality for the audio input device and the audio output device, respectively, in different drop-down bars. The terminal is capable of displaying a plurality of selectable microphone lines in a drop-down bar corresponding to the audio input device and a plurality of selectable speaker lines in a drop-down bar corresponding to the audio output device; the setting operation for the audio device setting option may be a selected operation for any one of the speaker lines and/or the microphone lines in a drop-down bar provided for the audio setting option.
The disclosed embodiments provide a schematic view of the audio device setting option, see fig. 11, in which the terminal displays a setting panel 1102 of the audio device in response to a trigger operation on the audio device setting option 1101 in the voice call toolbar; for the audio input device, microphone line 1 is selected; for the audio output device, among speaker line 1, speaker line 2, and speaker line 3 provided in the drop-down bar, speaker line 1 is selected.
Through the technical scheme, rich audio device setting functions are provided for the objects that conduct a voice call based on the shared document, so that each object can adjust the audio device line in real time according to its device configuration, which improves the efficiency of voice calls and collaboration based on the shared document and greatly improves the man-machine interaction efficiency.
Function option 5, call end option.
In some embodiments, the voice call toolbar includes a call end option, and the terminal is capable of ending the first voice call in response to a trigger operation of the call end option.
In some embodiments, only the originating object of the first voice call is able to end the first voice call by a trigger operation of the call end option.
The embodiment of the disclosure provides a schematic diagram of the call end option, referring to fig. 12, in which, in response to a trigger operation on the call end option 1201 in the voice call toolbar, the terminal displays a prompt 1202 "After exiting, the voice call will end. Confirm ending the voice call?", and if the "confirm" option is selected, the first voice call ends.
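The permission rule above — only the initiating object can end the call, while (as step 2303 later notes) a participating object merely exits — can be sketched as follows. The helper function and object names are hypothetical illustrations, not the patent's implementation.

```python
def end_call(requester: str, initiator: str, participants: set) -> bool:
    """Sketch of the call end option (hypothetical helper).

    Only the initiating object may end the first voice call for everyone;
    a participating object merely exits while the call continues."""
    if requester == initiator:
        participants.clear()  # the whole voice call ends
        return True
    participants.discard(requester)  # the participant leaves; call continues
    return False


members = {"object A", "object B", "object C"}
end_call("object B", "object A", members)  # participant exits; call continues
end_call("object A", "object A", members)  # initiating object ends the call
```

After the second trigger the member set is empty, i.e. the first voice call has ended for all objects.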
Through the technical scheme, in the process of carrying out voice call based on the shared document, various communication requirements, adjustment requirements and cooperation requirements in the voice call process can be met, a convenient function inlet can be provided, rich voice call functions are provided in the shared document in a concise and efficient display mode, and the man-machine interaction efficiency is greatly improved.
In step 405, during the process of the first voice call, the terminal displays, in the first shared document, a view angle border based on the document browsing view angle of the initiating object of the first voice call, where the view angle border is used to indicate the document area browsed by the initiating object.
In some embodiments, the document browsing view angle is used to provide the real-time usage state of the first shared document by the initiating object. The document browsing view angle refers to a document browsing progress, a document region of interest, or the like (e.g., the document region displayed on the terminal of the initiating object), where the document region may be represented in coordinate form.
In some embodiments, after the initiating object initiates the voice call based on the first shared document on its terminal, during the voice call the terminal synchronizes the document browsing view angle of the initiating object to the terminals participating in the voice call, so as to improve the efficiency of the voice call. Of course, the document browsing view angle can also include the document area where an editing operation of the initiating object on the first shared document is located, and the like, so that the participating objects can see the editing operation of the initiating object in time, further improving the efficiency of the voice call. For participating objects in the view following state, the terminal of the initiating object synchronizes the document browsing view angle to them; for participating objects that are not in, or have exited, the view following state, the terminal of the initiating object stops synchronizing the document browsing view angle to them, so as to reduce signaling interaction and ensure the stability of document sharing.
In the embodiment of the disclosure, based on the document browsing view angle, the real-time use state of the initiating object on the first shared document can be synchronously displayed in the process of voice communication, so that the aim of collaboration based on the shared document is fulfilled.
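The selective synchronization just described — a coordinate-form view angle broadcast only to followers — can be sketched as below. The `ViewAngle` representation and function names are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ViewAngle:
    """Document browsing view angle expressed as a document region in
    coordinate form (hypothetical representation)."""
    top: int
    bottom: int


def synchronize_view(view: ViewAngle, follow_states: dict) -> dict:
    """Sketch: the initiating object's terminal synchronizes its document
    browsing view angle only to participants in the view following state,
    reducing signaling interaction for the others."""
    return {obj: view for obj, following in follow_states.items() if following}


updates = synchronize_view(
    ViewAngle(top=120, bottom=840),
    {"object A": True, "object B": False, "object C": True},
)
```

Only the two following participants receive the view angle update; the non-following participant generates no signaling.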
In some embodiments, the view angle border can be displayed based on a variety of display elements, such as colored lines or a sparkle effect, which is not limited by the present disclosure.
In the embodiment of the disclosure, the view angle frame can clearly indicate the document area browsed by the initiating object, so that the communication cost among a plurality of objects is reduced, the communication efficiency based on the shared document in the voice communication process is improved, and the man-machine interaction efficiency is improved.
In step 406, during the process of the first voice call, the terminal displays, in the first shared document, view angle following information of at least one participant in the first voice call, where the view angle following information is used to indicate whether the participant follows a document browsing view angle of an initiating object of the first voice call.
In some embodiments, the terminal displays, in the first shared document, the view angle following information of at least one participating object of the first voice call and a following control option for setting the following state of the participating object. Based on this, the initiating object can learn the following state of each participating object in the voice call in real time through the view angle following information, and can then set the following state of a participating object through the following control option according to communication requirements, thereby maintaining the efficiency of the voice call based on the shared document.
In the embodiment of the disclosure, the terminal further displays view following information of the initiating object, where the view following information of the initiating object is used to indicate the number of participating objects currently following the document browsing view of the initiating object.
In some embodiments, the process of the initiating object setting the following state of a participating object may include cases 1 and 2 described below.
In case 1, if the view angle following information of the participating object indicates a non-following state and the following control option is displayed as an on function, the terminal responds to a trigger operation on the following control option by controlling the participating object to follow the document browsing view angle of the initiating object.
In some embodiments, for case 1 above, the terminal, in response to a trigger operation on the following control option, can promptly set the following state of any participating object that does not follow the document browsing view angle of the initiating object, so as to ensure that the objects share a consistent browsing view angle of the shared document and to effectively ensure the accuracy of information transmission between the objects. This efficient management mode effectively improves communication efficiency during a voice call based on the shared document.
In case 2, if the view angle following information of the participating object indicates a following state, the following control option is displayed as an exit function, and the terminal, in response to a trigger operation on the following control option, controls the participating object to exit following the document browsing view angle of the initiating object.
In some embodiments, for case 2 above, the terminal, in response to a trigger operation on the following control option, can set the following state of any participating object that is following the document browsing view angle of the initiating object as required, so as to flexibly adjust each object's browsing view angle of the shared document in various communication scenarios, further ensuring the flexibility of information transmission between the objects.
Through the technical scheme, by following the document browsing view angle, the accuracy of information transmission during the voice call is effectively maintained while each participating object's free use of the shared document is fully ensured; this efficient collaboration mode effectively improves the communication efficiency of voice calls based on the shared document and greatly improves the man-machine interaction efficiency.
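Cases 1 and 2 above form a small two-state toggle: the following control option's label and effect depend on the current following state. The sketch below uses hypothetical labels `"on"` and `"exit"` for the two display functions.

```python
def trigger_follow_control(following: bool):
    """Sketch of the following control option (hypothetical labels).

    Returns (new following state, label the option displays next)."""
    if following:
        # Case 2: the option reads "exit"; triggering exits the follow.
        return False, "on"
    # Case 1: the option reads "on"; triggering starts following.
    return True, "exit"


state, label = trigger_follow_control(False)  # case 1: start following
state, label = trigger_follow_control(state)  # case 2: exit following
```

Each trigger operation both flips the state and swaps the option's displayed function, so the option always offers the opposite of the current state.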
For ease of understanding, the present disclosure provides a schematic diagram of the view angle following information and the following control options, see fig. 13, wherein the view angle following information 1310 "Following the presenter's view" of a participating object indicates that the participating object is in a following state, and the following control option 1311 of that participating object is displayed as an exit function; the view angle following information 1320 "Not following the presenter's view" of a participating object indicates that the participating object is in a non-following state, and the following control option 1321 of that participating object is displayed as an on function; the view angle following information 1330 of the initiating object reads "You are the conference moderator; 3 people are following your document browsing view".
In some embodiments, in the terminal where a participating object is located, the display effect of the view angle following information of the participating object can change along with the change in the participating object's following state with respect to the initiating object. Illustratively, when the participating object changes from the following state to the non-following state in response to a trigger operation on the following control option, the view angle following information of the participating object (see the icon shown at 1310 in fig. 13) can change from color to gray. Accordingly, when changing from the non-following state to the following state, the view angle following information of the participating object (see the icon shown at 1310 in fig. 13) can change from gray to color.
In step 407, during the process of the first voice call, the terminal displays a cursor of an initiating object of the first voice call in the first shared document, and displays a cursor of a following object in at least one participating object of the first voice call.
Wherein the cursor is used to indicate the document content targeted by the object in the first shared document. In some embodiments, the terminal determines the position of the cursor in the first shared document based on an input device, e.g., from the pointer position of a mouse; in some embodiments, the terminal determines the position of the cursor in response to a trigger operation on its screen.
Wherein the following object refers to a participating object that follows the document browsing perspective of the initiating object.
In some embodiments, in the terminal where the initiating object is located, a cursor of the initiating object of the first voice call is displayed; in the terminal where the following object is located, a cursor of the initiating object and a cursor of the following object are displayed;
in other embodiments, the cursor of the initiating object and the cursor of the following object of the first voice call are displayed in the terminal of the initiating object, so that the content positions of the plurality of objects in the first shared document can be displayed in the terminal of the initiating object, the multi-object interaction function is realized in the content of the shared document, and the communication and collaboration efficiency of each object in the voice call process based on the shared document is further improved.
In some embodiments, the cursor of the initiating object and the view angle following information of a following object can be displayed based on associated display elements. For example, they can be displayed in the same or a similar following theme color or theme effect. In other embodiments, the following theme color can vary randomly among a plurality of colors, which is not limited by the present disclosure.
In other embodiments, the cursor of a following object can be displayed in a different color than the cursor of the initiating object, so as to distinguish the indications made by different objects in the first shared document.
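One simple way to realize the distinct-color requirement above is to draw each object's cursor color from a rotating palette. The palette values and function name below are hypothetical; the patent does not prescribe any particular scheme.

```python
import itertools


def assign_cursor_colors(initiating_object: str, following_objects: list) -> dict:
    """Sketch (hypothetical scheme): give the initiating object and each
    following object a distinct cursor color so indications in the first
    shared document can be told apart."""
    palette = itertools.cycle(["#d8262c", "#1f6fde", "#2e9e44", "#8a3fc4"])
    colors = {initiating_object: next(palette)}
    for obj in following_objects:
        colors[obj] = next(palette)
    return colors


colors = assign_cursor_colors("object A", ["object B", "object C"])
```

With fewer objects than palette entries, every cursor gets a unique color; a production scheme might instead hash object identifiers to colors to keep assignments stable across sessions.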
The disclosed embodiments provide a schematic diagram of the display effect of cursors and view angle following information, see fig. 14, wherein in display effect example 1401, the view angle following information 1402 of a following object is displayed, together with the cursor 1403 of the initiating object and an object icon 1404 (of a participating object or the initiating object), based on a black following theme color 1405; in display effect example 1405, the view angle following information 1406 of a following object is displayed, together with the cursor 1407 of the initiating object and an object icon 1408 (of a participating object or the initiating object), based on a colored following theme color (indicated by diagonal line fill).
Further, based on the above steps 404 to 407, the disclosure provides a schematic diagram of a shared document during a voice call, referring to fig. 15, where fig. 15 shows the display effect of the terminal where the initiating object is located; a view angle border 1501 (represented by a black bolded line) is displayed in the first shared document; the view angle following information 1502 of the initiating object is displayed in the first shared document; a voice call toolbar 1503 (described with reference to step 404) is also displayed in the first shared document; the cursor 1504 of the initiating object is also displayed in the first shared document.
The present disclosure provides another schematic view of a shared document during a voice call, referring to fig. 16, where fig. 16 shows the display effect of the terminal where a participating object is located; a view angle border 1601 (represented by a black bolded frame line) is displayed in the first shared document; the view angle following information 1602 of the participating object is displayed in the first shared document, the 1602 indicating that the participating object follows the document browsing view angle of the initiating object; a voice call toolbar 1603 (described with reference to step 404) is also displayed in the first shared document; the cursor 1604 of the initiating object and the cursor 1605 of the participating object are also displayed in the first shared document.
The display modes shown in fig. 7 to fig. 16 can be applied to a PC-side device; in other embodiments, where the terminal is a mobile terminal device, the display modes can be implemented as provided in fig. 17 to fig. 22 below. Note that the display modes shown in fig. 7 to 16 can also be applied to a mobile terminal device, and the display modes shown in fig. 17 to 22 below can also be applied to a PC-side device, which is not limited in this disclosure.
The present disclosure provides a schematic illustration of a voice call toolbar, see fig. 17, which includes a speaking object icon 1701, an invite option 1702, a microphone state setting option 1703, an audio device setting option 1704, and a call end option 1705, the speaking object icon 1701 indicating that the name of the object currently speaking is "object AA"; 1706 is the view angle following information of the initiating object.
The present disclosure provides a schematic diagram of the invite option and the object display option, see fig. 18; for the introduction of the voice call toolbar, refer to fig. 17. In response to a trigger operation on the invite option 1801, the voice call toolbar jumps to an invite panel 1802, above which an object display option 1803 is displayed; the display elements of the object display option 1803 include the avatar of the object currently speaking and the number of persons "4" participating in the first voice call. In response to a trigger operation on the object display option 1803, the display jumps to object icons 1804 of participating object A, participating object B, and participating object C, and their microphone identifiers 1805; participating object A and participating object C are in a speaking state, and participating object B is in a non-speaking state. The invite panel displays a permission setting option 1806, which displays the prompt "A voice call can be joined by sharing the following information" and the address information "XXX … …" of the first shared document; the permission setting option 1806 provides various permissions "browsable, commentable, editable" for the first shared document. A copy button 1807 for copying the address information to generate an invitation instruction is also displayed in the invite panel, and 1808 is the view angle following information of the initiating object.
The disclosed embodiments provide a schematic diagram of the microphone state setting option, see fig. 19, wherein the terminal sets the microphone to the corresponding state in response to a trigger operation on the microphone state setting option 1901 in the voice call toolbar; referring to fig. 19 (a), the microphone is in an open state; referring to fig. 19 (b), the microphone is in a closed state.
The embodiment of the present disclosure provides a schematic view of the audio device setting option, referring to fig. 20, in which the terminal displays a setting panel 2002 of the audio device in response to a trigger operation on the audio device setting option 2001 in the voice call toolbar; for the audio input device, microphone line 1 and microphone line 2 are provided in the setting panel 2002, with microphone line 1 selected; for the audio output device, speaker line 1 and speaker line 2 are provided in the setting panel 2002, with speaker line 1 selected; 2003 is the view angle following information of the initiating object.
The embodiment of the disclosure provides a schematic diagram of the call end option, referring to fig. 21, in which, in response to a trigger operation on the call end option 2101 in the voice call toolbar, the terminal displays a prompt message 2102 "After exiting, the voice call will end. Confirm ending the voice call?", and ends the first voice call when the "confirm" option is selected; 2103 is the view angle following information of the initiating object.
The present disclosure provides a schematic diagram of a shared document during a voice call, referring to fig. 22, where fig. 22 (a) shows the display effect of the terminal where the initiating object is located; the view angle following information 2201 of the initiating object is displayed in the first shared document; a voice call toolbar 2202 (described with reference to step 404) is also displayed in the first shared document. Fig. 22 (b) shows the display effect of the terminal where a participating object is located; the view angle following information and following control option 2203 of the participating object are displayed in the first shared document, the 2203 indicating that the participating object follows the document browsing view angle of the initiating object; a voice call toolbar 2204 (described with reference to step 404) is also displayed in the first shared document.
The above embodiment corresponding to fig. 4 describes the process of conducting a voice call based on a shared document from the perspective of the initiating object; next, some embodiments describe in detail the process of joining an ongoing voice call based on a shared document from the perspective of a participating object. Fig. 23 is a flowchart of a voice call method for a shared document according to an exemplary embodiment, which is performed by a terminal; as shown in fig. 23, the method includes the following steps 2301 to 2306.
Step 2301, the terminal displays a voice call option of the second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are engaged in a second voice call.
In some embodiments, the terminal may display the voice call option based on the second shared document in a different manner, with reference to display mode one and display mode two described below.
In display mode one, the terminal displays the voice call option of the second shared document on the second shared document.
In some embodiments, the terminal provides a function entry for directly joining the second voice call for an object that is browsing the second shared document by displaying the voice call option on the second shared document. In some embodiments, the voice call option can also indicate the number of objects that are engaged in the second voice call. For ease of understanding, the disclosure provides a schematic diagram of the voice call option, referring to fig. 24, where the terminal displays the second shared document 2401 (the document title and the document content are shown in the drawing); in a button column above the second shared document 2401, a voice call option 2402 is displayed; the remaining icons in the interface are described with reference to fig. 5 and are not repeated herein.
In display mode two, the terminal displays a voice call identifier on the document tag of the second shared document in a shared document list; based on a trigger operation on the document tag, the voice call option of the second shared document is displayed, the voice call identifier indicating that a plurality of objects of the second shared document are engaged in the second voice call.
Wherein the shared document list has a plurality of shared documents arranged in the form of document tags. In some embodiments, the shared document list provides a function entry for jumping to a shared document in the form of a document tag.
In some embodiments, the document tags in the shared document list may display the directory information of the shared documents layer by layer in the form of a directory tree. By displaying the voice call identifier in the top-level tag corresponding to the root directory of the second shared document, the terminal ensures that the voice call identifier is directly visible in the shared document list, thereby efficiently indicating the shared document in which a voice call is in progress, without requiring additional operations to check whether the second shared document is currently in a voice call.
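The propagation of the voice call identifier up to the top-level tag of the directory tree can be sketched as below. The mapping structure and function name are hypothetical; the patent does not specify how directory information is stored.

```python
def top_level_tags_in_call(documents_in_call, directory_of):
    """Sketch (hypothetical structure): given the shared documents currently
    in a voice call and a mapping from each document to its directory path
    (root first), return the top-level tags that should display the voice
    call identifier in the shared document list."""
    return {directory_of[doc][0] for doc in documents_in_call}


tags = top_level_tags_in_call(
    {"second shared document"},
    {"second shared document": [
        "document tag 3", "subfolder", "second shared document",
    ]},
)
```

Here the identifier surfaces on "document tag 3", the root-directory tag of the second shared document, so a voice call deep in the tree is still visible at the top level of the list.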
In some embodiments, the terminal can implement the process of displaying the voice call option of the second shared document based on a trigger operation on the document tag in various manners; that is, the terminal can provide various function entries for joining the voice call based on the document tag. The terminal is capable of displaying a function interface of the second shared document in response to a trigger operation on the document tag, the function interface including a voice call icon for providing the voice call option and a jump icon for jumping to the second shared document.
Function entry 1, voice call icon.
In some embodiments, the terminal can provide, directly in the function interface, a voice call icon that indicates that the second voice call is in progress and provides a voice call option for joining the second voice call. Based on this, a one-tap function entry for displaying the shared document and joining the voice call can be provided for the participating object, which reduces the operations required to join the voice call and improves the efficiency of conducting the voice call.
Function entry 2, jump icon.
Wherein the jump icon is used to jump to the second shared document.
In some embodiments, the terminal can display the second shared document in response to a trigger operation on the jump icon, and further display a voice call option on the second shared document based on the second shared document (similar to display mode one). Based on this, a separate, selectable mode of browsing the shared document and joining the voice call can be provided for the participating object, further improving the flexibility of conducting a voice call based on the shared document.
To facilitate understanding of display mode two, the present disclosure provides a schematic view of another voice call option, referring to fig. 25, where in the shared document list 2501, a plurality of document tags are displayed in an array; in the top-level tag 2502 of document tag 3, where the second shared document that is in a voice call is located, a voice call identifier 2503 is displayed, the voice call identifier 2503 indicating that a shared document under the top-level tag 2502 is in a voice call; in response to a trigger operation on the top-level tag 2502 of the document tag, a function panel 2504 is displayed, the function panel 2504 including the document title of the second shared document, a voice call icon 2505 for providing the voice call option, and a jump icon 2506.
The display manner provided in fig. 25 can be applied to a PC-side device; in other embodiments, for the case where the terminal is a mobile terminal device, all or part of the functions provided in display mode two may be implemented in the manner provided in fig. 26 below. The present disclosure provides a schematic view of another voice call option, referring to fig. 26, in which a plurality of document tags are displayed in an arrayed manner in the shared document list 2601, wherein a voice call identifier 2603 is displayed in the top-level tag 2602 of the document tag in which the second shared document that is in a voice call is located, the voice call identifier 2603 indicating that a shared document under the top-level tag 2602 is in a voice call; a function panel 2604 is displayed in response to a trigger operation on the top-level tag 2602 of the document tag, the function panel 2604 including the document title of the second shared document, and a voice call icon 2605 for providing the voice call option is displayed in response to a trigger operation on the function panel 2604.
It should be noted that, in some embodiments, the display manner provided in fig. 25 may also be applied to a mobile terminal device, and the display manner provided in fig. 26 may also be applied to a PC terminal device, which is not limited in this disclosure.
In step 2302, the terminal joins the second voice call in response to a trigger operation on the voice call option.
The description of the triggering operation refers to the description of the triggering operation in step 202, which is not described herein.
In some embodiments, the terminal displays the voice call option via function entry 1 in display mode two above; in this example, the terminal can display the second shared document and join the second voice call in response to a trigger operation on the voice call icon.
In step 2303, the terminal displays a voice call toolbar at the target location of the second shared document during the second voice call, where the voice call toolbar is used to implement a plurality of voice call functions.
In some embodiments, the voice call toolbar includes at least one of an object display option, an invite option, a microphone state setting option, an audio device setting option, and a call end option, thereby providing a plurality of voice call functions based on the plurality of function options. The voice call toolbar is described with reference to step 404 and is not described in detail herein. The voice call toolbar includes a call end option for providing the participating object with a function of exiting the second voice call; it can be understood that the participating object differs from the initiating object, which serves as the management role of the second voice call, and that a participating object exiting the voice call does not cause the voice call to end. In this example, the terminal where the participating object is located can exit the second voice call in response to a trigger operation on the call end option.
In some embodiments, the initiating object of the second voice call and the object icon of the participating object are displayed in different manners to distinguish between the initiating object sharing the view of document browsing and the participating object following the view of document browsing of the initiating object.
In step 2304, the terminal displays the second shared document based on the document browsing view angle of the initiating object of the second voice call.
In some embodiments, the terminal displays a view border in the second shared document based on a document view of an initiating object of the second voice call, the view border being used to indicate a document area browsed by the initiating object. The relevant description of the view frame and the document browsing view is referred to step 405, and will not be described herein.
In some embodiments, the terminal defaults to following the document browsing view angle of the initiating object when joining the second voice call.
In some embodiments, in the case where the participating object follows the document browsing perspective of the initiating object, the terminal is further capable of displaying the document region of the second shared document browsed by the participating object itself in a display region other than the perspective border. In other embodiments, the terminal may also be capable of displaying other shared documents browsed by the participant in a display area other than the view frame, which is not limited by the present disclosure.
In step 2305, the terminal displays view angle following information of the participant object of the local terminal and a following control option, where the following control option is used to set a following state of the participant object in the second shared document.
The home terminal is the terminal where the participating object is located.
In some embodiments, the process of the participating object setting its following state may include cases 1 and 2 described below.
In case 1, if the view angle following information of the participating object indicates a non-following state and the following control option is displayed as an on function, the terminal can respond to a trigger operation on the following control option by following the document browsing view angle of the initiating object.
In case 2, if the view angle following information of the participating object indicates a following state and the following control option is displayed as an exit function, the terminal can respond to a trigger operation on the following control option by exiting the following of the document browsing view angle of the initiating object.
In the embodiment of the present disclosure, the display manner of the view angle following information and the following control option of the participant refers to step 406 and fig. 13, which are not described herein.
According to the technical scheme, the terminal responds to the triggering operation of the following control option, the function of setting the following state of the initiating object can be provided for any participating object, so that the participating object can freely switch the document browsing view angle, the operability of each object on the shared document browsing view angle is ensured, each object in the voice call can freely use the shared document while carrying out the voice call, and the man-machine interaction efficiency is effectively improved.
In step 2306, during the second voice call, the terminal displays a cursor of the initiation target of the second voice call and a cursor of the participation target in the second shared document.
This step refers to step 407.
In some embodiments, the terminal displays, for the participating object, the content position targeted by the initiating object in the second shared document by displaying the cursor of the initiating object and the cursor of the participating object, thereby implementing a multi-object interaction function in the content of the shared document, and further improving the efficiency of communication and collaboration of each object in the process of performing voice communication based on the shared document.
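The cursor display described above can be sketched as an overlay computed on the participant's terminal from a shared map of cursor positions. This is a hypothetical data model for illustration; the disclosure does not specify one:

```python
def render_cursors(initiator_id, participant_id, cursor_positions):
    """Return the cursor overlays a participant's terminal would draw.

    cursor_positions maps object id -> (line, column) in the shared
    document. Both the initiator's and the local participant's cursors
    are shown, so each side can see which content the other targets.
    """
    overlays = []
    for object_id in (initiator_id, participant_id):
        if object_id in cursor_positions:
            role = "initiator" if object_id == initiator_id else "participant"
            overlays.append({"object": object_id,
                             "role": role,
                             "position": cursor_positions[object_id]})
    return overlays
```

Each terminal would refresh these overlays as position updates arrive over the call session.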
Through the technical scheme, the plurality of objects can carry out voice call based on the shared document, so that each object in the voice call can freely use the shared document when carrying out voice call, frequent switching between the shared document and the voice call is not needed, and the man-machine interaction efficiency is effectively improved.
Furthermore, a plurality of functional entries for joining the voice call are provided for the participating objects, various scenes for carrying out the voice call based on the shared document are fully covered, and the man-machine interaction efficiency is further improved.
Fig. 27 is a block diagram illustrating a voice call apparatus sharing a document according to an exemplary embodiment. Referring to fig. 27, the apparatus includes:
a display unit 2701 configured to display a call initiation option on a first shared document for providing a document service for a plurality of objects;
an initiating unit 2702 configured to perform a trigger operation in response to the call initiation option, initiate a voice call request based on at least one target object of the first shared document, the target object being an object that is using the first shared document;
the call unit 2703 is configured to perform a first voice call in a case where any one of the target objects accepts the voice call request.
In one possible implementation, the initiating unit 2702 is configured to perform:
responding to the triggering operation of the call initiation option, and initiating a voice call request to all target objects of the first shared document;
or alternatively,
and in response to the triggering operation of the call initiation option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and in response to the selection operation of part of target objects in all target objects, initiating the voice call request to the part of target objects.
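The two initiation branches above — calling every target object, or only a selected subset — can be sketched as follows (an illustrative sketch; the function and parameter names are assumptions):

```python
def initiate_voice_call(targets, selected=None):
    """Build the recipient list for a voice call request.

    With no selection, the request goes to every target object currently
    using the first shared document; with a selection from the target
    object list, only the chosen subset is called.
    """
    if selected is None:
        return list(targets)  # branch 1: request sent to all target objects
    chosen = set(selected)
    # branch 2: request sent only to the selected part of the targets,
    # preserving the display order of the target object list
    return [t for t in targets if t in chosen]
```

The returned list would then be handed to the call unit, which starts the first voice call once any recipient accepts.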
In one possible implementation manner, the voice call device for sharing a document further includes:
and a tool display unit configured to display a voice call toolbar for implementing a plurality of voice call functions at a target position of the first shared document during the progress of the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
and an utterance display unit configured to display an icon to be uttered in the first voice call at a specified position of the voice call toolbar.
In one possible embodiment, the voice call toolbar includes an object display option, and the voice call apparatus of the shared document further includes:
and an object display unit configured to perform an object icon displaying a plurality of participation objects of the first voice call in response to a trigger operation of the object display option.
In one possible implementation, the voice call toolbar includes an invite option, and the voice call apparatus of the shared document further includes:
an invitation unit configured to perform a triggering operation in response to the invitation option, displaying address information of the first shared document and a right setting option for the first shared document for an object to be invited;
And sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
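The invitation flow pairs the document's address information with the permission chosen for the invitee. A minimal sketch, assuming a "read"/"edit" permission model and field names that are not specified in the disclosure:

```python
def build_invite_request(document_address, permission, invitee):
    """Assemble an invitation to join the document's voice call.

    The request carries the shared document's address information
    together with the permission set for the object to be invited.
    """
    allowed = {"read", "edit"}  # assumed permission levels
    if permission not in allowed:
        raise ValueError(f"unknown permission: {permission}")
    return {"to": invitee,
            "address": document_address,
            "permission": permission,
            "action": "join_voice_call"}
```

Sending this request invites the object to open the first shared document with the granted permission and join the ongoing first voice call.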
In one possible embodiment, the voice call toolbar includes a microphone state setting option, and the voice call apparatus of the shared document further includes:
and a microphone state setting unit configured to perform a trigger operation in response to the microphone state setting option to set the microphone of the local device to a corresponding state.
In one possible implementation, the voice call toolbar includes an audio device setting option, and the voice call apparatus of the shared document further includes:
an audio device setting unit configured to perform a setting operation of setting an audio device employed by the first voice call on the local device in response to the setting operation of the audio device setting option.
In one possible implementation, the voice call toolbar includes a call end option, and the voice call device of the shared document further includes:
and an ending unit configured to perform a trigger operation in response to the call ending option to end the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a view angle display unit configured to perform, in the first shared document, displaying view angle following information of at least one participant object of the first voice call, the view angle following information indicating whether the participant object follows a document browsing view angle of an originating object of the first voice call.
In one possible implementation manner, the voice call device for sharing a document further includes:
a view angle control unit configured to perform, in the first shared document, displaying view angle following information of at least one participant object of the first voice call and a following control option for setting a following state of the participant object;
if the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the participating object is controlled to follow the document browsing view angle of the initiating object in response to the triggering operation of the following control option;
and if the view angle following information of the participating object is in a following state, the following control option is displayed as an exiting function, and the participating object is controlled to exit the following of the document browsing view angle of the initiating object in response to the triggering operation of the following control option.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a bezel display unit configured to perform a document browsing view based on an originating object of the first voice call, display a view bezel in the first shared document, the view bezel indicating a document area browsed by the originating object.
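The view bezel is essentially the initiator's visible window clamped to the document bounds. A simplified sketch assuming vertical scrolling only (the disclosure does not fix a geometry model):

```python
def view_bezel(scroll_top, viewport_height, doc_height):
    """Compute the document region the initiating object is browsing.

    Participants' terminals draw this region as a border in the shared
    document, indicating which area the initiator currently reads.
    """
    top = max(0, min(scroll_top, doc_height))
    bottom = max(top, min(scroll_top + viewport_height, doc_height))
    return {"top": top, "bottom": bottom}
```

A terminal following the initiator would recompute the bezel whenever the initiator's scroll position changes.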
In one possible implementation manner, the voice call device for sharing a document further includes:
and a cursor display unit configured to execute a cursor for displaying an originating object of the first voice call in the first shared document, and a cursor for following an object in the participating objects of the first voice call.
In one possible implementation manner, the object icons of the initiation object and the participation object of the first voice call are displayed in different manners.
Through the technical scheme, the plurality of objects can carry out voice call based on the shared document, so that each object in the voice call can freely use the shared document when carrying out voice call, frequent switching between the shared document and the voice call is not needed, and the man-machine interaction efficiency is effectively improved.
It should be noted that the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the voice call apparatus for a shared document provided in the above embodiment belongs to the same concept as the embodiment of the voice call method for a shared document; its specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 28 is a block diagram illustrating a voice call apparatus sharing a document according to an exemplary embodiment. Referring to fig. 28, the apparatus includes:
a display unit 2801 configured to display a voice call option based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are conducting a second voice call;
the joining call unit 2802 is configured to perform a joining of the second voice call in response to a trigger operation for the voice call option.
In one possible embodiment, the display unit 2801 includes:
and the first display module is configured to be executed on the second shared document and display voice call options of the second shared document.
In one possible embodiment, the display unit 2801 includes:
a second display module configured to perform displaying a voice call identifier on a document tag of the second shared document in the shared document list, the voice call identifier indicating that a plurality of objects of the second shared document are engaged in a second voice call;
and displaying the voice call option of the second shared document based on the triggering operation of the document tag.
In one possible implementation, the second display module is configured to perform:
responding to the triggering operation of the document label, displaying a functional interface of the second shared document, wherein the functional interface comprises a voice call icon and a skip icon, the voice call icon is used for providing the voice call option, and the skip icon is used for skipping to the second shared document;
responding to the triggering operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the joining call unit 2802 is configured to perform:
and responding to the triggering operation of the voice call icon, displaying the second shared document and joining the second voice call.
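The functional interface of the document tag thus dispatches two icons with different effects: the jump icon only opens the second shared document, while the voice call icon both opens it and joins the call. A sketch (action names are assumptions):

```python
def handle_tag_action(icon):
    """Dispatch the icons on a document tag's functional interface.

    Returns the list of UI actions the terminal performs in response
    to triggering the given icon.
    """
    if icon == "jump":
        # Jump icon: only display the second shared document.
        return ["display_document"]
    if icon == "voice_call":
        # Voice call icon: display the document and join the call.
        return ["display_document", "join_voice_call"]
    raise ValueError(f"unknown icon: {icon}")
```

Either way the document is displayed, so the voice call option on the document itself remains available afterwards.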
In one possible implementation manner, the voice call device for sharing a document further includes:
and a tool display unit configured to display a voice call toolbar for implementing a plurality of voice call functions at a target position of the second shared document during the progress of the second voice call.
In one possible implementation, the voice call toolbar includes a call end option, and the apparatus further includes:
And an exit unit configured to perform an exit from the second voice call in response to a trigger operation to the call end option.
In one possible implementation manner, the voice call device for sharing a document further includes:
and a view angle display unit configured to perform a document browsing view angle based on an initiation object of the second voice call, and display the second shared document.
In one possible implementation manner, the voice call device for sharing a document further includes:
a view angle control unit configured to display, in the second shared document, view angle following information of the participating object of the local terminal and a following control option for setting the following state of the participating object;
if the view angle following information of the participating object is in a non-following state, the following control option is displayed as an opening function, and the document browsing view angle of the initiating object is followed in response to the triggering operation of the following control option;
and if the view angle following information of the participating object is in a following state, displaying the following control option as an exiting function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view angle of the initiating object.
In one possible implementation manner, the object icons of the initiation object and the participation object of the second voice call are displayed in different manners.
Through the technical scheme, the plurality of objects can carry out voice call based on the shared document, so that each object in the voice call can freely use the shared document when carrying out voice call, frequent switching between the shared document and the voice call is not needed, and the man-machine interaction efficiency is effectively improved.
It should be noted that the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the voice call apparatus for a shared document provided in the above embodiment belongs to the same concept as the embodiment of the voice call method for a shared document; its specific implementation process is detailed in the method embodiment and is not repeated here.
In an embodiment of the present disclosure, there is also provided an electronic device including a processor and a memory for storing at least one computer program loaded and executed by the processor to implement the above-described voice call method for a shared document. The electronic device can be implemented as the terminal described above. Fig. 29 is a block diagram illustrating a configuration of a terminal according to an exemplary embodiment. Referring to fig. 29, a terminal 2900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2900 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2900 includes: a processor 2901 and a memory 2902.
The processor 2901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 2901 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2901 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 2901 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2901 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 2902 may include one or more computer-readable storage media, which may be non-transitory. Memory 2902 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2902 is used to store at least one program code for execution by processor 2901 to implement processes performed by a terminal in a method of voice call of a shared document provided by an embodiment of a method in the present disclosure.
In some embodiments, the terminal 2900 may also optionally include: a peripheral interface 2903, and at least one peripheral. The processor 2901, memory 2902, and peripheral interface 2903 may be connected by a bus or signal line. Individual peripheral devices may be connected to peripheral device interface 2903 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2904, a display screen 2905, a camera assembly 2906, an audio circuit 2907, a positioning assembly 2908, and a power source 2909.
Peripheral interface 2903 may be used to connect at least one Input/Output (I/O) related peripheral to processor 2901 and memory 2902. In some embodiments, the processor 2901, memory 2902, and peripheral interface 2903 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 2901, memory 2902, and peripheral interface 2903 may be implemented on separate chips or circuit boards, as the disclosed embodiments are not limited in this regard.
The radio frequency circuit 2904 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2904 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. In some embodiments, the radio frequency circuit 2904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2904 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 2905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2905 is a touch display, the display 2905 also has the ability to collect touch signals at or above the surface of the display 2905. The touch signal may be input as a control signal to the processor 2901 for processing. At this point, the display 2905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 2905, disposed on the front panel of the terminal 2900; in other embodiments, there may be at least two displays 2905, each disposed on a different surface of the terminal 2900 or in a folded design; in other embodiments, the display 2905 may be a flexible display disposed on a curved surface or a folded surface of the terminal 2900. The display 2905 may even be configured in a non-rectangular, irregular pattern, i.e., a shaped screen. The display 2905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
Camera assembly 2906 is used to capture images or video. In some embodiments, camera assembly 2906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, Virtual Reality (VR) shooting, or other fused shooting functions. In some embodiments, camera assembly 2906 may also include a flash. The flash may be a single-color temperature flash or a dual-color temperature flash. A dual-color temperature flash is a combination of a warm light flash and a cold light flash, and can be used for light compensation under different color temperatures.
Audio circuitry 2907 may include a microphone and a speaker. The microphone is used to collect sound waves of a user and the environment, and convert the sound waves into electrical signals for input to the processor 2901 for processing, or input to the radio frequency circuit 2904 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal 2900, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2901 or the radio frequency circuit 2904 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2907 may also include a headphone jack.
The positioning component 2908 is used to position the current geographic location of the terminal 2900 to enable navigation or LBS (Location Based Service, location-based services).
The power supply 2909 is used to power the various components in the terminal 2900. The power source 2909 may be alternating current, direct current, disposable or rechargeable. When the power source 2909 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 2900 further includes one or more sensors 2910. The one or more sensors 2910 include, but are not limited to: acceleration sensor 2911, gyroscope sensor 2912, pressure sensor 2913, fingerprint sensor 2914, optical sensor 2915, and proximity sensor 2916.
The acceleration sensor 2911 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 2900. For example, the acceleration sensor 2911 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 2901 may control the display screen 2905 to display the user page in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2911. Acceleration sensor 2911 may also be used for the acquisition of motion data of a game or user.
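The landscape/portrait decision described above can be sketched from the gravity components alone. A simplified illustration (two axes only; real implementations also consider the z axis and hysteresis):

```python
def orientation_from_gravity(gx, gy):
    """Choose a UI orientation from gravity acceleration components.

    If gravity projects mainly onto the device's y axis, the terminal
    is held upright and the UI is shown in portrait view; otherwise it
    is on its side and the UI is shown in landscape view.
    """
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

The processor would re-evaluate this whenever the acceleration sensor reports a significant change.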
The gyro sensor 2912 may detect a body direction and a rotation angle of the terminal 2900, and the gyro sensor 2912 may collect a 3D motion of the user on the terminal 2900 in cooperation with the acceleration sensor 2911. The processor 2901 may perform the following functions based on the data collected by the gyro sensor 2912: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 2913 may be disposed at a side frame of the terminal 2900 and/or at a lower layer of the display screen 2905. When the pressure sensor 2913 is disposed at a side frame of the terminal 2900, a grip signal of the terminal 2900 by a user may be detected, and the processor 2901 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 2913. When the pressure sensor 2913 is disposed at the lower layer of the display screen 2905, the processor 2901 controls the operability control on the UI page according to the pressure operation of the user on the display screen 2905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
Fingerprint sensor 2914 is used to capture a user's fingerprint; the user's identity is then recognized from the captured fingerprint either by the processor 2901 or by the fingerprint sensor 2914 itself. Upon recognizing the user's identity as a trusted identity, the processor 2901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2914 may be disposed on the front, back, or side of the terminal 2900. When a physical key or vendor Logo is provided on the terminal 2900, the fingerprint sensor 2914 may be integrated with the physical key or vendor Logo.
The optical sensor 2915 is used to collect the ambient light intensity. In one embodiment, the processor 2901 may control the display brightness of the display screen 2905 based on the ambient light intensity collected by the optical sensor 2915. Specifically, when the ambient light intensity is high, the display brightness of the display screen 2905 is increased; when the ambient light intensity is low, the display brightness of the display screen 2905 is decreased. In another embodiment, the processor 2901 may also dynamically adjust the capture parameters of the camera assembly 2906 based on the ambient light intensity collected by the optical sensor 2915.
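The brightness adjustment can be sketched as a clamped mapping from ambient light to a brightness level. The linear mapping and the constants below are assumptions for illustration only:

```python
def display_brightness(ambient_lux, lo=0.2, hi=1.0, max_lux=1000.0):
    """Map ambient light intensity to a display brightness in [lo, hi].

    Brighter surroundings raise the screen brightness and darker
    surroundings lower it, as the optical-sensor passage describes.
    """
    frac = max(0.0, min(ambient_lux / max_lux, 1.0))  # clamp to [0, 1]
    return lo + (hi - lo) * frac
```

Real devices typically use a perceptual (non-linear) curve and smooth transitions, but the clamped monotone mapping captures the behavior described.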
The proximity sensor 2916, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 2900. The proximity sensor 2916 is used to collect the distance between the user and the front of the terminal 2900. In one embodiment, when the proximity sensor 2916 detects a gradual decrease in the distance between the user and the front face of the terminal 2900, the processor 2901 controls the display 2905 to switch from the bright screen state to the off screen state; when the proximity sensor 2916 detects that the distance between the user and the front surface of the terminal 2900 gradually increases, the processor 2901 controls the display 2905 to switch from the off-screen state to the on-screen state.
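The proximity-sensor behavior is a small state machine driven by the distance trend. A minimal sketch (thresholding and debouncing omitted for brevity):

```python
def screen_state(prev_distance, new_distance, state):
    """Toggle the screen when the user approaches or moves away.

    A decreasing distance switches a bright screen off (e.g. the phone
    raised to the ear); an increasing distance switches it back on.
    """
    if new_distance < prev_distance and state == "on":
        return "off"
    if new_distance > prev_distance and state == "off":
        return "on"
    return state  # no trend change, or already in the target state
```

The processor would feed this with successive proximity readings to drive the display between the bright-screen and off-screen states.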
Those skilled in the art will appreciate that the configuration shown in fig. 29 is not limiting of the terminal 2900 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium including program code, such as the memory 2902 including program code, executable by the processor 2901 of the terminal 2900 to complete the voice call method of the shared document is also provided. Alternatively, the computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Compact-Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided that includes one or more instructions that are executable by one or more processors of an electronic device to enable the electronic device to perform the above-described method of sharing a document for voice conversation.
In some embodiments, the computer program related to the embodiments of the present disclosure may be deployed to be executed on one computer device or on multiple computer devices located at one site, or alternatively, may be executed on multiple computer devices distributed across multiple sites and interconnected by a communication network, where the multiple computer devices distributed across multiple sites and interconnected by a communication network may constitute a blockchain system.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (24)

1. A method of voice telephony for sharing documents, the method comprising:
displaying a call initiation option on a first shared document, wherein the first shared document is used for providing document services for a plurality of objects;
responding to triggering operation of the call initiation option, and initiating a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document;
Under the condition that any target object accepts the voice call request, performing a first voice call;
displaying view angle following information of at least one participating object of the first voice call in the first shared document, wherein the view angle following information is used for indicating whether the participating object follows a document browsing view angle of an initiating object of the first voice call or not, and the document browsing view angle is used for providing a real-time use state of the initiating object on the first shared document;
when the participation object follows the document browsing view angle of the initiation object, displaying a cursor of the initiation object and a cursor of the participation object in a terminal of the participation object; and displaying a view angle border in the first shared document based on the document browsing view angle of the initiating object, wherein the view angle border is used for indicating a document area browsed by the initiating object.
2. The method for voice call of a shared document according to claim 1, wherein said initiating a voice call request based on at least one target object of the first shared document in response to a triggering operation of the call initiation option comprises:
Responding to the triggering operation of the call initiation options, and initiating a voice call request to all target objects of the first shared document;
or alternatively,
and responding to the triggering operation of the call initiation option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and responding to the selection operation of part of target objects in all target objects, and initiating the voice call request to the part of target objects.
3. The voice call method for a shared document according to claim 1, further comprising:
displaying, during the first voice call, a voice call toolbar at a target position of the first shared document, wherein the voice call toolbar provides a plurality of voice call functions.
4. The voice call method for a shared document according to claim 3, further comprising:
displaying, at a specified position of the voice call toolbar, an icon of the object that is currently speaking in the first voice call.
5. The voice call method for a shared document according to claim 3, wherein the voice call toolbar comprises an object display option, and the method further comprises:
in response to a triggering operation on the object display option, displaying object icons of a plurality of participating objects of the first voice call.
6. The voice call method for a shared document according to claim 3, wherein the voice call toolbar comprises an invite option, and the method further comprises:
in response to a triggering operation on the invite option, displaying address information of the first shared document and a permission setting option for an object to be invited to the first shared document;
sending an invitation request to the object to be invited based on a setting operation on the permission setting option and the address information, wherein the invitation request invites the object to join the first voice call of the first shared document.
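Claim 6 combines the document address and a chosen permission level into the invitation request. A hedged sketch; the permission values and field names are assumptions for illustration, not terms from the patent:

```python
def build_invitation(doc_address, invitee, permission):
    """Claim 6 sketch: the invite option shows the document's address and a
    permission setting for the invitee; the resulting request invites them
    into the ongoing voice call of the shared document."""
    if permission not in {"read", "edit"}:  # illustrative permission levels
        raise ValueError("unknown permission")
    return {
        "address": doc_address,   # address information of the shared document
        "invitee": invitee,
        "permission": permission, # result of the permission setting operation
        "join_call": True,        # the request invites them into the call
    }
```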
7. The voice call method for a shared document according to claim 3, wherein the voice call toolbar comprises a microphone state setting option, and the method further comprises:
in response to a triggering operation on the microphone state setting option, setting the microphone of the local device to the corresponding state.
8. The voice call method for a shared document according to claim 3, wherein the voice call toolbar comprises an audio device setting option, and the method further comprises:
in response to a setting operation on the audio device setting option, setting the audio device used for the first voice call on the local device.
9. The voice call method for a shared document according to claim 3, wherein the voice call toolbar comprises a call end option, and the method further comprises:
in response to a triggering operation on the call end option, ending the first voice call.
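The toolbar options of claims 7 to 9 (microphone state, audio device selection, ending the call) can be modeled as simple local state on the toolbar; all names below are illustrative, not from the patent:

```python
class VoiceCallToolbar:
    """Sketch of claims 7-9: per-call toolbar options on the local device."""

    def __init__(self, call_state):
        self.call_state = call_state     # shared call state, e.g. {"active": True}
        self.mic_on = True
        self.audio_device = "default"

    def set_mic(self, on):
        self.mic_on = on                 # claim 7: set the local microphone state

    def set_audio_device(self, name):
        self.audio_device = name         # claim 8: device used by the voice call

    def end_call(self):
        self.call_state["active"] = False  # claim 9: end the first voice call
```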
10. The voice call method for a shared document according to claim 1, further comprising:
displaying, in the first shared document, view-following information and a follow control option of at least one participating object of the first voice call, wherein the follow control option sets the following state of the participating object;
if the view-following information of the participating object indicates a non-following state, displaying the follow control option as a start-following function, and, in response to a triggering operation on the follow control option, controlling the participating object to follow the document browsing view of the initiating object;
if the view-following information of the participating object indicates a following state, displaying the follow control option as an exit function, and, in response to a triggering operation on the follow control option, controlling the participating object to exit the following of the document browsing view of the initiating object.
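Claim 10 describes a single control whose label and effect depend on the current follow state, i.e. a two-state toggle. A minimal sketch, with invented labels:

```python
def follow_control(following):
    """Claim 10 sketch: the follow control shows a start-following function
    when not following and an exit function when following; triggering it
    flips the participant's follow state."""
    label = "exit following" if following else "start following"
    return label, (not following)   # (displayed function, state after trigger)
```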
11. The voice call method for a shared document according to claim 1, further comprising:
displaying, in the first shared document, a cursor of the initiating object of the first voice call and a cursor of each following object among the participating objects of the first voice call.
12. The voice call method for a shared document according to claim 1, wherein object icons of the initiating object and the participating objects of the first voice call are displayed in different manners.
13. A voice call method for a shared document, the method comprising:
displaying, based on a second shared document, a voice call option of the second shared document, wherein the voice call option indicates that a plurality of objects of the second shared document are engaged in a second voice call;
joining the second voice call in response to a triggering operation on the voice call option;
displaying the second shared document based on a document browsing view of an initiating object of the second voice call, wherein the document browsing view provides the real-time usage state of the second shared document by the initiating object; displaying a view border in the second shared document based on the document browsing view of the initiating object, wherein the view border indicates the document area being browsed by the initiating object; and displaying, in the second shared document, a cursor of the initiating object and a cursor of the local participating object.
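On the joining side (claim 13), rendering the document "based on the initiating object's browsing view" implies deriving the bordered region from the initiator's scroll position and viewport size. One plausible way to compute it, with invented parameters and a simple 1-D document model:

```python
def view_border_region(initiator_scroll, viewport_height, doc_length):
    """Claim 13 sketch: the region of the document to outline with the view
    border, derived from the initiator's browsing position. Positions are in
    document units; clamping keeps the region inside the document."""
    top = max(0, min(initiator_scroll, doc_length - viewport_height))
    return (top, top + viewport_height)  # (start, end) of the bordered area
```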
14. The voice call method for a shared document according to claim 13, wherein the displaying, based on a second shared document, a voice call option of the second shared document comprises:
displaying the voice call option of the second shared document on the second shared document.
15. The voice call method for a shared document according to claim 13, wherein the displaying, based on a second shared document, a voice call option of the second shared document comprises:
displaying a voice call identifier on a document tab of the second shared document in a shared document list, wherein the voice call identifier indicates that a plurality of objects of the second shared document are engaged in the second voice call;
displaying the voice call option of the second shared document based on a triggering operation on the document tab.
16. The voice call method for a shared document according to claim 15, wherein the displaying the voice call option of the second shared document based on a triggering operation on the document tab comprises:
in response to the triggering operation on the document tab, displaying a function interface of the second shared document, wherein the function interface comprises a voice call icon and a jump icon, the voice call icon provides the voice call option, and the jump icon is used to jump to the second shared document;
in response to a triggering operation on the jump icon, displaying the second shared document and displaying the voice call option on the second shared document;
wherein the joining the second voice call in response to a triggering operation on the voice call option comprises:
in response to a triggering operation on the voice call icon, displaying the second shared document and joining the second voice call.
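Claims 15 and 16 describe a small navigation flow: a call badge on the document tab opens a function interface offering two paths, jump to the document or join the call directly. A sketch of that dispatch, with invented action names:

```python
def on_document_tab(doc):
    """Claims 15-16 sketch: clicking a tab that carries a voice call
    identifier opens a function interface with two icons."""
    if not doc.get("call_in_progress"):
        return None  # no identifier shown, no function interface
    return {"voice_call_icon": "available", "jump_icon": "available"}

def on_icon(icon):
    """Resulting actions: the jump icon only opens the document (where the
    voice call option is shown); the call icon opens it AND joins the call."""
    if icon == "jump_icon":
        return ["show_document", "show_voice_call_option"]
    if icon == "voice_call_icon":
        return ["show_document", "join_second_voice_call"]
    raise ValueError("unknown icon")
```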
17. The voice call method for a shared document according to claim 13, further comprising:
displaying, during the second voice call, a voice call toolbar at a target position of the second shared document, wherein the voice call toolbar provides a plurality of voice call functions.
18. The voice call method for a shared document according to claim 17, wherein the voice call toolbar comprises a call end option, and the method further comprises:
in response to a triggering operation on the call end option, exiting the second voice call.
19. The voice call method for a shared document according to claim 13, further comprising:
displaying, in the second shared document, view-following information and a follow control option of the local participating object, wherein the follow control option sets the following state of the participating object;
if the view-following information of the participating object indicates a non-following state, displaying the follow control option as a start-following function, and, in response to a triggering operation on the follow control option, following the document browsing view of the initiating object;
if the view-following information of the participating object indicates a following state, displaying the follow control option as an exit function, and, in response to a triggering operation on the follow control option, exiting the following of the document browsing view of the initiating object.
20. The voice call method for a shared document according to claim 13, wherein object icons of the initiating object and the participating objects of the second voice call are displayed in different manners.
21. A voice call apparatus for a shared document, the apparatus comprising:
a display unit configured to display a call initiation option on a first shared document, wherein the first shared document provides a document service for a plurality of objects;
an initiating unit configured to initiate, in response to a triggering operation on the call initiation option, a voice call request to at least one target object of the first shared document, wherein a target object is an object that is using the first shared document;
a call unit configured to perform a first voice call when any one of the target objects accepts the voice call request;
a view display unit configured to display, in the first shared document, view-following information of at least one participating object of the first voice call, wherein the view-following information indicates whether the participating object follows a document browsing view of an initiating object of the first voice call, and the document browsing view provides the real-time usage state of the first shared document by the initiating object;
wherein the apparatus is further configured to display, in a terminal of the participating object, a cursor of the initiating object and a cursor of the participating object when the participating object follows the document browsing view of the initiating object; and to display a view border in the first shared document based on the document browsing view of the initiating object, wherein the view border indicates the document area being browsed by the initiating object.
22. A voice call apparatus for a shared document, the apparatus comprising:
a display unit configured to display, based on a second shared document, a voice call option of the second shared document, wherein the voice call option indicates that a plurality of objects of the second shared document are engaged in a second voice call;
a call joining unit configured to join the second voice call in response to a triggering operation on the voice call option;
wherein the apparatus is further configured to display the second shared document based on a document browsing view of an initiating object of the second voice call, the document browsing view providing the real-time usage state of the second shared document by the initiating object; to display a view border in the second shared document based on the document browsing view of the initiating object, wherein the view border indicates the document area being browsed by the initiating object; and to display, in the second shared document, a cursor of the initiating object and a cursor of the local participating object.
23. An electronic device, comprising:
one or more processors;
a memory for storing program code executable by the processors;
wherein the processors are configured to execute the program code to implement the voice call method for a shared document according to any one of claims 1 to 20.
24. A computer-readable storage medium, wherein program code in the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the voice call method for a shared document according to any one of claims 1 to 20.
CN202210976170.8A 2022-08-15 2022-08-15 Voice call method, device, electronic equipment and storage medium for sharing document Active CN115348240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210976170.8A CN115348240B (en) 2022-08-15 2022-08-15 Voice call method, device, electronic equipment and storage medium for sharing document


Publications (2)

Publication Number Publication Date
CN115348240A (en) 2022-11-15
CN115348240B (en) 2023-11-21

Family

ID=83951644


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144074A * 2018-11-05 2020-05-12 腾讯科技(深圳)有限公司 Document cooperation method and device, computer readable storage medium and computer equipment
CN109800594A * 2018-12-14 2019-05-24 平安普惠企业管理有限公司 Document access authority management method, device and computer equipment
CN109669924A * 2018-12-24 2019-04-23 天津字节跳动科技有限公司 Sharing method, device, electronic equipment and the storage medium of online document
CN109976617A * 2019-04-03 2019-07-05 腾讯科技(深圳)有限公司 Document display method and apparatus
CN113157168A * 2019-04-03 2021-07-23 腾讯科技(深圳)有限公司 Document display method and device
CN114461580A * 2021-12-23 2022-05-10 北京达佳互联信息技术有限公司 Online document sharing method and device, electronic equipment and storage medium
CN114371896A * 2021-12-30 2022-04-19 北京字跳网络技术有限公司 Prompting method, device, equipment and medium based on document sharing
CN114398858A * 2022-01-06 2022-04-26 腾讯科技(深圳)有限公司 Document display method, related device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant