CN115348240A - Voice call method and device for shared document, electronic equipment and storage medium - Google Patents

Info

Publication number
CN115348240A
Authority
CN
China
Prior art keywords
voice call
document
option
shared document
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210976170.8A
Other languages
Chinese (zh)
Other versions
CN115348240B (en)
Inventor
马秋晨
付硕
朱龙
陈加新
赵伊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210976170.8A priority Critical patent/CN115348240B/en
Publication of CN115348240A publication Critical patent/CN115348240A/en
Application granted granted Critical
Publication of CN115348240B publication Critical patent/CN115348240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L 65/4015 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference, where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/141 - Systems for two-way working between two video terminals, e.g. videophone

Abstract

The disclosure relates to a voice call method and apparatus for a shared document, an electronic device, and a storage medium, and belongs to the field of network technologies. In the embodiments of the disclosure, a call initiation option can be displayed on a shared document, so that, in response to a trigger operation on the call initiation option, a voice call request is initiated to at least one object using the shared document, enabling a voice call based on that document. Correspondingly, the disclosure can display a voice call option for a shared document in which a voice call is already in progress, so that an object can conveniently join that call. Through this technical scheme, a plurality of objects can conduct a voice call based on a shared document, so that each object in the call can freely use the shared document while talking, without frequently switching between the shared document and the voice call, which effectively improves human-computer interaction efficiency.

Description

Voice call method and device for shared document, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a method and an apparatus for voice call of a shared document, an electronic device, and a storage medium.
Background
With the development of network technology, multiple users can communicate by initiating an audio-video conference, during which a user can also share a document so that the other participants can see it and follow the corresponding conference content. However, when a document is shared in such an audio-video conference, the other participants can only see the portion of the document that the sharing user chooses to show; they cannot interact with the document effectively, for example to browse it on their own.
This conference-based document sharing mode therefore has low human-computer interaction efficiency, and a mode with higher human-computer interaction efficiency is urgently needed.
Disclosure of Invention
The present disclosure provides a voice call method and apparatus for a shared document, an electronic device, and a storage medium, so that a plurality of objects of a shared document can conduct a voice call based on that document, effectively improving human-computer interaction efficiency. The technical scheme of the disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a voice call method for a shared document, the method including:
displaying a call initiation option on a first shared document, wherein the first shared document is used for providing document service for a plurality of objects;
responding to the triggering operation of the call initiation option, and initiating a voice call request to at least one target object of the first shared document, wherein the target object is an object using the first shared document;
and conducting a first voice call in a case that any target object accepts the voice call request.
In one possible embodiment, the initiating a voice call request to at least one target object of the first shared document in response to the triggering operation of the call initiation option includes:
responding to the triggering operation of the call initiating option, and initiating a voice call request to all target objects of the first shared document;
or, alternatively,
and responding to the triggering operation of the call initiating option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and responding to the selection operation of a part of target objects in all the target objects, and initiating the voice call request to the part of target objects.
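By way of non-limiting illustration only, the initiation flow described above (a voice call request sent to all target objects of the first shared document, or to a selected subset, with the first voice call starting once any target accepts) might be sketched as follows; all names and data structures here are hypothetical and do not describe a claimed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SharedDocument:
    doc_id: str
    # Objects currently using the document (the candidate target objects).
    active_users: set = field(default_factory=set)

@dataclass
class VoiceCall:
    doc_id: str
    participants: set

def initiate_call(doc, initiator, targets=None):
    """Build a voice call request for the document's target objects.

    targets=None means "all target objects" (everyone using the document
    except the initiator); a set selects only part of the target objects."""
    if targets is None:
        targets = doc.active_users - {initiator}
    return {"doc_id": doc.doc_id, "from": initiator, "to": sorted(targets)}

def on_accept(request, acceptor):
    # A single acceptance is enough for the first voice call to start.
    return VoiceCall(doc_id=request["doc_id"],
                     participants={request["from"], acceptor})
```

Under these assumptions, the two claimed variants differ only in whether `targets` is left as `None` or populated from a selection made in the displayed target object list.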
In one possible embodiment, the method further comprises:
and in the process of the first voice call, displaying a voice call toolbar at the target position of the first shared document, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
In one possible embodiment, the method further comprises:
and displaying an object icon which is speaking in the first voice call on a designated position of the voice call toolbar.
In one possible implementation, the voice call toolbar includes an object display option, the method further including:
and in response to the triggering operation of the object display option, displaying object icons of a plurality of participation objects of the first voice call.
In one possible implementation, the voice call toolbar includes an invite option, the method further including:
responding to the triggering operation of the invitation option, and displaying the address information of the first shared document and the permission setting option of the object to be invited aiming at the first shared document;
and sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
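Purely as an illustrative sketch (names and fields hypothetical), the invitation described above bundles the address information of the first shared document with the permission chosen for the object to be invited:

```python
from enum import Enum

class Permission(Enum):
    READ = "read"
    EDIT = "edit"

def build_invitation(doc_address, call_id, invitee, permission=Permission.READ):
    """Assemble an invitation request asking `invitee` to join the ongoing
    voice call of the shared document at `doc_address`, carrying the
    permission picked via the permission setting option."""
    return {
        "type": "voice_call_invite",
        "call_id": call_id,
        "doc_address": doc_address,
        "invitee": invitee,
        "permission": permission.value,
    }
```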
In one possible implementation, the voice call toolbar includes a microphone status setting option, the method further comprising:
and in response to the triggering operation of the microphone state setting option, setting the microphone of the local terminal equipment to be in a corresponding state.
In one possible embodiment, the voice call toolbar includes audio device setup options, the method further comprising:
and responding to the setting operation of the audio equipment setting option, and setting the audio equipment adopted by the first voice call on the local terminal equipment.
In one possible embodiment, the voice call toolbar includes an end-of-call option, the method further comprising:
and in response to the triggering operation of the call ending option, ending the first voice call.
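The toolbar options in the embodiments above (microphone state, audio device, ending the call) boil down to a small piece of local-terminal state; a minimal, hypothetical sketch:

```python
class CallToolbarState:
    """Local-terminal state behind the voice call toolbar (hypothetical)."""

    def __init__(self):
        self.mic_on = True             # microphone state setting option
        self.audio_device = "default"  # audio device setting option
        self.in_call = True            # cleared by the call ending option

    def toggle_mic(self):
        # Triggering the microphone state setting option flips the state.
        self.mic_on = not self.mic_on
        return self.mic_on

    def set_audio_device(self, device):
        self.audio_device = device

    def end_call(self):
        self.in_call = False
```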
In one possible embodiment, the method further comprises:
and displaying the view following information of at least one participant object of the first voice call in the first shared document, wherein the view following information is used for indicating whether the participant object follows the document browsing view of the initiator object of the first voice call.
In one possible embodiment, the method further comprises:
displaying, in the first shared document, perspective following information and following control options of at least one participating object of the first voice call, the following control options being used to set a following state of the participating object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and controlling the participating object to follow the document browsing view angle of the initiating object;
and if the view angle following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and controlling the participating object to exit the following of the document browsing view angle of the initiating object.
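The follow control option above is a two-state toggle: shown as an "open" function while the participating object is not following, and as an "exit" function while it is. A minimal sketch, with hypothetical labels:

```python
class FollowController:
    """Tracks whether a participating object follows the initiating object's
    document browsing view, plus the label the follow control option
    should display in each state (labels hypothetical)."""

    def __init__(self):
        self.following = False

    @property
    def option_label(self):
        # 'Open' function when not following, 'exit' function when following.
        return "exit following" if self.following else "follow initiator"

    def trigger(self):
        # Triggering the option moves to the other state.
        self.following = not self.following
        return self.following
```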
In one possible embodiment, the method further comprises:
and displaying a view angle frame in the first shared document based on the document browsing view angle of the initiating object of the first voice call, wherein the view angle frame is used for indicating a document area browsed by the initiating object.
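The view angle frame can be read as the initiating object's visible document region, clamped to the document bounds. One possible sketch, assuming simple vertical scrolling (parameter names hypothetical):

```python
def initiator_view_frame(scroll_top, viewport_height, doc_height):
    """Return the (top, bottom) document region the initiating object is
    browsing, clamped to [0, doc_height], for rendering as a view angle
    frame in the followers' copies of the shared document."""
    top = max(0, min(scroll_top, doc_height))
    bottom = max(top, min(scroll_top + viewport_height, doc_height))
    return (top, bottom)
```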
In one possible embodiment, the method further comprises:
and displaying a cursor of an initiating object of the first voice call and a cursor of a following object in a participating object of the first voice call in the first shared document.
In one possible embodiment, the object icons of the initiating object and the participating object of the first voice call are displayed differently.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice call method for sharing a document, the method including:
displaying a voice call option of a second shared document based on the second shared document, wherein the voice call option indicates that a plurality of objects of the second shared document are carrying out a second voice call;
and responding to the triggering operation of the voice call option, and joining the second voice call.
In one possible implementation, the displaying the voice call option of the second shared document based on the second shared document includes:
and displaying the voice call option of the second shared document on the second shared document.
In one possible implementation, the displaying the voice call option of the second shared document based on the second shared document includes:
displaying a voice call identifier on a document tag of the second shared document in the shared document list, wherein the voice call identifier indicates that a plurality of objects of the second shared document are carrying out a second voice call;
and displaying the voice call option of the second shared document based on the triggering operation of the document tag.
In one possible implementation, the displaying the voice call option of the second shared document based on the triggering operation on the document tag includes:
responding to the triggering operation of the document label, displaying a function interface of the second shared document, wherein the function interface comprises a voice call icon and a jump icon, the voice call icon is used for providing the voice call option, and the jump icon is used for jumping to the second shared document;
responding to the triggering operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the joining the second voice call in response to the triggering operation of the voice call option includes:
and responding to the triggering operation of the voice call icon, displaying the second shared document and joining the second voice call.
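The second-aspect entry points above (a voice call identifier on the document tag in the shared document list, then joining the ongoing call via the voice call option or icon) might be sketched like this, with hypothetical data structures:

```python
def document_tab_badge(calls_by_doc, doc_id):
    """Whether the document tag should show the voice call identifier,
    i.e. whether a voice call is in progress on that document."""
    return bool(calls_by_doc.get(doc_id))

def join_call(calls_by_doc, doc_id, user):
    """Triggering the voice call option/icon: display the document and
    add the local object to its ongoing voice call."""
    call = calls_by_doc.get(doc_id)
    if call is None:
        raise ValueError("no voice call in progress on this document")
    call["participants"].add(user)
    return {"open_document": doc_id, "joined": True}
```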
In one possible embodiment, the method further comprises:
and in the process of carrying out the second voice call, displaying a voice call toolbar at the target position of the second shared document, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
In one possible embodiment, the voice call toolbar includes an end-of-call option, the method further comprising:
and exiting the second voice call in response to the triggering operation of the call ending option.
In one possible embodiment, after joining the second voice call in response to the triggering operation of the voice call option, the method further includes:
and displaying the second shared document based on the document browsing view angle of the initiating object of the second voice call.
In one possible embodiment, the method further comprises:
displaying, in the second shared document, view following information and a following control option of a participating object of the home terminal, the following control option being used for setting a following state of the participating object;
if the view following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and following the document browsing view of the initiating object;
and if the view following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view of the initiating object.
In one possible embodiment, the object icons of the initiating object and the participating object of the second voice call are displayed differently.
According to a third aspect of the embodiments of the present disclosure, there is provided a voice call apparatus for sharing a document, the apparatus including:
the display unit is configured to execute displaying a call initiating option on a first shared document, wherein the first shared document is used for providing document service for a plurality of objects;
the initiating unit is configured to initiate, in response to the triggering operation of the call initiation option, a voice call request to at least one target object of the first shared document, wherein the target object is an object using the first shared document;
and the call unit is configured to execute the first voice call under the condition that any target object accepts the voice call request.
In one possible embodiment, the initiating unit is configured to perform:
responding to the triggering operation of the call initiating option, and initiating a voice call request to all target objects of the first shared document;
or, alternatively,
and responding to the triggering operation of the call initiating option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and responding to the selection operation of a part of target objects in all the target objects, and initiating the voice call request to the part of target objects.
In one possible embodiment, the voice call device for sharing a document further includes:
and the tool display unit is configured to display a voice call tool bar at the target position of the first shared document in the process of the first voice call, wherein the voice call tool bar is used for realizing a plurality of voice call functions.
In one possible embodiment, the voice call device for sharing a document further includes:
and a speaking display unit configured to display, at a designated position of the voice call toolbar, the object icon of the object that is speaking in the first voice call.
In one possible embodiment, the voice call toolbar includes an object display option, and the voice call apparatus for sharing a document further includes:
and an object display unit configured to display object icons of a plurality of participating objects of the first voice call in response to a trigger operation on the object display option.
In one possible embodiment, the voice call toolbar includes an invite option, and the voice call apparatus for sharing a document further includes:
the invitation unit is configured to execute triggering operation responding to the invitation option and display the address information of the first shared document and the permission setting option of the object to be invited aiming at the first shared document;
and sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
In one possible implementation, the voice call toolbar includes a microphone status setting option, and the voice call apparatus for sharing a document further includes:
and the microphone state setting unit is configured to execute triggering operation responding to the microphone state setting option and set the microphone of the local terminal equipment to be in a corresponding state.
In one possible embodiment, the voice call toolbar includes audio device setup options, and the voice call apparatus for sharing documents further includes:
and the audio device setting unit is configured to execute setting operation responding to the audio device setting option and set the audio device adopted by the first voice call on the local terminal device.
In one possible embodiment, the voice call toolbar includes a call end option, and the voice call apparatus for sharing documents further includes:
and the ending unit is configured to execute triggering operation responding to the call ending option and end the first voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
and the visual angle display unit is configured to display visual angle following information of at least one participant of the first voice call in the first shared document, wherein the visual angle following information is used for indicating whether the participant follows a document browsing visual angle of an initiating object of the first voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
a view angle control unit configured to perform displaying, in the first shared document, view angle following information and a following control option of at least one participant object of the first voice call, the following control option being used to set a following state of the participant object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and controlling the participating object to follow the document browsing view angle of the initiating object;
and if the view angle following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and controlling the participating object to exit the following of the document browsing view angle of the initiating object.
In one possible embodiment, the voice call device for sharing a document further includes:
and the frame display unit is configured to execute a document browsing view angle based on the initiating object of the first voice call, and display a view angle frame in the first shared document, wherein the view angle frame is used for indicating a document area browsed by the initiating object.
In one possible embodiment, the voice call device for sharing a document further includes:
and the cursor display unit is configured to display a cursor of an initiating object of the first voice call and display a cursor of a following object in a participating object of the first voice call in the first shared document.
In one possible embodiment, the object icons of the initiating object and the participating object of the first voice call are displayed differently.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a voice call apparatus for sharing a document, the apparatus including:
a display unit configured to perform displaying a voice call option of a second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are making a second voice call;
and a call joining unit configured to join the second voice call in response to the triggering operation of the voice call option.
In one possible embodiment, the display unit includes:
a first display module configured to display, on the second shared document, the voice call option of the second shared document.
In one possible embodiment, the display unit includes:
a second display module configured to execute displaying a voice call identifier on a document tag of the second shared document in the shared document list, the voice call identifier indicating that a plurality of objects of the second shared document are in a second voice call;
and displaying the voice call option of the second shared document based on the triggering operation of the document tag.
In one possible embodiment, the second display module is configured to perform:
responding to the triggering operation of the document label, displaying a function interface of the second shared document, wherein the function interface comprises a voice call icon and a jump icon, the voice call icon is used for providing the voice call option, and the jump icon is used for jumping to the second shared document;
responding to the triggering operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the call joining unit is configured to perform:
and responding to the triggering operation of the voice call icon, displaying the second shared document and joining the second voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
and the tool display unit is configured to display a voice call tool bar at the target position of the second shared document in the process of the second voice call, wherein the voice call tool bar is used for realizing a plurality of voice call functions.
In one possible implementation, the voice call toolbar includes a call end option, the apparatus further includes:
and the exit unit is configured to execute the triggering operation responding to the call ending option and exit the second voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
and a view angle display unit configured to display the second shared document based on the document browsing view angle of the initiating object of the second voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
a visual angle control unit configured to execute displaying visual angle following information and following control options of a participant object of a local terminal in the second shared document, the following control options being used for setting a following state of the participant object;
if the view following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and following the document browsing view of the initiating object;
and if the view following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view of the initiating object.
In one possible embodiment, the object icons of the initiating object and the participating object of the second voice call are displayed differently.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
a memory for storing the processor executable program code;
wherein the processor is configured to execute the program code to implement the voice call method for sharing a document provided in the first aspect or the second aspect.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein the program code in the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the voice call method for a shared document provided in the first aspect or the second aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product, comprising one or more instructions, which when executed by one or more processors of an electronic device, enables the electronic device to perform the method for voice call of sharing a document as provided in the first aspect or the second aspect.
Through the above technical scheme, a plurality of objects can conduct a voice call based on a shared document, so that each object in the voice call can freely use the shared document while communicating in real time, effectively improving human-computer interaction efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an implementation environment for a voice call method for sharing documents according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of voice calling for sharing a document in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of voice call for sharing a document in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating a method of voice calling for sharing a document in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a call initiation option in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating another call initiation option in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating a voice call toolbar according to an exemplary embodiment;
FIG. 8 is a diagram illustrating an object display option in accordance with an illustrative embodiment;
FIG. 9 is a diagram illustrating an invitation option in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating one microphone setup option in accordance with an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating one audio device setup option in accordance with an exemplary embodiment;
FIG. 12 is a schematic diagram illustrating an end-of-call option in accordance with an exemplary embodiment;
FIG. 13 is a schematic diagram illustrating view following information and following control options in accordance with an exemplary embodiment;
FIG. 14 is a diagram illustrating the display effect of a cursor and view following information, according to an exemplary embodiment;
FIG. 15 is a diagram illustrating sharing of a document during a voice call in accordance with an exemplary embodiment;
FIG. 16 is a diagram illustrating sharing a document during another voice call in accordance with an illustrative embodiment;
FIG. 17 is a schematic diagram illustrating a voice call toolbar according to an exemplary embodiment;
FIG. 18 is a schematic diagram illustrating an invitation option and an object display option in accordance with an exemplary embodiment;
FIG. 19 is a schematic diagram illustrating one microphone setup option in accordance with an exemplary embodiment;
FIG. 20 is a schematic diagram illustrating one audio device setup option in accordance with an exemplary embodiment;
FIG. 21 is a schematic diagram illustrating an end-of-call option in accordance with an exemplary embodiment;
FIG. 22 is a diagram illustrating sharing of a document during a voice call in accordance with an illustrative embodiment;
FIG. 23 is a flow diagram illustrating a method of voice calling for sharing a document in accordance with an exemplary embodiment;
FIG. 24 is a schematic diagram illustrating a voice call option in accordance with an exemplary embodiment;
FIG. 25 is a schematic diagram illustrating another voice call option in accordance with an exemplary embodiment;
FIG. 26 is a schematic diagram illustrating another voice call option in accordance with an exemplary embodiment;
FIG. 27 is a block diagram of a voice call device sharing a document shown in accordance with an exemplary embodiment;
FIG. 28 is a block diagram of a voice call apparatus for sharing a document shown in accordance with an exemplary embodiment;
fig. 29 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this disclosure are all authorized by the user or fully authorized by the parties involved, and the collection, use, and processing of the relevant data comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the shared documents, groups, object names, etc. referred to in this disclosure are all obtained with sufficient authorization.
Fig. 1 is a schematic diagram of an implementation environment of a voice call method for sharing a document according to an embodiment of the present disclosure, and referring to fig. 1, the implementation environment includes: a plurality of terminals 101 and a server 102.
In the embodiment of the present disclosure, the terminal 101 can display a shared document in an interactive interface and display a call initiation option on the shared document to provide a function of performing a voice call based on the shared document. Accordingly, the terminal 101 can also display a voice call option on a shared document in which a voice call is ongoing, thereby providing a function of instantly joining the voice call. The shared document is used to provide document services for a plurality of objects. In some embodiments, the objects corresponding to the plurality of terminals 101 can use the document services provided by a shared document through a network. In some embodiments, the document services include an editing service, a browsing service, and a recording service for the shared document, which are not limited by this disclosure. In some embodiments, the terminal 101 runs an application supporting the use of shared documents, for example, an online document application, through which a user can browse or edit a shared document online and collaborate with a plurality of objects using the shared document.
In some embodiments, the application may be a client application installed on the terminal 101, a web application accessed through a browser running on the terminal 101, or another type of application, for example, a micro application running within a client application based on web technology, which is not limited by this disclosure.
In some embodiments, the terminal 101 may be at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, a wireless terminal, and the like; the terminal 101 has a communication function and can access the Internet. The terminal 101 may generally refer to any one of a plurality of terminals; this embodiment is illustrated only with the terminal 101.
The server 102 is configured to provide background services related to shared documents for the terminal 101, for example, a service for storing shared documents; a consistency maintenance service for shared documents when multi-object collaborating; a communication connection service for voice call, and the like.
In some embodiments, the server 102 may be an independent physical server, a server cluster or a distributed file system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big data and artificial intelligence platforms. Of course, the server 102 may also include other functional servers to provide more comprehensive and diversified services, which is not limited by the present disclosure.
The server 102 and the plurality of terminals 101 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present disclosure. Alternatively, the number of the above-mentioned terminals 101 and servers 102 may be more or less, and the embodiment of the disclosure does not limit this.
Next, based on the above implementation environment, a technical solution provided by the embodiment of the present disclosure is introduced.
Fig. 2 is a flowchart illustrating a voice call method for sharing a document according to an exemplary embodiment. As illustrated in fig. 2, the method can be performed by a terminal in the above implementation environment and includes the following steps 201 to 203.
In step 201, the terminal displays a call initiation option on a first shared document, where the first shared document is used for providing document services for a plurality of objects.
The first shared document is used for providing document service for a plurality of objects. In some embodiments, the first shared Document may be in a plurality of Document formats, for example, a PDF (Portable Document Format) Document, a text Document, a table Document, or a presentation Document, which is not limited in this disclosure. In other embodiments, the first shared document may also be obtained based on a document template, for example, a meeting summary template document, which is not limited by this disclosure.
In some embodiments, the document services may include an editing service, a browsing service, and a recording service for shared documents, which are not limited by this disclosure. The editing service means that the content of the shared document can be modified by using the object of the shared document; the browsing service is that the object using the shared document can freely browse the content of the shared document; the recording service is to record edits occurring in the shared document.
In some embodiments, the plurality of objects can use a document service provided by the first shared document through a network, based on which the plurality of objects using the first shared document can collaborate online.
Wherein the call initiation option is used to initiate a voice call to the first shared document. In some embodiments, the terminal displays the first shared document in response to an open operation for the first shared document, and displays the call initiation option on the first shared document. In some embodiments, the terminal can display the call initiation option above the first shared document, and the disclosure does not limit the location where the call initiation option is displayed.
In step 202, the terminal responds to the triggering operation of the call initiating option, and initiates a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document.
In the embodiment of the present disclosure, the trigger operation may include various types according to the interactive form provided by the terminal. In some embodiments, the terminal is a personal computer (PC) device such as a desktop computer or a laptop computer, and the triggering operation may be a click operation on the call initiation option through a mouse or a keyboard, according to the interaction form provided by the PC device through an input device; in other embodiments, the terminal is a mobile terminal device such as a smart phone or a tablet computer, and the triggering operation may be a click operation on the call initiation option on a touchscreen, according to the interaction form provided by the mobile terminal device through the touchscreen; in still other embodiments, the triggering operation may be determined by motion capture, voice recognition, gaze tracking, or the like, based on further interactive forms provided by the terminal, which is not limited by the present disclosure.
In some embodiments, the case where the target object is using the first shared document may include: the first shared document is being browsed; the first shared document is being edited; the first shared document is being reviewed.
In some embodiments, the voice call request can be displayed in a popup window on the first shared document of any of the target objects; in other embodiments, the voice call request may be pushed to the target object in the form of a request message, which is not limited by this disclosure.
In step 203, the terminal performs a first voice call when any one of the target objects accepts the voice call request.
In some embodiments, when any of the target objects accepts the voice call request, the target object can perform voice communication with the originating object of the first voice call (i.e., the object corresponding to the terminal) and other target objects receiving the voice call request in the first shared document.
Through the above technical solution, objects can conveniently initiate a voice call based on a shared document, so that a plurality of objects can conduct a voice call based on the shared document. Each object in the voice call can freely use the shared document while the call is in progress, without frequently switching between the shared document and the voice call, which effectively improves human-computer interaction efficiency.
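The flow of steps 201 to 203 above can be sketched in code as follows. This is a minimal illustrative sketch only; all class, method, and field names are assumptions for illustration and are not part of the disclosed implementation.

```python
# Illustrative sketch of steps 201-203 (all names are assumed).
class SharedDocumentCall:
    def __init__(self, document_id):
        self.document_id = document_id
        self.participants = []

    def target_objects(self, active_users):
        # Step 201 context: target objects are the objects currently
        # using this shared document.
        return [u for u in active_users if u.get("using") == self.document_id]

    def initiate_request(self, initiator, targets):
        # Step 202: push a voice call request to each target object.
        return {"from": initiator, "to": [t["name"] for t in targets]}

    def accept(self, request, target):
        # Step 203: the first voice call proceeds once any target accepts.
        if target in request["to"]:
            self.participants = [request["from"], target]
            return True
        return False
```

Under this sketch, the call starts as soon as a single target object accepts, matching step 203; further accepting targets would simply be appended to the participant list.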
The embodiment corresponding to fig. 2 briefly introduces the process of initiating a voice call based on a shared document in the technical solution provided by the embodiment of the present disclosure, and then introduces the process of joining an ongoing voice call based on a shared document in the technical solution provided by the embodiment of the present disclosure based on the implementation environment and the embodiment corresponding to fig. 2.
Fig. 3 is a flowchart illustrating a voice call method for sharing a document according to an exemplary embodiment. As illustrated in fig. 3, the method can be performed by a terminal in the above-described implementation environment and includes the following steps 301 to 302.
In step 301, the terminal displays a voice call option of the second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are making a second voice call.
The introduction of the second shared document refers to the introduction of the first shared document in step 201, and is not described herein again.
In some embodiments, the terminal displays the second shared document and displays the voice call option on the second shared document in response to an open operation for the second shared document. In some embodiments, the terminal can display the voice call option above the second shared document, and the disclosure does not limit the position where the voice call option is displayed.
In some embodiments, the terminal displays the voice call option and indicates, in the form of icons, the plurality of objects that are conducting the second voice call.
In step 302, the terminal responds to the triggering operation of the voice call option and joins the second voice call.
For the triggering operation of the voice call option, refer to the description of the triggering operation in step 202, which is not described herein again.
Through the technical scheme provided by the embodiment of the disclosure, the objects can conveniently add the ongoing voice call based on the shared document, so that a plurality of objects can carry out the voice call based on the shared document, each object in the voice call can freely use the shared document while carrying out the voice call, frequent switching between the shared document and the voice call is not needed, and the human-computer interaction efficiency is effectively improved.
The above fig. 2 and fig. 3 are only basic processes of the present disclosure, and the scheme provided by the present disclosure is further explained below.
First, a process of performing a voice call based on a shared document in the technical solution provided by the present disclosure is described in detail through some embodiments. Fig. 4 is a flowchart illustrating a voice call method for sharing a document according to an exemplary embodiment. As shown in fig. 4, the method is performed by a terminal and includes the following steps 401 to 407.
In step 401, the terminal displays a call initiation option on a first shared document, where the first shared document is used to provide document services for a plurality of objects.
In this step, reference is made to step 201, which is not described herein again.
In the embodiment of the present disclosure, the initiating object refers to an object that initiates a first voice call for the first shared document through the terminal, and the participating object refers to an object other than the initiating object among a plurality of objects that participate in the first voice call. In some embodiments, the initiating object may be referred to as a moderator of the first voice call and the participating object may be referred to as a participant of the first voice call.
In some embodiments, the terminal displays, in a display area around the call initiation option, the number of target objects, to indicate to the initiating object the number of objects to which a voice call can currently be initiated, where the target objects are objects that are using the first shared document. To facilitate understanding of the above display mode, the present disclosure provides an illustration of the call initiation option. Referring to fig. 5, the terminal displays the first shared document 501 (the document title and document content are shown in the figure). In the button bar above the first shared document 501, a call initiation option 502 is displayed. On the right side of the call initiation option 502, 4 avatar icons 503 are displayed; the avatar icons 503 indicate 4 target objects browsing the first shared document, and the "6" in the rightmost numeric icon 504 indicates that a total of 6 objects are currently browsing the first shared document. The left part of the button bar also displays directory information 505 of the first shared document, and the right part of the button bar provides further function options 506 for sharing, searching, and messaging.
The display manner provided in fig. 5 can be applied to a PC device; in other embodiments, where the terminal is a mobile device, step 401 can be implemented in the manner provided in fig. 6 described below. Fig. 6 is a schematic diagram of another call initiation option provided by the present disclosure. Referring to fig. 6, the terminal displays the first shared document 601 (the document title and document content are shown in the figure); in a bottom expansion panel of the first shared document 601, a call initiation option 602 is displayed, and below the call initiation option 602, further function options 603 such as edit, comment, and copy link are provided. The bottom expansion panel pops up in response to an operation on the "more" button 604 in the button bar above the first shared document 601.
It should be noted that, in some embodiments, the display manner provided in fig. 5 may also be applied to a mobile terminal device, and the display manner provided in fig. 6 may also be applied to a PC terminal device, which is not limited in this disclosure.
In step 402, the terminal initiates a voice call request based on at least one target object of the first shared document in response to the triggering operation of the call initiation option, where the target object is an object that is using the first shared document.
This step refers to step 202. In some embodiments, the terminal may implement this step 402 based on the following one or two ways.
In the first mode, the terminal responds to the triggering operation of the call initiating option and initiates a voice call request to all target objects of the first shared document.
Here, all target objects refers to all objects that are using the first shared document. In some embodiments, taking the display manner of the call initiation option provided in fig. 5 as an example, the terminal can respond to the triggering operation of the call initiation option 502 by initiating a voice call request to the 6 objects indicated in fig. 5 that are browsing the first shared document.
By this method, a voice call can be conveniently initiated to all target objects with one click, providing a way to quickly initiate voice collaboration based on a shared document.
In the second mode, the terminal responds to the triggering operation of the call initiation option by displaying a target object list, where the target object list includes all target objects of the first shared document, and responds to a selection operation on some of those target objects by initiating the voice call request to the selected target objects.
In some embodiments, the terminal can respond to the trigger operation to display all the target objects browsing the first shared document in a form of a target object list, so that the initiating object can select some target objects according to the actual requirement of the call.
In some embodiments, the usage status of the first shared document by all target objects is displayed in the target object list, so as to provide a reference for the initiating object when selecting some of the target objects. In some embodiments, the usage status includes: browsing, editing, or commenting, which is not limited by this disclosure.
By this method, on the basis of the function for quickly initiating a voice call based on a shared document, an object-specific selection function for initiating a voice call is further provided, so that the technical solution of the present disclosure can be flexibly applied to various demand scenarios.
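The two initiation modes of step 402 can be sketched as a single selection function: with no selection, the request goes to all target objects (mode one); with a selection from the displayed target object list, it goes only to the selected ones (mode two). The function and parameter names below are illustrative assumptions.

```python
# Illustrative sketch of the two initiation modes in step 402 (names assumed).
def request_targets(all_targets, selected=None):
    """Return the target objects to which the voice call request is sent.

    Mode one: no selection -> request all target objects.
    Mode two: a selection from the target object list -> request only those.
    """
    if selected is None:
        return list(all_targets)
    return [t for t in all_targets if t in selected]
```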
In step 403, the terminal performs a first voice call when any one of the target objects accepts the voice call request.
In this step, refer to step 203, which is not described herein.
Next, steps 404 to 407 describe various functions and display manners involved while the first voice call is in progress. It should be noted that the following steps 404 to 407 are not necessarily performed in sequence.
In step 404, during the first voice call, the terminal displays a voice call toolbar at a target position of the first shared document, where the voice call toolbar is used to implement a plurality of voice call functions.
In some embodiments, the target location may be a blank area in the first shared document, and may also be a blank area around the first shared document, which is not limited by this disclosure.
In some embodiments, the terminal displays an object icon of the object that is speaking in the first voice call at a designated position of the voice call toolbar. In some embodiments, the designated position may be above the voice call toolbar. Optionally, the object icon displays the avatar or the name of the object. In other embodiments, the terminal displays a speaking flag in the object icon of the speaking object to indicate that the object is speaking. In this way, the speaking object can be indicated in real time, effectively improving communication efficiency during a document-based voice call.
In some embodiments, the object icons of the initiating object and the participating objects of the first voice call are displayed differently to distinguish the initiating object sharing the document viewing perspective from the participating objects following the document viewing perspective of the initiating object.
In some embodiments, the voice call toolbar includes at least one of an object display option, an invite option, a microphone status setting option, an audio device setting option, and a call end option to provide a plurality of voice call functions based on the plurality of function options.
The present disclosure provides a schematic diagram of a voice call toolbar. Referring to fig. 7, the voice call toolbar 700 includes an object display option 701, an invite option 702, a microphone state setting option 703, an audio device setting option 704, and a call end option 705; above the voice call toolbar, an object icon 706 of the object that is speaking is displayed, and the speaking flag 707 indicates that an object named "object AA" is speaking.
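One possible way to model such a toolbar is as a mapping from function options to handlers acting on the call state. This is an illustrative sketch under assumed names only, not the disclosed implementation; the option keys echo the options of the embodiment above.

```python
# Illustrative sketch: a voice call toolbar as option -> handler mapping
# (all names assumed; the call state is a plain dict for simplicity).
def build_toolbar(call):
    return {
        # Object display option: list the participating objects.
        "object_display": lambda: list(call["participants"]),
        # Invite option: expose the address information to share.
        "invite": lambda: {"link": call["address"], "rights": []},
        # Microphone state setting option: toggle the microphone.
        "microphone": lambda: call.update(mic_on=not call["mic_on"]),
        # Call end option: leave the voice call.
        "end_call": lambda: call.update(active=False),
    }
```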
Next, the principle of the above-mentioned various function options for implementing the voice call function and the display manner of the various function options will be described, referring to the following function options 1 to 5.
Function option 1, object display option.
In some embodiments, the voice call toolbar includes an object display option, and the terminal is capable of displaying object icons of a plurality of participating objects of the first voice call in response to a triggering operation of the object display option.
In some embodiments, the triggering operation for the object display option refers to the description of the triggering operation in step 202, which is not described herein again.
In some embodiments, the object icon of a participating object includes information such as the name and avatar of the participating object to identify it; in other embodiments, the object icon further includes a microphone identification indicating whether the participating object has its microphone turned on, i.e., whether it is in a state in which it can speak.
To facilitate understanding, the present disclosure provides a schematic diagram of an object display option; see fig. 8, where the description of the voice call toolbar refers to fig. 7. The display elements of the object display option 801 include the avatar of the object currently speaking and the number of participants in the first voice call, "4". In response to a trigger operation 802 on the object display option 801, object icons 803 of a participating object A, a participating object B, and a participating object C are displayed, together with their microphone identifications 804: the participating object A and the participating object C are in a state in which they can speak, and the participating object B is in a state in which it cannot speak. An object icon 805 of the participating object C, which is speaking, is displayed above the voice call toolbar. As will be appreciated, the object icon 805 indicates that the participating object C is speaking even when the object display option is not triggered.
Through the technical scheme, the function of viewing the information of each participant in real time is provided for the object carrying out the voice call based on the shared document, so that each object can conveniently know the change and the speaking state of the participants in the voice call in time, the efficiency of carrying out voice communication and cooperation based on the shared document is further improved, and the human-computer interaction efficiency is further improved greatly.
Function option 2, invite option.
In some embodiments, the voice call toolbar includes an invitation option. In response to a triggering operation on the invitation option, the terminal can display address information of the first shared document and a permission setting option of an object to be invited for the first shared document, and can further send, based on a setting operation on the permission setting option and the address information, an invitation request to the object to be invited, where the invitation request is used to invite the object to be invited to join the first voice call of the first shared document.
In some embodiments, the triggering operation for the invitation option refers to the description of the triggering operation in step 202, and is not described herein again.
In some embodiments, the address information may be a document link of the first shared document, the document link pointing to a browse page of the first shared document. The document link is sent to the object to be invited, so that the object to be invited can directly jump to the first shared document, and the object to be invited automatically determines whether to join the first voice call based on the first shared document.
In other embodiments, the address information may be a call link for the first voice call, the call link pointing to a page of the first voice call. And sending the call link to the object to be invited so that the object to be invited can directly join the first voice call.
In some embodiments, the plurality of permissions provided by the permission setting option can be divided based on the document services (i.e., functions) provided by the first shared document. In some embodiments, the first shared document provides editing, browsing, and commenting functions, and the permission setting option can provide editing permission, browsing permission, and commenting permission, and the like for the first shared document.
In other embodiments, the plurality of permissions provided by the permission setting option can be determined based on the permissions of the initiating object (or participating object) on the first shared document. For example, if the initiating object has the editing right and the comment right to the first shared document, the initiating object can select to set the editing right and/or the comment right as the right of the object to be invited to the first shared document in the right setting option.
In some embodiments, the setting operation is a selection operation of at least one of the plurality of rights of the rights setting option, that is, the setting operation may be a multiple selection operation.
In some embodiments, the invitation option is provided only to designated objects among the initiating object and the participating objects of the first voice call. In some embodiments, a designated object is an object that has editing permission for the first shared document; in other embodiments, a designated object is an object in a designated group, for example, the group of objects that created the first shared document, which is not limited by this disclosure.
In other embodiments, the object to be invited may be an object in a target group, for example, an object in a group of objects that possesses target rights for the first shared document. In such an example, the rights set based on the rights setting option can be superimposed on the target rights as the rights of the object to be invited to the first shared document.
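The superimposition described above can be sketched as a union of permission sets: the invited object ends up with both its target-group permissions and the permissions chosen via the permission setting option. This is a minimal illustrative sketch; the function name and permission labels are assumptions, not taken from the disclosure.

```python
# Illustrative sketch of superimposing invitation-set permissions on the
# invited object's existing group permissions (all names assumed).
def effective_rights(group_rights, invitation_rights):
    # The object to be invited holds the union of both permission sets.
    return set(group_rights) | set(invitation_rights)
```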
In some embodiments, the invitation request indicates a document title of the first shared document to indicate to the object to be invited the shared document in question by the voice call to be entered.
To facilitate understanding of the display modes of the invitation option and the permission setting option, the present disclosure provides a schematic diagram of the invitation option. Referring to fig. 9, the terminal responds to a trigger operation on the invitation option 901 in the voice call toolbar by displaying the permission setting option 902. A prompt message "can join the voice call by sharing the following information" and address information "XXX … …" of the first shared document are displayed in the permission setting option 902. The permission setting option 902 provides multiple permissions for the first shared document: "browsable, commentable, editable, and no permission". In response to the setting operation 903 on the permission setting option 902, the currently determined permission is "browsable". A copy button 904 for copying the address information to generate an invitation instruction is also displayed.
Through the technical scheme, a convenient invitation function is provided for the objects carrying out the voice call based on the shared document, so that the shared document and the corresponding voice call can be shared by all the objects in real time according to communication requirements, rich authority setting modes are provided, the efficiency of carrying out the voice communication and cooperation based on the shared document is further improved, and the human-computer interaction efficiency is greatly improved.
Function option 3, microphone status setting option.
In some embodiments, the voice call toolbar includes a microphone state setting option, and the terminal can set the microphone of the local device to a corresponding state in response to a triggering operation of the microphone state setting option.
Here, the local device refers to the terminal itself.
In some embodiments, the triggering operation of the microphone state setting option refers to the description of the triggering operation in step 202, which is not described herein again.
In the embodiment of the disclosure, under the condition that the microphone is in an open state, the terminal responds to the triggering operation of the microphone state setting option and switches the microphone into a closed state; under the condition that the microphone is in a closed state, the terminal responds to the triggering operation of the microphone state setting option and switches the microphone to an open state; if the microphone is in an open state, the object corresponding to the local terminal equipment is in a state of being capable of speaking; and if the microphone is in a closed state, the object corresponding to the local terminal equipment is in a non-speaking state.
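The open/closed behavior above is a simple two-state toggle: triggering the option while the microphone is open closes it (the object cannot speak), and vice versa. A minimal sketch under assumed names:

```python
# Illustrative sketch of the microphone state setting option (names assumed).
def toggle_microphone(state):
    # Open -> closed: the corresponding object cannot speak.
    # Closed -> open: the corresponding object can speak.
    return "closed" if state == "open" else "open"
```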
An exemplary diagram of a microphone setting option is provided in the embodiment of the present disclosure. Referring to fig. 10, the terminal sets the microphone to the corresponding state in response to a trigger operation on the microphone state setting option 1001 in the voice call toolbar; referring to fig. 10 (a), the microphone is in an open state; referring to fig. 10 (b), the microphone is in a closed state.
Through the technical scheme, a convenient microphone opening and closing function is provided for the object carrying out the voice call based on the shared document, so that each object can adjust the opening and closing state of the current microphone in real time according to the communication requirement, the efficiency of carrying out the voice communication and the cooperation based on the shared document is further improved, and the human-computer interaction efficiency is greatly improved.
Function option 4, audio device setup option.
In some embodiments, the voice call toolbar includes an audio device setting option, and the terminal is capable of setting an audio device used by the first voice call on the local device in response to a setting operation of the audio device setting option.
In some embodiments, the audio device includes an audio input device and an audio output device. The audio input device may be a microphone and the audio output device may be a speaker. In some embodiments, the audio device may function as both an audio input device and an audio output device, such as a headset with a microphone.
In some embodiments, the audio device setting option is used to select among a plurality of audio devices provided by the terminal. In some embodiments, the audio device includes an audio input device and an audio output device, and the audio device setting option provides selection functions for the audio input device and the audio output device in separate drop-down bars. Illustratively, the terminal can display a plurality of selectable microphone lines in the drop-down bar corresponding to the audio input device and a plurality of selectable speaker lines in the drop-down bar corresponding to the audio output device; the setting operation on the audio device setting option may be a selection of any speaker line and/or microphone line in the drop-down bars provided by the option.
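As a sketch of the selection model above, the two drop-down bars can be represented as two lists of selectable lines with one selection each. The structure and names here are illustrative assumptions, not the disclosed implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AudioDeviceSettings:
    """Two selection lists: microphone lines (input) and speaker lines (output)."""
    input_lines: List[str] = field(default_factory=list)
    output_lines: List[str] = field(default_factory=list)
    selected_input: Optional[str] = None
    selected_output: Optional[str] = None

    def select(self, kind: str, line: str) -> None:
        # A setting operation is a selection in the corresponding drop-down bar.
        lines = {"input": self.input_lines, "output": self.output_lines}[kind]
        if line not in lines:
            raise ValueError(f"unknown {kind} line: {line}")
        if kind == "input":
            self.selected_input = line
        else:
            self.selected_output = line
```

A selection of a line not offered by the terminal is rejected, mirroring the fact that the drop-down bars only offer the devices the terminal actually provides.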
The embodiment of the present disclosure provides a schematic diagram of the audio device setting option, see fig. 11, where the terminal displays a setting panel 1102 of the audio device in response to a trigger operation on an audio device setting option 1101 in the voice call toolbar; for the audio input device, microphone line 1 is selected; for the audio output device, among speaker line 1, speaker line 2, and speaker line 3 provided in the drop-down bar, speaker line 1 is selected.
Through the above technical solution, rich audio device setting functions are provided for objects making a voice call based on the shared document, so that each object can adjust the audio device line in real time according to its device configuration, improving the efficiency of voice communication and collaboration based on the shared document and greatly improving human-computer interaction efficiency.
Function option 5: call end option.
In some embodiments, the voice call toolbar includes an end-of-call option, and the terminal is capable of ending the first voice call in response to a triggering operation of the end-of-call option.
In some embodiments, only the originating object of the first voice call is able to end the first voice call by triggering the call end option.
The embodiment of the present disclosure provides a schematic diagram of the call end option, see fig. 12, where in response to a trigger operation on a call end option 1201 in the voice call toolbar, the terminal displays a prompt message 1202 "After exiting, the voice call will end. Confirm ending the voice call?", and ends the first voice call in a case where the "confirm" option is selected.
Through the above technical solution, in the process of making a voice call based on the shared document, convenient function entries are provided for the various communication, adjustment, and collaboration requirements arising during the call, and rich voice call functions are presented within the shared document in a simple and efficient display manner, greatly improving human-computer interaction efficiency.
In step 405, during the process of the first voice call, the terminal displays a view frame in the first shared document based on a document browsing view of an initiating object of the first voice call, where the view frame is used to indicate a document area browsed by the initiating object.
In some embodiments, the document browsing view is used to provide the real-time usage status of the first shared document by the initiating object. The document browsing view refers to the document browsing progress or the focused document region (for example, the document region currently displayed on the terminal of the initiating object), where the document region may be represented in coordinate form.
In some embodiments, after the initiating object initiates the voice call based on the first shared document, the terminal synchronizes, during the voice call, the document browsing view of the initiating object to the terminals participating in the voice call, so as to improve the efficiency of the voice call. The document browsing view may further include the document region in which an editing operation of the initiating object on the first shared document is located, so that participating objects can see the editing operation of the initiating object in time, further improving the efficiency of the voice call. For a participating object in the view-following state, the terminal of the initiating object synchronizes the document browsing view; for a participating object that is not in, or has exited, the view-following state, the terminal of the initiating object stops synchronizing the document browsing view, so as to reduce signaling interaction and ensure the stability of document sharing.
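The synchronization rule above (send the browsed region only to participants in the view-following state, stop for those that exited) can be sketched as follows; the function name and data shapes are illustrative assumptions:

```python
def sync_browsing_view(view_region, following_state):
    """Deliver the initiating object's browsed document area (here a coordinate
    tuple) only to participants whose view-following state is True; skipping
    non-followers reduces signaling interaction."""
    return {
        participant: view_region
        for participant, is_following in following_state.items()
        if is_following
    }
```

A participant that exits the view-following state simply drops out of the delivery map on the next synchronization, with no further signaling to that terminal.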
In the embodiment of the present disclosure, based on the document browsing perspective, the real-time use status of the first shared document by the initiating object can be synchronously displayed in the voice call process, so as to achieve the purpose of performing collaboration based on the shared document.
In some embodiments, the view frame can be displayed based on a variety of display elements, for example, colored lines or a flashing effect, which is not limited by the present disclosure.
In the embodiment of the present disclosure, the view frame can clearly indicate the document area being browsed by the initiating object and reduce the communication cost among multiple objects, thereby improving communication efficiency based on the shared document during the voice call and improving human-computer interaction efficiency.
Step 406, in the process of the first voice call, the terminal displays, in the first shared document, view following information of at least one participant of the first voice call, where the view following information is used to indicate whether the participant is following a document browsing view of an initiating object of the first voice call.
In some embodiments, the terminal displays, in the first shared document, the view following information of at least one participating object of the first voice call and a following control option for setting the following state of that participating object. Through the view following information, the initiating object can learn the following state of each participating object in the voice call in real time, and can then set the following state of a participating object through the following control option according to communication requirements, maintaining the efficiency of the voice call based on the shared document.
In this disclosure, the terminal further displays view following information of the initiating object, where the view following information of the initiating object is used to indicate the number of participating objects that currently follow the document browsing view of the initiating object.
In some embodiments, the process of the initiating object setting the following state of the participating object may include case 1 and case 2 described below.
Case 1: if the view following information of the participating object indicates a non-following state and the following control option is displayed as an enable function, the terminal controls the participating object to follow the document browsing view of the initiating object in response to the triggering operation on the following control option.
In some embodiments, for the above case 1, in response to the triggering operation on the following control option, the terminal can promptly set the following state of any participating object that is not following the document browsing view of the initiating object, ensuring that each object's browsing view of the shared document stays consistent and effectively guaranteeing the accuracy of information transfer between objects. This efficient management manner effectively improves communication efficiency during a voice call based on the shared document.
Case 2: if the view following information of the participating object indicates a following state and the following control option is displayed as an exit function, the terminal controls the participating object to exit from following the document browsing view of the initiating object in response to the triggering operation on the following control option.
In some embodiments, for the above case 2, in response to the triggering operation on the following control option, the terminal can set the following state of any participating object that is currently following the document browsing view of the initiating object as needed, flexibly adjusting each object's browsing view of the shared document in various communication scenarios and further ensuring the flexibility of information transfer between objects.
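Cases 1 and 2 together form a toggle: the following state and the function displayed on the option flip with each trigger. A minimal sketch, with the return values as illustrative assumptions:

```python
def handle_follow_control(is_following: bool):
    """Illustrative sketch of cases 1 and 2: triggering the following control
    option toggles the participating object's following state, and the
    function displayed on the option toggles with it."""
    if not is_following:
        # Case 1: option displayed as an enable function -> start following;
        # the option then displays the exit function.
        return True, "exit"
    # Case 2: option displayed as an exit function -> stop following;
    # the option then displays the enable function.
    return False, "enable"
```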
Through the above technical solution, the manner of following the document browsing view effectively maintains the accuracy of information transfer while fully ensuring each participating object's free use of the shared document during the voice call; this efficient collaboration manner effectively improves the communication efficiency of voice calls based on the shared document and greatly improves human-computer interaction efficiency.
To facilitate understanding, the present disclosure provides a schematic illustration of view following information and following control options, see fig. 13, where view following information 1310 "following presenter view" of a participating object indicates that the participating object is in a following state, and the following control option 1311 of that participating object is displayed as an exit function; view following information 1320 "not following presenter view" of a participating object indicates that the participating object is in a non-following state, and the following control option 1321 of that participating object is displayed as an enable function; the view following information 1330 of the initiating object is "You are the conference host; 3 people are following your document browsing view".
In some embodiments, on the terminal where a participating object is located, the display effect of the view following information of the participating object can change as the participating object's following state with respect to the initiating object changes. Illustratively, the view following information of the participating object (see the icon shown at 1310 in fig. 13) can change from color to gray when the participating object changes from the following state to the non-following state in response to a triggering operation on the following control option. Correspondingly, the view following information can change from gray to color when the participating object changes from the non-following state to the following state.
In step 407, during the first voice call, the terminal displays a cursor of an initiating object of the first voice call and displays a cursor of a following object of at least one participating object of the first voice call in the first shared document.
The cursor is used to indicate the document content that an object is pointing at in the first shared document. In some embodiments, the terminal determines the position of the cursor in the first shared document based on an input device, for example, from the pointer position of a mouse; in other embodiments, the terminal determines the position of the cursor in response to a trigger operation on its screen.
The following object refers to a participating object that follows the document browsing view of the initiating object.
In some embodiments, the cursor of the initiating object of the first voice call is displayed on the terminal where the initiating object is located, while both the cursor of the initiating object and the cursor of the following object are displayed on the terminal where the following object is located.
In other embodiments, the cursor of the initiating object and the cursor of the following object of the first voice call are both displayed on the terminal where the initiating object is located, so that the content positions of multiple objects in the first shared document can be shown on the terminal of the initiating object, realizing multi-object interaction within the content of the shared document and further improving the efficiency of communication and collaboration during a voice call based on the shared document.
In some embodiments, the cursor of the initiating object and the view following information of the following object can be displayed with associated display elements. Illustratively, they can be displayed in the same or a similar following theme color or theme effect. In other embodiments, the following theme color can vary randomly among multiple colors, which is not limited by the present disclosure.
In other embodiments, the cursor of the following object can be displayed in a different color from the cursor of the initiating object, so as to distinguish the indications made in the first shared document by different objects.
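One way to realize the coloring rule above — the initiating object's cursor in the following theme color, each following object's cursor in a distinct other color — is sketched below. The palette and parameter names are illustrative assumptions:

```python
import itertools

def assign_cursor_colors(initiator, followers, theme_color="black"):
    """Give the initiating object's cursor the following theme color and each
    following object's cursor a different color, so that indications made in
    the shared document by different objects can be distinguished."""
    palette = itertools.cycle(["red", "green", "purple", "orange"])
    colors = {initiator: theme_color}
    for follower in followers:
        colors[follower] = next(palette)
    return colors
```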
The embodiment of the present disclosure provides a schematic diagram of the display effect of the cursor and the view following information, see fig. 14, where in display effect example 1401, the view following information 1402 of a following object is displayed together with the cursor 1403 of the initiating object and an object icon 1404 (of a participating object or the initiating object) based on a black following theme color 1405; in display effect example 1405, the view following information 1406 of a following object is displayed together with the cursor 1407 of the initiating object and an object icon 1408 (of a participating object or the initiating object) based on a colored following theme color (indicated with diagonal fill).
Further, based on the above steps 404 to 407, the present disclosure provides a schematic diagram of the shared document during a voice call, see fig. 15, which shows the display effect of the terminal where the initiating object is located: a view frame 1501 (indicated by a bold black frame line) is displayed in the first shared document; the first shared document shows the view following information 1502 of the initiating object; a voice call toolbar 1503 is also displayed in the first shared document (refer to the description of step 404); a cursor 1504 of the initiating object is also displayed in the first shared document.
The present disclosure provides another schematic diagram of the shared document during a voice call, see fig. 16, which shows the display effect of the terminal where a participating object is located: a view frame 1601 (indicated by a bold black frame line) is displayed in the first shared document; view following information 1602 of the participating object is displayed in the first shared document, indicating that the participating object is following the document browsing view of the initiating object; a voice call toolbar 1603 is also displayed in the first shared document (refer to the description of step 404); a cursor 1604 of the initiating object and a cursor 1605 of the participating object are also displayed in the first shared document.
The display modes shown in fig. 7 to 16 can be applied to a PC terminal device, and in other embodiments, in the case that the terminal is a mobile terminal device, the display modes provided in fig. 17 to 22 can also be implemented. Note that the display modes shown in fig. 7 to 16 can be applied to mobile-side devices, and the display modes shown in fig. 17 to 22 can be applied to PC-side devices, but the present disclosure is not limited thereto.
The present disclosure provides a schematic illustration of the voice call toolbar, see fig. 17. The voice call toolbar 1700 includes a speaking object icon 1701, an invite option 1702, a microphone state setting option 1703, an audio device setting option 1704, and a call end option 1705; the speaking object icon 1701 indicates the name of the object currently speaking, "object AA"; 1706 is the view following information of the initiating object.
The present disclosure provides an illustration of the invite option and the object display option, see fig. 18 (for the voice call toolbar, reference may be made to fig. 16). In response to the triggering operation on the invite option 1801, the voice call toolbar jumps to an invite panel 1802; an object display option 1803 is displayed above the invite panel 1802, and the display elements of the object display option 1803 include the avatar of the object currently speaking and the number of people participating in the first voice call, "4". In response to a triggering operation on the object display option 1803, the panel jumps to display the object icons 1804 of participating object A, participating object B, and participating object C, together with their microphone identifiers 1805; participating object A and participating object C are in a state of being able to speak, and participating object B is in a state of being unable to speak. The invite panel displays a permission setting option 1806, in which a prompt message "others can join the voice call by sharing the following information" and the address information "XXX……" of the first shared document are displayed; the permission setting option 1806 provides various permissions for the first shared document: "viewable, commentable, editable". A copy button 1807 for copying the address information to generate an invitation instruction is also displayed in the invite panel, as is the view following information 1808 of the initiating object.
An exemplary diagram of the microphone state setting option is provided in the embodiment of the present disclosure, see fig. 19, where the terminal switches the microphone state in response to a trigger operation on the microphone state setting option 1901 in the voice call toolbar; in fig. 19 (a), the microphone is in an open state; in fig. 19 (b), the microphone is in a closed state.
The embodiment of the present disclosure provides a schematic diagram of the audio device setting option, see fig. 20, in which the terminal displays a setting panel 2002 of the audio device in response to a trigger operation on an audio device setting option 2001 in the voice call toolbar; for the audio input device, microphone line 1 and microphone line 2 are provided in the setting panel 2002, and microphone line 1 is selected; for the audio output device, speaker line 1 and speaker line 2 are provided in the setting panel 2002, and speaker line 1 is selected; 2003 is the view following information of the initiating object.
The embodiment of the present disclosure provides a schematic diagram of the call end option, see fig. 21, where the terminal displays a prompt message 2102 "After exiting, the voice call will end. Confirm ending the voice call?" in response to a trigger operation on a call end option 2101 in the voice call toolbar, and ends the first voice call in a case where the "confirm" option is selected; 2103 is the view following information of the initiating object.
The present disclosure provides a schematic diagram of the shared document during a voice call, see fig. 22, where fig. 22 (a) shows the display effect of the terminal where the initiating object is located: the first shared document displays the view following information 2201 of the initiating object, and a voice call toolbar 2202 (refer to the description of step 404) is also displayed in the first shared document. Fig. 22 (b) shows the display effect of the terminal where a participating object is located: the view following information and following control option 2203 of the participating object are displayed in the first shared document, where 2203 indicates that the participating object is following the document browsing view of the initiating object; a voice call toolbar 2204 (refer to the description of step 404) is also displayed in the first shared document.
The embodiment corresponding to fig. 4 above describes the process of making a voice call based on a shared document from the perspective of an initiating object. Next, some embodiments detail the process of joining an ongoing voice call based on a shared document from the perspective of a participating object. Fig. 23 is a flowchart illustrating a voice call method for a shared document according to an exemplary embodiment; as illustrated in fig. 23, the method is performed by a terminal and includes the following steps 2301 to 2306.
Step 2301, the terminal displays a voice call option of the second shared document based on the second shared document, wherein the voice call option indicates that the multiple objects of the second shared document are performing a second voice call.
In some embodiments, the terminal can display the voice call option based on the second shared document in different manners, referring to the following display manner one and display manner two.
Display mode one: the terminal displays the voice call option of the second shared document on the second shared document.
In some embodiments, the terminal provides, by displaying the voice call option on the second shared document, a function entry through which an object browsing the second shared document can directly join the second voice call. In some embodiments, the voice call option can also indicate the number of objects engaged in the second voice call. To facilitate understanding, the present disclosure provides a schematic diagram of the voice call option, see fig. 24, where the terminal displays the second shared document 2401 (the document title and the document content are shown in the figure), and a voice call option 2402 is displayed in the button bar above the second shared document 2401; for the remaining icons in the interface, refer to the description of fig. 5, which is not repeated here.
Display mode two: the terminal displays a voice call identifier on the document tag of the second shared document in a shared document list, and displays the voice call option of the second shared document based on a triggering operation on the document tag, where the voice call identifier indicates that multiple objects of the second shared document are engaged in the second voice call.
The shared document list arranges multiple shared documents in the form of document tags. In some embodiments, the shared document list provides, in the form of document tags, function entries for jumping to the shared documents.
In some embodiments, the document tags in the shared document list display the directory information of the shared documents layer by layer in the form of a directory tree, and the terminal displays the voice call identifier on the top-level tag corresponding to the root directory of the second shared document. This ensures that the voice call identifier is directly visible in the shared document list, efficiently indicating the shared document in which a voice call is in progress, without requiring additional operations to check whether a voice call is currently in progress in the second shared document.
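The placement rule above amounts to mapping a nested document to the top-level tag of its root directory. A sketch under the assumption that directory information is available as a slash-separated path (the path format is illustrative, not from the disclosure):

```python
def top_level_tag(doc_path: str) -> str:
    """Return the root-directory tag on which the voice call identifier
    should be displayed for a document nested in the directory tree."""
    return doc_path.strip("/").split("/")[0]
```

For example, a voice call in a document under "/team docs/projects/plan" would mark the top-level tag "team docs", so the identifier stays visible in the list without expanding the tree.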
In some embodiments, the terminal can display the voice call option of the second shared document based on the triggering operation on the document tag in various ways; that is, the terminal can provide, based on the document tag, various function entries for joining a voice call. Illustratively, the terminal can display a functional interface of the second shared document in response to the triggering operation on the document tag, where the functional interface includes a voice call icon for providing the voice call option and a jump icon for jumping to the second shared document.
Function entry 1: voice call icon.
In some embodiments, the terminal can provide the voice call icon directly in the functional interface; the voice call icon indicates that the second voice call is ongoing and provides the voice call option for joining the second voice call. On this basis, a one-step function entry for displaying the shared document and joining the voice call can be provided for the participating object, reducing the operations required to join the voice call and improving the efficiency of making the voice call.
Function entry 2: jump icon.
Wherein the jump icon is used to jump to the second shared document.
In some embodiments, the terminal can display the second shared document in response to the triggering operation on the jump icon, and then display the voice call option on the second shared document (as in display mode one). On this basis, the participating object is offered a separate, optional flow for browsing the shared document and joining the voice call, further improving the flexibility of making a voice call based on the shared document.
To facilitate understanding of display mode two, the present disclosure provides another schematic diagram of the voice call option, see fig. 25. In the shared document list 2501, multiple document tags are arranged and displayed; on the top-level tag 2502 of document tag 3 of the second shared document, in which a voice call is in progress, a voice call identifier 2503 is displayed, indicating that a shared document under the top-level tag 2502 has a voice call in progress. In response to a trigger operation on the top-level tag 2502, a function panel 2504 is displayed, which includes the document title of the second shared document, a voice call icon 2505 for providing the voice call option, and a jump icon 2506.
The display mode provided in fig. 25 can be applied to a PC terminal device; in other embodiments, in a case where the terminal is a mobile terminal device, all or part of the functions provided in display mode two can be implemented in the manner provided in fig. 26. The present disclosure provides another schematic diagram of the voice call option, see fig. 26: in the shared document list 2601, multiple document tags are arranged and displayed; on the top-level tag 2602 of the document tag of the second shared document, in which a voice call is in progress, a voice call identifier 2603 is displayed, indicating that a shared document under the top-level tag 2602 has a voice call in progress. A function panel 2604 is displayed in response to a trigger operation on the top-level tag 2602, the function panel 2604 including the document title of the second shared document, and a voice call icon 2605 for providing the voice call option is displayed in response to a trigger operation on the function panel 2604.
It should be noted that, in some embodiments, the display method provided in fig. 25 can also be applied to a mobile terminal device, and the display method provided in fig. 26 can also be applied to a PC terminal device, which is not limited in this disclosure.
Step 2302: the terminal joins the second voice call in response to the triggering operation on the voice call option.
The description of the trigger operation refers to the description of the trigger operation in step 202, and is not repeated herein.
In some embodiments, the terminal displays the voice call option based on function entry 1 in display mode two; in this example, the terminal can display the second shared document and join the second voice call in response to the triggering operation on the voice call icon.
Step 2303, during the process of the second voice call, the terminal displays a voice call toolbar at the target position of the second shared document, wherein the voice call toolbar is used for realizing multiple voice call functions.
In some embodiments, the voice call toolbar includes at least one of an object display option, an invite option, a microphone state setting option, an audio device setting option, and a call end option, so as to provide multiple voice call functions based on these function options. For the description of the voice call toolbar, refer to step 404, which is not repeated here. For a participating object, the call end option in the voice call toolbar provides a function of exiting the second voice call; it can be understood that, unlike the initiating object, which is the management role of the second voice call, a participating object exiting the voice call does not cause the voice call to end. In this example, the terminal where the participating object is located can exit the second voice call in response to the triggering operation on the call end option.
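The role distinction above — the call end option ends the call for the initiating object but only removes a participating object — can be sketched as follows; the call structure and names are illustrative assumptions:

```python
def on_call_end_option(obj: str, is_initiator: bool, call: dict) -> dict:
    """Triggering the call end option: the initiating object (the management
    role) ends the voice call for everyone; a participating object only exits,
    and the call continues for the remaining objects."""
    if is_initiator:
        call["active"] = False
    else:
        call["participants"].discard(obj)
    return call
```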
In some embodiments, the object icons of the initiating object and the participating objects of the second voice call are displayed differently to distinguish the initiating object sharing the document viewing perspective from the participating objects following the document viewing perspective of the initiating object.
Step 2304, the terminal displays the second shared document based on the document browsing view of the initiating object of the second voice call.
In some embodiments, the terminal displays a view frame in the second shared document based on a document browsing view of the initiating object of the second voice call, where the view frame is used to indicate a document area browsed by the initiating object. For the related introduction of the frame of the view angle and the document browsing view angle, refer to step 405, which is not described herein again.
In some embodiments, the terminal defaults to a document browsing perspective that follows the originating object when joining the second voice call.
In some embodiments, in the case that the participating object follows the document browsing perspective of the initiating object, the terminal is further capable of displaying a document area of the second shared document browsed by the participating object itself in a display area outside the perspective border. In other embodiments, the terminal may further be capable of displaying, in a display area outside the view angle frame, other shared documents browsed by the participating object, which is not limited by this disclosure.
Step 2305, the terminal displays the view following information and following control options of the participating object of the local terminal in the second shared document, and the following control options are used for setting the following state of the participating object.
The home terminal is also the terminal where the participating object is located.
In some embodiments, the process of the participating object setting its own following state may include case 1 and case 2 described below.
And 1, if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, the terminal can respond to the triggering operation of the following control option and follow the document browsing view angle of the initiating object.
And 2, if the view following information of the participating object is in a following state and the following control option is displayed as a quitting function, the terminal can respond to the triggering operation of the following control option and quit following the document browsing view of the initiating object.
In the embodiment of the present disclosure, reference is made to step 406 and fig. 13 for a display mode of the view following information and the following control option of the participating object, which is not described herein again.
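The two cases above amount to a single toggle on the participating object's following state. As an illustrative sketch only — the class name and the option labels below are hypothetical, not part of the disclosed embodiment:

```python
class FollowControl:
    """Sketch of the following-state toggle described in case 1 and case 2."""

    def __init__(self, following=False):
        # Whether the participating object currently follows the
        # initiating object's document browsing view angle.
        self.following = following

    @property
    def option_label(self):
        # The following control option is displayed as an opening function
        # when not following, and as a quitting function when following.
        return "quit following" if self.following else "follow initiator"

    def on_trigger(self):
        # Case 1: non-following -> follow the initiator's view angle.
        # Case 2: following -> quit following the initiator's view angle.
        self.following = not self.following
        return self.following
```

A terminal would re-render the option label after each trigger, which is why the sketch exposes the label as a property derived from the state.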
Through the above technical solution, in response to the triggering operation on the following control option, the terminal can provide any participating object with a function of setting its following state with respect to the initiating object, so that each participating object can freely switch its document browsing view angle. This ensures the controllability of each object over the browsing view angle of the shared document, allows each object in the voice call to freely use the shared document while carrying out the voice call, and effectively improves the man-machine interaction efficiency.
Step 2306, during the second voice call, the terminal displays, in the second shared document, the cursor of the initiating object of the second voice call and the cursor of the participating object.
For this step, refer to step 407, which is not described herein again.
In some embodiments, the terminal displays the cursor of the initiating object and the cursor of the participating object in the second shared document, so that the terminal where a participating object is located can show that participating object the content position of the initiating object in the second shared document. A multi-object interaction function is thereby realized within the content of the shared document, further improving the efficiency of communication and collaboration among the objects during a voice call based on the shared document.
Through the above technical solution, a plurality of objects can carry out a voice call based on the shared document, so that each object in the voice call can freely use the shared document while carrying out the voice call without frequently switching between the shared document and the voice call, thereby effectively improving the man-machine interaction efficiency.
Furthermore, a plurality of functional entries for joining the voice call are provided for the participating objects, which fully covers various scenarios of carrying out a voice call based on the shared document and further improves the man-machine interaction efficiency.
FIG. 27 is a block diagram of a voice call device for a shared document according to an exemplary embodiment. Referring to fig. 27, the apparatus includes:
a display unit 2701 configured to display a call initiation option on a first shared document, the first shared document being used to provide a document service for a plurality of objects;
an initiating unit 2702 configured to, in response to a triggering operation on the call initiation option, initiate a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document;
a call unit 2703 configured to carry out a first voice call in a case where any target object accepts the voice call request.
In one possible embodiment, the initiating unit 2702 is configured to:
initiate, in response to a triggering operation on the call initiation option, a voice call request to all target objects of the first shared document;
or,
display, in response to a triggering operation on the call initiation option, a target object list, where the target object list includes all target objects of the first shared document, and initiate, in response to a selection operation on a part of the target objects among all the target objects, the voice call request to the part of the target objects.
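The two initiation paths described above (requesting all target objects, or only a selected subset chosen from the target object list) can be sketched as follows; the function and parameter names are illustrative assumptions, not part of the disclosed embodiment:

```python
def initiate_voice_call(all_targets, selected=None):
    """Return the recipients of the voice call request.

    all_targets: every object currently using the first shared document.
    selected: an optional subset chosen from the displayed target object
    list; when omitted, the request goes to all target objects.
    """
    if selected is None:
        return list(all_targets)
    chosen = set(selected)
    # Preserve the order of the target object list while keeping only
    # the objects picked by the selection operation.
    return [t for t in all_targets if t in chosen]
```

In a real client, each returned recipient would receive the voice call request via the server; the sketch only models the recipient selection.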
In one possible embodiment, the voice call device for sharing a document further includes:
a tool display unit configured to display, in the process of the first voice call, a voice call toolbar at a target position of the first shared document, where the voice call toolbar is used to realize a plurality of voice call functions.
In one possible embodiment, the voice call device for sharing a document further includes:
an utterance display unit configured to display, at a designated position of the voice call toolbar, an object icon of an object speaking in the first voice call.
In one possible embodiment, the voice call toolbar includes an object display option, and the voice call apparatus for sharing a document further includes:
an object display unit configured to display object icons of a plurality of participating objects of the first voice call in response to a triggering operation on the object display option.
In one possible implementation, the voice call toolbar includes an invite option, and the voice call apparatus for sharing a document further includes:
an invitation unit configured to display, in response to a triggering operation on the invite option, the address information of the first shared document and a permission setting option of an object to be invited for the first shared document;
and send an invitation request to the object to be invited based on a setting operation on the permission setting option and the address information, where the invitation request is used to invite the object to be invited to join the first voice call of the first shared document.
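The invitation flow above (a permission setting plus the document's address information leading to an invitation request) might look like the following sketch; the permission names and field names are assumptions for illustration, since the embodiment does not enumerate them:

```python
def build_invitation(document_address, permission):
    """Combine the shared document's address information with the
    permission chosen for the object to be invited."""
    # "read" and "edit" are assumed permission levels; the actual
    # levels are not enumerated in the embodiment.
    if permission not in ("read", "edit"):
        raise ValueError("unsupported permission level")
    return {
        "document_address": document_address,
        "permission": permission,
        # The request invites the object to join the first voice call
        # of the first shared document.
        "action": "join_voice_call",
    }
```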
In one possible embodiment, the voice call toolbar includes a microphone status setting option, and the voice call apparatus for sharing documents further includes:
a microphone state setting unit configured to set, in response to a triggering operation on the microphone status setting option, the microphone of the local terminal device to a corresponding state.
In one possible implementation, the voice call toolbar includes an audio device setting option, and the voice call apparatus for sharing a document further includes:
an audio device setting unit configured to set, in response to a setting operation on the audio device setting option, the audio device adopted by the first voice call on the local terminal device.
In one possible embodiment, the voice call toolbar includes a call end option, and the voice call apparatus for sharing a document further includes:
an ending unit configured to end the first voice call in response to a triggering operation on the call end option.
In one possible embodiment, the voice call device for sharing a document further includes:
a view angle display unit configured to display, in the first shared document, view angle following information of at least one participating object of the first voice call, where the view angle following information is used to indicate whether the participating object follows the document browsing view angle of the initiating object of the first voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
a view angle control unit configured to display, in the first shared document, view angle following information and a following control option of at least one participating object of the first voice call, the following control option being used to set the following state of the participating object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, control, in response to the triggering operation on the following control option, the participating object to follow the document browsing view angle of the initiating object;
and if the view angle following information of the participating object is in a following state and the following control option is displayed as an exit function, control, in response to the triggering operation on the following control option, the participating object to exit the following of the document browsing view angle of the initiating object.
In one possible embodiment, the voice call device for sharing a document further includes:
a frame display unit configured to display, based on the document browsing view angle of the initiating object of the first voice call, a view angle frame in the first shared document, where the view angle frame is used to indicate the document area browsed by the initiating object.
In one possible embodiment, the voice call device for sharing a document further includes:
a cursor display unit configured to display, in the first shared document, a cursor of the initiating object of the first voice call and a cursor of a following object among the participating objects of the first voice call.
In one possible embodiment, the object icons of the initiating object and the participating object of the first voice call are displayed differently.
Through the above technical solution, a plurality of objects can carry out a voice call based on the shared document, so that each object in the voice call can freely use the shared document while carrying out the voice call without frequently switching between the shared document and the voice call, thereby effectively improving the man-machine interaction efficiency.
It should be noted that: in the voice call apparatus for sharing a document provided in the above embodiment, when the corresponding steps are executed, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the voice call device for sharing a document and the voice call method embodiment for sharing a document provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
FIG. 28 is a block diagram of a voice call device for a shared document according to an exemplary embodiment. Referring to fig. 28, the apparatus includes:
a display unit 2801 configured to display, based on the second shared document, a voice call option of the second shared document, where the voice call option indicates that a plurality of objects of the second shared document are carrying out a second voice call;
a join call unit 2802 configured to join the second voice call in response to a triggering operation on the voice call option.
In one possible implementation, the display unit 2801 includes:
a first display module configured to display, on the second shared document, a voice call option of the second shared document.
In one possible embodiment, the display unit 2801 includes:
a second display module configured to display a voice call identifier on a document tag of the second shared document in the shared document list, the voice call identifier indicating that a plurality of objects of the second shared document are in a second voice call;
and display a voice call option of the second shared document based on a triggering operation on the document tag.
In one possible implementation, the second display module is configured to:
display, in response to a triggering operation on the document tag, a function interface of the second shared document, where the function interface includes a voice call icon and a jump icon, the voice call icon is used to provide the voice call option, and the jump icon is used to jump to the second shared document;
and display, in response to a triggering operation on the jump icon, the second shared document, and display the voice call option on the second shared document;
the join call unit 2802 is configured to:
display, in response to a triggering operation on the voice call icon, the second shared document and join the second voice call.
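The two entry paths from the function interface (via the jump icon, or directly via the voice call icon) can be sketched as follows, with hypothetical trigger and action names that are not part of the disclosed embodiment:

```python
def handle_function_interface(trigger):
    """Return the actions the terminal performs for each icon."""
    if trigger == "jump_icon":
        # Jump to the second shared document; the voice call option is
        # then displayed on the document itself.
        return ["display_document", "display_voice_call_option"]
    if trigger == "voice_call_icon":
        # Display the second shared document and join the second voice
        # call in one step.
        return ["display_document", "join_voice_call"]
    raise ValueError("unknown trigger")
```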
In one possible embodiment, the voice call device for sharing a document further includes:
a tool display unit configured to display, in the process of the second voice call, a voice call toolbar at the target position of the second shared document, where the voice call toolbar is used to realize a plurality of voice call functions.
In one possible embodiment, the voice call toolbar includes a call end option, and the apparatus further includes:
an exit unit configured to exit the second voice call in response to a triggering operation on the call end option.
In one possible embodiment, the voice call device for sharing a document further includes:
a view angle display unit configured to display the second shared document based on the document browsing view angle of the initiating object of the second voice call.
In one possible embodiment, the voice call device for sharing a document further includes:
a view angle control unit configured to display, in the second shared document, the view angle following information and a following control option of the participating object of the home terminal, where the following control option is used to set the following state of the participating object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, follow, in response to the triggering operation on the following control option, the document browsing view angle of the initiating object;
and if the view angle following information of the participating object is in a following state and the following control option is displayed as an exit function, exit, in response to the triggering operation on the following control option, the following of the document browsing view angle of the initiating object.
In one possible embodiment, the object icons of the initiating object and the participating object of the second voice call are displayed differently.
Through the above technical solution, a plurality of objects can carry out a voice call based on the shared document, so that each object in the voice call can freely use the shared document while carrying out the voice call without frequently switching between the shared document and the voice call, thereby effectively improving the man-machine interaction efficiency.
It should be noted that: in the voice call device for sharing a document according to the foregoing embodiment, when executing corresponding steps, the division of each functional module is only used for illustration, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the voice call device for sharing a document and the voice call method embodiment for sharing a document provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
In an embodiment of the present disclosure, an electronic device is further provided. The electronic device includes a processor and a memory, where the memory is used to store at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the above voice call method for a shared document. The electronic device can be implemented as the terminal. Fig. 29 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment. Referring to fig. 29, the terminal 2900 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 2900 includes: a processor 2901, and a memory 2902.
The processor 2901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2901 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2902 may include one or more computer-readable storage media, which may be non-transitory. Memory 2902 can also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 2902 is used to store at least one program code for execution by the processor 2901 to implement processes performed by the terminals in the voice call method for sharing documents provided by method embodiments in the present disclosure.
In some embodiments, the terminal 2900 may also optionally include: a peripheral interface 2903 and at least one peripheral. The processor 2901, memory 2902, and peripheral interface 2903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 2903 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2904, a display 2905, a camera assembly 2906, an audio circuit 2907, a positioning assembly 2908, and a power source 2909.
Peripheral interface 2903 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 2901 and the memory 2902. In some embodiments, the processor 2901, the memory 2902, and the peripheral interface 2903 are integrated on the same chip or circuit board; in some other embodiments, any one or more of the processor 2901, the memory 2902, and the peripheral interface 2903 can be implemented on a separate chip or circuit board, which is not limited by the embodiments of the disclosure.
The Radio Frequency circuit 2904 is used for receiving and transmitting RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 2904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. In some embodiments, radio frequency circuitry 2904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Radio frequency circuitry 2904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2904 may also include NFC (Near Field Communication) related circuitry, which the present disclosure does not limit.
The display screen 2905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2905 is a touch display, the display 2905 also has the ability to capture touch signals on or over the surface of the display 2905. The touch signal may be input to the processor 2901 as a control signal for processing. At this point, display 2905 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 2905 may be one, disposed on a front panel of the terminal 2900; in other embodiments, the display 2905 may be at least two, each disposed on a different surface of the terminal 2900 or in a folded design; in other embodiments, the display 2905 can be a flexible display disposed on a curved surface or on a folded surface of the terminal 2900. Even further, the display 2905 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 2905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
Camera assembly 2906 is used to capture images or video. In some embodiments, camera assembly 2906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and can be used for light compensation under different color temperatures.
The audio circuitry 2907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2901 for processing, or inputting the electric signals to the radio frequency circuit 2904 for realizing voice communication. The microphones may be provided in a plurality, respectively, at different locations of the terminal 2900 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2901 or the radio frequency circuit 2904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 2907 may also include a headphone jack.
The positioning component 2908 is used to locate the current geographic location of the terminal 2900 for navigation or LBS (Location Based Service).
A power supply 2909 is used to supply power to the various components in the terminal 2900. The power supply 2909 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 2909 includes a rechargeable battery, the rechargeable battery can support wired charging or wireless charging. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 2900 also includes one or more sensors 2910. The one or more sensors 2910 include, but are not limited to: an acceleration sensor 2911, a gyro sensor 2912, a pressure sensor 2913, a fingerprint sensor 2914, an optical sensor 2915, and a proximity sensor 2916.
The acceleration sensor 2911 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 2900. For example, the acceleration sensor 2911 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 2901 may control the display screen 2905 to display a user page in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2911. The acceleration sensor 2911 may also be used for acquisition of motion data of a game or a user.
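Choosing between landscape and portrait from the gravity components can be sketched as below, assuming a simple magnitude comparison; real systems also apply thresholds and hysteresis, which this illustrative sketch omits:

```python
def choose_orientation(gx, gy):
    """Pick the display orientation from the components of gravitational
    acceleration along the device's x and y axes (m/s^2)."""
    # Gravity dominant along the x axis means the device is held sideways.
    return "landscape" if abs(gx) > abs(gy) else "portrait"
```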
The gyro sensor 2912 may detect a body direction and a rotation angle of the terminal 2900, and the gyro sensor 2912 may collect a 3D motion of the user on the terminal 2900 in cooperation with the acceleration sensor 2911. The processor 2901, based on data collected by the gyro sensor 2912, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 2913 may be disposed on a side frame of the terminal 2900 and/or on a lower layer of the display 2905. When the pressure sensor 2913 is disposed on the side frame of the terminal 2900, a user's holding signal to the terminal 2900 may be detected, and the processor 2901 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 2913. When the pressure sensor 2913 is disposed at the lower layer of the display 2905, the processor 2901 controls the operability control on the UI page according to the pressure operation of the user on the display 2905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2914 is used to collect a fingerprint of the user, and the processor 2901 identifies the user according to the fingerprint collected by the fingerprint sensor 2914, or the fingerprint sensor 2914 itself identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 2901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2914 may be disposed on the front, back, or side of the terminal 2900. When a physical key or vendor logo is provided on the terminal 2900, the fingerprint sensor 2914 may be integrated with the physical key or vendor logo.
The optical sensor 2915 is used to collect the ambient light intensity. In one embodiment, the processor 2901 may control the display brightness of the display screen 2905 based on the ambient light intensity collected by the optical sensor 2915. Specifically, when the ambient light intensity is high, the display brightness of the display screen 2905 is increased; when the ambient light intensity is low, the display brightness of display 2905 is turned down. In another embodiment, the processor 2901 may also dynamically adjust the shooting parameters of the camera assembly 2906 based on the ambient light intensity collected by the optical sensor 2915.
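The brightness adjustment can be sketched as a mapping from ambient light intensity to a brightness level. The embodiment only states that brightness rises and falls with ambient light; the linear mapping and the 0-1000 lux working range below are illustrative assumptions:

```python
def adjust_brightness(ambient_lux, min_level=10, max_level=100):
    """Map ambient light intensity to a display brightness level."""
    clamped = max(0, min(ambient_lux, 1000))  # assumed working range in lux
    # Linear interpolation between the minimum and maximum brightness levels.
    return min_level + (max_level - min_level) * clamped // 1000
```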
The proximity sensor 2916, also called a distance sensor, is generally provided on the front panel of the terminal 2900. The proximity sensor 2916 is used to collect the distance between the user and the front of the terminal 2900. In one embodiment, when the proximity sensor 2916 detects that the distance between the user and the front of the terminal 2900 gradually decreases, the processor 2901 controls the display 2905 to switch from a screen-on state to a screen-off state; when the proximity sensor 2916 detects that the distance between the user and the front of the terminal 2900 gradually increases, the processor 2901 controls the display 2905 to switch from the screen-off state to the screen-on state.
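The proximity-driven screen switching reduces to comparing successive distance readings; the state names and distance units in this sketch are illustrative only:

```python
def next_screen_state(prev_distance, new_distance, current_state):
    """Switch the screen state as the user approaches or leaves the
    front panel of the terminal."""
    if new_distance < prev_distance:
        return "screen_off"  # user approaching: darken the screen
    if new_distance > prev_distance:
        return "screen_on"   # user moving away: light the screen
    return current_state     # unchanged distance keeps the current state
```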
Those skilled in the art will appreciate that the configuration shown in fig. 29 is not intended to be limiting of terminal 2900, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium including program code, such as the memory 2902 including program code, is also provided, where the program code is executable by the processor 2901 of the terminal 2900 to perform the above voice call method for a shared document. Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact-Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided that includes one or more instructions for execution by one or more processors of an electronic device to enable the electronic device to perform the above-described method of voice call for sharing a document.
In some embodiments, a computer program according to the embodiments of the present disclosure may be deployed to be executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed at multiple sites and interconnected by a communication network; the multiple computer devices distributed at multiple sites and interconnected by a communication network may constitute a blockchain system.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (28)

1. A voice call method for a shared document, the method comprising:
displaying a call initiation option on a first shared document, wherein the first shared document is used for providing document service for a plurality of objects;
responding to a triggering operation of the call initiation option, and initiating a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document;
and in a case that any target object accepts the voice call request, carrying out a first voice call.
2. The method for voice call of shared document according to claim 1, wherein the initiating a voice call request based on at least one target object of the first shared document in response to the triggering operation of the call initiation option comprises:
responding to the triggering operation of the call initiating option, and initiating a voice call request to all target objects of the first shared document;
or,
and responding to the triggering operation of the call initiating option, displaying a target object list, wherein the target object list comprises all target objects of the first shared document, and responding to the selection operation of a part of target objects in all the target objects, and initiating the voice call request to the part of target objects.
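The two dispatch modes of claim 2 — requesting all targets of the document, or only a subset picked from a target object list — can be illustrated with a short sketch. The names (`VoiceCallRequest`, `dispatch_call_requests`) are hypothetical; the claim does not prescribe any particular implementation.

```python
# Hypothetical sketch of the two request-dispatch modes in claim 2.
from dataclasses import dataclass

@dataclass
class VoiceCallRequest:
    document_id: str
    target_id: str

def dispatch_call_requests(document_id, all_targets, selected_targets=None):
    """Send a voice call request to every target object of the shared
    document, or only to the subset selected from the target object list."""
    targets = selected_targets if selected_targets is not None else all_targets
    return [VoiceCallRequest(document_id, t) for t in targets]
```

For example, with three target objects and no selection, three requests are produced; selecting only one target from the list produces a single request.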
3. The method of claim 1, further comprising:
and in the process of the first voice call, displaying a voice call toolbar at the target position of the first shared document, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
4. The method of claim 3, further comprising:
and displaying an object icon speaking in the first voice call at a designated position of the voice call toolbar.
5. The voice call method for sharing a document according to claim 3, wherein the voice call toolbar includes an object display option, the method further comprising:
and in response to the triggering operation of the object display option, displaying object icons of a plurality of participation objects of the first voice call.
6. The voice call method for sharing a document according to claim 3, wherein the voice call toolbar includes an invitation option, the method further comprising:
responding to the triggering operation of the invitation option, and displaying the address information of the first shared document and the permission setting option of the object to be invited for the first shared document;
and sending an invitation request to the object to be invited based on the setting operation of the permission setting option and the address information, wherein the invitation request is used for inviting the object to be invited to join the first voice call of the first shared document.
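The invitation flow of claim 6 combines the document's address information with the permission chosen for the invitee before the invitation request is sent. A minimal sketch follows; all names and the `read`/`edit` permission set are illustrative assumptions, not part of the claim.

```python
# Hypothetical sketch of claim 6's invitation request: it carries both the
# address information of the shared document and the permission configured
# via the permission setting option.
def build_invitation(document_url, invitee, permission):
    if permission not in ("read", "edit"):   # assumed permission levels
        raise ValueError("unsupported permission")
    return {
        "invitee": invitee,
        "document_url": document_url,  # address information of the shared document
        "permission": permission,      # set via the permission setting option
        "purpose": "join_voice_call",  # invites the object to the first voice call
    }
```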
7. The voice call method for sharing a document of claim 3, wherein the voice call toolbar includes a microphone status setting option, the method further comprising:
and responding to the triggering operation of the microphone state setting option, and setting the microphone of the local terminal equipment to be in a corresponding state.
8. The voice call method for sharing a document according to claim 3, wherein the voice call toolbar includes an audio device setting option, the method further comprising:
and responding to the setting operation of the audio equipment setting option, and setting the audio equipment adopted by the first voice call on the local terminal equipment.
9. The voice call method for sharing a document of claim 3, wherein the voice call toolbar includes a call ending option, the method further comprising:
and in response to the triggering operation of the call ending option, ending the first voice call.
10. The method of claim 1, further comprising:
displaying, in the first shared document, perspective following information of at least one participant of the first voice call, where the perspective following information is used to indicate whether the participant is following a document browsing perspective of an initiator of the first voice call.
11. The voice call method for sharing a document according to claim 10, wherein the method further comprises:
displaying, in the first shared document, perspective following information and following control options of at least one participating object of the first voice call, the following control options being used to set a following state of the participating object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and controlling the participating object to follow the document browsing view angle of the initiating object;
and if the view angle following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and controlling the participating object to exit from following the document browsing view angle of the initiating object.
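The follow-control option of claim 11 is a single toggle whose label and effect depend on the participant's current following state: "open" starts following the initiator's document browsing view angle, "exit" stops it. A hypothetical sketch (function names are illustrative):

```python
# Hypothetical sketch of claim 11's follow-control toggle.
def toggle_following(is_following: bool) -> bool:
    """Triggering the follow control option flips the following state:
    a non-following participant starts following the initiator's view
    angle, and a following participant exits following."""
    return not is_following

def option_label(is_following: bool) -> str:
    """The option is displayed as an exit function while following,
    and as an opening function while not following."""
    return "exit following" if is_following else "follow initiator"
```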
12. The voice call method for sharing a document according to claim 1, wherein the method further comprises:
and displaying a view angle frame in the first shared document based on a document browsing view angle of an initiating object of the first voice call, wherein the view angle frame is used for indicating a document area browsed by the initiating object.
13. The voice call method for sharing a document according to claim 1, wherein the method further comprises:
and displaying a cursor of an initiating object of the first voice call and a cursor of a following object in a participating object of the first voice call in the first shared document.
14. The voice call method for sharing a document according to claim 1, wherein the object icons of the originating object and the participating object of the first voice call are displayed in different manners.
15. A voice call method for a shared document, the method comprising:
displaying a voice call option of a second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are in a second voice call;
and responding to the triggering operation of the voice call option, and joining the second voice call.
16. The method for voice call of shared document according to claim 15, wherein the displaying the voice call option of the second shared document based on the second shared document comprises:
displaying, on the second shared document, a voice call option of the second shared document.
17. The method for voice call of shared document according to claim 15, wherein the displaying the voice call option of the second shared document based on the second shared document comprises:
displaying a voice call identifier on a document tag of the second shared document in a shared document list, wherein the voice call identifier indicates that a plurality of objects of the second shared document are carrying out a second voice call;
and displaying a voice call option of the second shared document based on the triggering operation of the document tag.
18. The method for voice call of shared document according to claim 17, wherein the displaying the voice call option of the second shared document based on the triggering operation of the document tag comprises:
responding to the triggering operation of the document tag, displaying a function interface of the second shared document, wherein the function interface comprises a voice call icon and a jump icon, the voice call icon is used for providing the voice call option, and the jump icon is used for jumping to the second shared document;
responding to the trigger operation of the jump icon, displaying the second shared document, and displaying the voice call option on the second shared document;
the joining the second voice call in response to the triggering operation of the voice call option includes:
and responding to the trigger operation of the voice call icon, displaying the second shared document and joining the second voice call.
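Claim 18 distinguishes two entry points on the function interface: the jump icon only opens the second shared document (where the voice call option is then displayed), while the voice call icon opens the document and joins the second voice call in one step. A hypothetical routing sketch (names assumed for illustration):

```python
# Hypothetical sketch of the two icons on claim 18's function interface.
def handle_icon(icon: str) -> dict:
    """Return the UI actions triggered by each icon: the jump icon only
    displays the document; the voice call icon also joins the call."""
    actions = {
        "jump": {"display_document": True, "join_call": False},
        "voice_call": {"display_document": True, "join_call": True},
    }
    return actions[icon]  # unknown icons raise KeyError
```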
19. The voice call method for sharing a document according to claim 15, wherein the method further comprises:
and displaying a voice call toolbar at a target position of the second shared document in the process of the second voice call, wherein the voice call toolbar is used for realizing a plurality of voice call functions.
20. The method of claim 19, wherein the voice call toolbar includes a call ending option, the method further comprising:
and exiting the second voice call in response to the triggering operation of the call ending option.
21. The voice call method for sharing a document according to claim 15, wherein after the joining of the second voice call in response to the triggering operation of the voice call option, the method further comprises:
and displaying the second shared document based on the document browsing view angle of the initiating object of the second voice call.
22. The voice call method for sharing a document according to claim 15, wherein the method further comprises:
displaying view following information and following control options of a participating object of a local terminal in the second shared document, wherein the following control options are used for setting the following state of the participating object;
if the view angle following information of the participating object is in a non-following state and the following control option is displayed as an opening function, responding to the triggering operation of the following control option, and following the document browsing view angle of the initiating object;
and if the view following information of the participating object is in a following state and the following control option is displayed as an exit function, responding to the triggering operation of the following control option, and exiting the following of the document browsing view of the initiating object.
23. The voice call method for sharing a document according to claim 15, wherein the object icons of the originating object and the participating object of the second voice call are displayed in different manners.
24. A voice call apparatus for sharing a document, the apparatus comprising:
a display unit configured to display a call initiation option on a first shared document, wherein the first shared document is used for providing document service for a plurality of objects;
an initiating unit configured to initiate, in response to a triggering operation of the call initiation option, a voice call request based on at least one target object of the first shared document, wherein the target object is an object using the first shared document;
and a call unit configured to conduct a first voice call in a case that any target object accepts the voice call request.
25. A voice call apparatus for sharing a document, the apparatus comprising:
a display unit configured to display a voice call option of a second shared document based on the second shared document, the voice call option indicating that a plurality of objects of the second shared document are making a second voice call;
and a call joining unit configured to join the second voice call in response to a triggering operation of the voice call option.
26. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing program code executable by the one or more processors;
wherein the processor is configured to execute the program code to implement a voice call method of sharing a document as claimed in any one of claims 1 to 23.
27. A computer-readable storage medium, wherein program code in the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform a voice call method of sharing a document according to any one of claims 1 to 23.
28. A computer program product comprising one or more instructions, the one or more instructions being executable by one or more processors of an electronic device to enable the electronic device to perform the voice call method for a shared document of any one of claims 1 to 23.
CN202210976170.8A 2022-08-15 2022-08-15 Voice call method, device, electronic equipment and storage medium for sharing document Active CN115348240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210976170.8A CN115348240B (en) 2022-08-15 2022-08-15 Voice call method, device, electronic equipment and storage medium for sharing document


Publications (2)

Publication Number Publication Date
CN115348240A true CN115348240A (en) 2022-11-15
CN115348240B CN115348240B (en) 2023-11-21

Family

ID=83951644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210976170.8A Active CN115348240B (en) 2022-08-15 2022-08-15 Voice call method, device, electronic equipment and storage medium for sharing document

Country Status (1)

Country Link
CN (1) CN115348240B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109669924A (en) * 2018-12-24 2019-04-23 天津字节跳动科技有限公司 Sharing method, device, electronic equipment and the storage medium of online document
CN109800594A (en) * 2018-12-14 2019-05-24 平安普惠企业管理有限公司 Document access authority management method, device and computer equipment
CN109976617A (en) * 2019-04-03 2019-07-05 腾讯科技(深圳)有限公司 Document display method and apparatus
CN111144074A (en) * 2018-11-05 2020-05-12 腾讯科技(深圳)有限公司 Document cooperation method and device, computer readable storage medium and computer equipment
CN114371896A (en) * 2021-12-30 2022-04-19 北京字跳网络技术有限公司 Prompting method, device, equipment and medium based on document sharing
CN114398858A (en) * 2022-01-06 2022-04-26 腾讯科技(深圳)有限公司 Document display method, related device, equipment and storage medium
CN114461580A (en) * 2021-12-23 2022-05-10 北京达佳互联信息技术有限公司 Online document sharing method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144074A (en) * 2018-11-05 2020-05-12 腾讯科技(深圳)有限公司 Document cooperation method and device, computer readable storage medium and computer equipment
CN109800594A (en) * 2018-12-14 2019-05-24 平安普惠企业管理有限公司 Document access authority management method, device and computer equipment
CN109669924A (en) * 2018-12-24 2019-04-23 天津字节跳动科技有限公司 Sharing method, device, electronic equipment and the storage medium of online document
CN109976617A (en) * 2019-04-03 2019-07-05 腾讯科技(深圳)有限公司 Document display method and apparatus
CN113157168A (en) * 2019-04-03 2021-07-23 腾讯科技(深圳)有限公司 Document display method and device
CN114461580A (en) * 2021-12-23 2022-05-10 北京达佳互联信息技术有限公司 Online document sharing method and device, electronic equipment and storage medium
CN114371896A (en) * 2021-12-30 2022-04-19 北京字跳网络技术有限公司 Prompting method, device, equipment and medium based on document sharing
CN114398858A (en) * 2022-01-06 2022-04-26 腾讯科技(深圳)有限公司 Document display method, related device, equipment and storage medium

Also Published As

Publication number Publication date
CN115348240B (en) 2023-11-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant