WO2015035865A1 - Methods and systems for controlling microphone order - Google Patents

Methods and systems for controlling microphone order

Info

Publication number
WO2015035865A1
Authority
WO
WIPO (PCT)
Prior art keywords
microphone
server
order
client side
time point
Prior art date
Application number
PCT/CN2014/085753
Other languages
French (fr)
Inventor
Zhe LUO
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015035865A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H04L65/4038 Arrangements for multi-party communication, e.g. for conferences with floor control

Definitions

  • the present disclosure generally relates to the field of Internet communication technology and, more particularly, relates to methods, servers, client terminals, and computer systems for controlling microphone order.
  • the multiplayer voice/video business is a social business based on multimedia.
  • the control of a speaking order is an important capability.
  • the speaking order can easily become confused in multiplayer voice/video applications.
  • the user experience can be affected as a result.
  • the two characteristics of the microphone order module are 1) the microphone takeover time length; 2) the microphone order adjusting function.
  • the microphone takeover time length is usually configured to restrict the speaking time of a member who is participating in the voice conversation. For example, in some products, after the member participating in the voice conversation obtains a speaking right, the corresponding user can obtain a microphone takeover time length as long as 60s. After the microphone takeover time length expires, the client side can be forced to exit the speaking status to signal the end of the speaking session.
  • the microphone order adjusting function is usually configured to adjust the speaking order of members participating in the voice conversation. For example, the adjusting function can be moving the speaking order of a member up, down, or to the top, etc.
  • the control logic of the current microphone order module is usually completed at the client side, or the control logic of the microphone order is usually written into the hard coding at the client side.
  • the common practice is to fix the microphone takeover time length at the client side.
  • client sides with the same version have the same microphone takeover time lengths.
  • the modification of the microphone takeover time length needs to wait until the release of a new version of the client side.
  • the common practice is to assign a separate command for each adjusting method as a preset instruction fixed at the client side.
  • the microphone takeover time length and the microphone order control function are all tightly related to the client side version.
  • the microphone takeover time length and the functions that can support an administrator to adjust the microphone order are already fixed. Newer functions can only be experienced by updating the client side to a newer version.
  • the corresponding experience can only be obtained by updating the client side, and a client side running an old version can show abnormal behavior.
  • a framework needs to be proposed to achieve the control of the microphone order from the server side, so as to satisfy the requirement of experiencing new function designs under the microphone order module without updating the client side.
  • the control of the microphone takeover time length can be performed by the server. More specifically, the server usually sends a microphone takeover time length to a client side. The client side starts to count down after receiving the microphone takeover time length. Thus, the client side can experience the newly set microphone takeover time length without updating to a new version.
  • however, because the time lengths of information transmission from the server to different client sides are different, the actual speaking time length at the client side often deviates from the microphone takeover time length notified by the server.
  • client sides 102, 104, 106, 108, and 110 are a plurality of members in the voice conversation.
  • a server 100 is configured to control these members.
  • the information transmitting time length between the client sides 102, 104, 106 and the server 100 is 0.8s, but the information transmitting time length between the client sides 108, 110 and the server 100 is 2.8s.
  • although the microphone takeover time lengths notified by the server to the client sides 102 and 108 are the same, the client sides 102 and 108 receive the notification at different times.
  • because the server starts to count down the speaking time from the time the notification was sent out, the actual speaking times of the client side 102 and the client side 108 differ by 2s, which can result in a situation where the speaking is terminated before the speaking time displayed on the client side is over.
  • the microphone handover time for members participating in the voice conversation is thus hard to control, and the user experience is also affected.
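  • as a worked illustration of the skew just described (not part of the patent text), the following sketch uses the 60s takeover length and the 0.8s/2.8s transmitting time lengths from the scenario above to show how a server-side countdown silently shortens the slower client side's window by 2s.

```python
# Timing skew illustration; values taken from the FIG. 1 scenario above,
# the function name is illustrative only.

TAKEOVER_LENGTH_S = 60.0  # speaking time granted by the server, in seconds

def effective_speaking_window(transmit_delay_s: float) -> float:
    """Speaking time actually left once the notification reaches the client side.

    The server starts its countdown when the notification is sent, so the
    one-way transmitting time length is silently subtracted from the window.
    """
    return TAKEOVER_LENGTH_S - transmit_delay_s

fast = effective_speaking_window(0.8)   # client sides 102/104/106 -> 59.2s left
slow = effective_speaking_window(2.8)   # client sides 108/110     -> 57.2s left
print(f"skew between fast and slow client sides: {fast - slow:.1f}s")  # 2.0s
```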
  • the disclosed method, apparatus and system are directed to solve one or more problems set forth above and other problems.
  • a server sends a first message that is configured to indicate a participation in a first voice conversation to a first client side when a speaking status of the first client side is switched from “waiting to speak” to “ready to speak”, wherein the first client side and the server are synchronized in a timeline.
  • the server notifies the first client side or all members participating in the first voice conversation including the first client side of a microphone handover time point corresponding to the first client side, wherein the microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
  • the client side receives a first message sent by a server that is configured to indicate a participation in a first voice conversation (i.e., that a speaking status of the client side is switched from “waiting to speak” to “ready to speak”), and switches the speaking status of the client side from “waiting to speak” to “ready to speak” based on the first message, wherein the client side and the server are synchronized in a timeline.
  • the client side receives a microphone handover time point notified by the server corresponding to the client side, and switches the speaking status of the client side from “ready to speak” to “waiting to speak” when the microphone handover time point of the client side arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
  • a first sending unit is configured to send a first message that is configured to indicate a participation in a first voice conversation to a first client terminal when a speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak” .
  • the first client terminal and the server are synchronized in a timeline.
  • a notifying unit is configured to notify the first client terminal or all members participating in the first voice conversation including the first client terminal of a microphone handover time point corresponding to the first client terminal.
  • the microphone handover time point is configured to indicate that the speaking status of the first client terminal is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives.
  • the microphone handover time point refers to a time point designated by the server along the timeline.
  • in the client terminal, a first receiving unit is configured to receive a first message sent by a server that is configured to indicate a participation in a first voice conversation when a speaking status of the client terminal is switched from “waiting to speak” to “ready to speak”.
  • the client terminal and the server are synchronized in a timeline.
  • the first receiving unit is further configured to receive a microphone handover time point notified by the server corresponding to the client terminal.
  • the microphone handover time point refers to a time point designated by the server along the timeline.
  • a switching unit is configured to switch the speaking status of the client terminal from “waiting to speak” to “ready to speak” based on the first message.
  • the switching unit is further configured to switch the speaking status of the client terminal from “ready to speak” to “waiting to speak” when the microphone handover time point of the client terminal arrives.
  • a system for controlling microphone order includes a server and one or more client terminals.
  • FIG. 1 depicts an exemplary voice conversation environment based on current technology
  • FIG. 2 depicts an exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 3 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 4 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 5 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 6 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 7 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 8 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 9 depicts an exemplary server consistent with various disclosed embodiments.
  • FIG. 10 depicts another exemplary server consistent with various disclosed embodiments
  • FIG. 11 depicts another exemplary server consistent with various disclosed embodiments
  • FIG. 12 depicts another exemplary server consistent with various disclosed embodiments
  • FIG. 13 depicts another exemplary server consistent with various disclosed embodiments
  • FIG. 14 depicts another exemplary server consistent with various disclosed embodiments
  • FIG. 15 depicts an exemplary client terminal consistent with various disclosed embodiments
  • FIG. 16 depicts another exemplary client terminal consistent with various disclosed embodiments
  • FIG. 17 depicts an exemplary computer system consistent with various disclosed embodiments
  • FIG. 18 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 19 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments
  • FIG. 20 depicts an exemplary environment incorporating certain disclosed embodiments.
  • FIG. 21 depicts an exemplary computer system consistent with the disclosed embodiments.
  • FIGS. 1-19 depict exemplary methods, servers, client sides, and computer systems for controlling microphone order.
  • the exemplary methods, servers, client sides, and computer systems can be implemented, for example, in an exemplary environment 2000 as shown in FIG. 20.
  • the environment 2000 can include a server 2004, a client side 2006, and a communication network 2002.
  • the server 2004 and the client side 2006 may be coupled through the communication network 2002 for information exchange, for example, Internet searching, webpage browsing, etc.
  • client side 2006 and one server 2004 are shown in the environment 2000, any number of client sides 2006 or servers 2004 may be included, and other devices may also be included.
  • the communication network 2002 may include any appropriate type of communication network for providing network connections to the server 2004 and client side 2006 or among multiple servers 2004 or client sides 2006.
  • the communication network 2002 may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless.
  • a client side may refer to any appropriate client terminal device with certain computing capabilities including, for example, a personal computer (PC) , a work station computer, a notebook computer, a car-carrying computer (e. g. , carried in a car or other vehicles) , a server computer, a hand-held computing device (e. g. , a tablet computer) , a mobile terminal (e. g. , a mobile phone, a smart phone, an iPad, and/or an aPad) , a POS (i. e. , point of sale) device, or any other user-side computing device.
  • a client side may also refer to an application program running on the client terminal.
  • a client terminal may run one or more client application programs.
  • a server may refer to one or more server computers configured to provide certain server functionalities including, for example, search engines and database management.
  • a server may also include one or more processors to execute computer programs in parallel.
  • FIG. 21 shows a block diagram of an exemplary computing system 2100 capable of implementing the server 2004 and/or the client side 2006.
  • the exemplary computer system 2100 may include a processor 2102, a storage medium 2104, a monitor 2106, a communication module 2108, a database 2110, peripherals 2112, and one or more bus 2114 to couple the devices together. Certain devices may be omitted and other devices may be included.
  • the processor 2102 can include any appropriate processor or processors. Further, the processor 2102 can include multiple cores for multi-thread or parallel processing.
  • the storage medium 2104 may include memory modules, for example, ROM, RAM, and flash memory modules, and mass storages, for example, CD-ROM, U-disk, removable hard disk, etc.
  • the storage medium 2104 may store computer programs for implementing various processes, when executed by the processor 2102.
  • peripherals 2112 may include I/O devices, for example, keyboard and mouse, and the communication module 2108 may include network devices for establishing connections through the communication network 2002.
  • the database 2110 may include one or more databases for storing certain data and for performing certain operations on the stored data, for example, webpage browsing, database searching, etc.
  • the client side 2006 may cause the server 2004 to perform certain actions, for example, an Internet search or other database operations.
  • the server 2004 may be configured to provide structures and functions for such actions and operations. More particularly, the server 2004 may include a multi-user voice/video conference system for real-time voice/video communication.
  • a terminal, for example, a mobile terminal, involved in the disclosed methods and systems can include the client side 2006.
  • FIG. 2 depicts an exemplary method for controlling microphone order consistent with various embodiments. As shown in FIG. 2, the method can include the following steps:
  • a server sends a first message that is configured to indicate a participation in a first voice conversation (i. e. , a speaking status of a first client side is switched from “waiting to speak” to “ready to speak” ) to the first client side.
  • the first client side and the server are synchronized in a timeline (i. e. , a time axis or a real time sequence, etc. ) .
  • the server may notify the first client side or all members participating in the first voice conversation (including the first client side) of a microphone handover time point corresponding to the first client side.
  • the microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives.
  • the microphone handover time point refers to a time point designated by the server along the timeline.
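  • a minimal server-side sketch of Step 202 and Step 204 is given below; the message fields, the send helper, and the reuse of the 60s takeover length from the background discussion are assumptions for illustration, not the patent's implementation.

```python
import time

class MicOrderServer:
    """Illustrative sketch of Step 202 and Step 204 (hypothetical interfaces)."""

    def __init__(self, takeover_length_s: float = 60.0):
        # 60s is the takeover length used in the background example above.
        self.takeover_length_s = takeover_length_s

    def grant_microphone(self, first_client) -> float:
        # Step 202: send the first message, i.e. the first client side switches
        # from "waiting to speak" to "ready to speak".
        takeover_point = time.time()  # a point on the shared (synchronized) timeline
        first_client.send({"type": "first_message",
                           "takeover_time_point": takeover_point})

        # Step 204: notify the absolute microphone handover time point on the
        # same timeline (here simply takeover point + takeover length).
        handover_point = takeover_point + self.takeover_length_s
        first_client.send({"type": "handover_time_point", "value": handover_point})
        return handover_point
```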
  • one of the technical problems to be solved by the present disclosure is to provide a method to achieve the goal of more accurate control logic of the microphone order, on the basis of the separation of the control logic of the microphone order and the hard coding at the client side.
  • the present disclosure provides a method for controlling microphone order. More specifically, as an advantage of the present disclosure, the above method for controlling microphone order can be implemented in the same or similar application environment as the current method for controlling microphone order without the need to adjust the original framework.
  • an exemplary implementation environment of this embodiment is depicted in FIG. 1, i.e., a multiplayer voice conversation environment participated in by the client sides 102, 104, 106, 108, and 110, marked as a first voice conversation.
  • the multiplayer voice conversation environment can be, but not limited to, a pure voice conversation environment, or a multiplayer interactive environment including the element of voice conversation (such as multiplayer video environment) .
  • a member participating in the first voice conversation depicted in FIG. 1 is used as an example to provide a detailed description of the embodiment. For the convenience of the description, that member is marked as the first client side.
  • in Step 202, the server sends the message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”) to the first client side.
  • the message is marked as the first message.
  • the first message is sent by the server to the first client side.
  • the first message can be configured to indicate that the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”, and/or to indicate a switching time point at which the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”.
  • when the first client side is at the status of waiting to speak, the voice information inputted by the user from the first client side is shielded from other members participating in the first voice conversation.
  • that is, the voice information of the user using the first client side cannot be transmitted instantaneously to one or more users using other client sides in the first voice conversation.
  • when the first client side is at the status of ready to speak, the voice information inputted by the user from the first client side can be received by other members participating in the first voice conversation. More specifically, the voice information can be transmitted to other client sides by, but not limited to, the server, and/or by the first client side directly.
  • the first client side can have multiple ways to achieve the above waiting to speak status and the ready to speak status. For example, when the first client side is at the status of waiting to speak, the voice input apparatus corresponding to the first client side and/or the voice channel currently used by the first client side can be shielded.
  • when the first client side is at the status of ready to speak, the shielding of the above voice input apparatus and/or the voice channel can be lifted.
  • in another embodiment, when the first client side is at the status of waiting to speak, voice information obtained from the first client side can be selected not to be transmitted.
  • when the first client side is at the status of ready to speak, the voice information obtained from the first client side can be transmitted to the outside of the first client side to be received by other members participating in the first voice conversation.
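  • the two shielding strategies above can be sketched on the client side roughly as follows; the mute_input, unmute_input, and transmit hooks are hypothetical placeholders, since the disclosure does not prescribe a particular audio API.

```python
class ClientSpeakingState:
    """Sketch of the two shielding strategies described above (hypothetical hooks)."""

    def __init__(self, audio_device):
        self.audio_device = audio_device
        self.status = "waiting to speak"

    # Strategy 1: shield/unshield the voice input apparatus or voice channel.
    def set_ready_to_speak(self):
        self.status = "ready to speak"
        self.audio_device.unmute_input()   # lift the shielding

    def set_waiting_to_speak(self):
        self.status = "waiting to speak"
        self.audio_device.mute_input()     # shield the input apparatus / channel

    # Strategy 2: keep capturing, but only transmit frames while ready to speak.
    def on_captured_frame(self, frame, transmit):
        if self.status == "ready to speak":
            transmit(frame)                # via the server and/or directly to peers
        # otherwise the captured frame is simply not transmitted
```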
  • the transmitting operation of the first message of Step 202 can include the following two situations.
  • the server sends the first message to the first client side based on the control logic of the microphone order, or the server takes the initiative to notify the first client side to take over the microphone;
  • the server responds to the query request from the first client side, and sends the first message (i. e. , the microphone takeover time point, including the specific time point to instruct the first client side to switch from the waiting to speak status to the ready to speak status) to the first client side.
  • the first message may be sent by, but not limited to, other feasible ways. It is understood that the above implementing methods should be considered to be within the scope of the protection.
  • the specific format of the first message can be an http (Hypertext Transfer Protocol) message.
  • the format of the first message can also be an ftp (File Transfer Protocol) message, or any other feasible request that meets the requirements of the text transmitting format.
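  • purely for illustration, a first message carried over HTTP might look like the sketch below; the URL path and JSON field names are assumptions, since the disclosure fixes only the transport format (e.g., HTTP or FTP), not the payload layout.

```python
import json

# Hypothetical payload; the disclosure only fixes the carrier format (e.g. HTTP),
# not these field names or the URL path.
first_message = {
    "type": "first_message",
    "conversation": "first_voice_conversation",
    "takeover_time_point": 1404792000.0,   # an absolute point on the shared timeline
}

body = json.dumps(first_message)
http_request = (
    "POST /mic-order/first-message HTTP/1.1\r\n"
    "Host: mic-order.example\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)
print(http_request)
```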
  • FIG. 3 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments.
  • the first client side can be synchronized with the server in advance, e.g., as depicted in FIG. 3, before Step 202 or Step 204, the above method for controlling microphone order can include:
  • in Step 302, the server notifies the first client side of a client side time used for synchronization, obtained from a server time and an information transmitting time length between the server and the first client side.
  • in Step 304, the server notifies each member of the corresponding client side time used for synchronization, obtained from the server time and the information transmitting time length between the server and each member participating in the first voice conversation (including the first client side).
  • the server can synchronize with the client side after the client side builds a connection with the server. For example, in Step 302, the server can notify the first client side of the client side time used for synchronization.
  • the client side time used for synchronization can be obtained from the server time and the information transmitting time length between the server and the first client side.
  • the information transmitting time length between the server and the first client side is detected to be 1.8s, i. e. , the delay of the time in transmitting the first message is 1.8s.
  • the above client side time used for synchronization can be obtained by the following formula: T1 = T0 + 1.8s (the server time plus the detected information transmitting time length).
  • T0 is the server time.
  • T1 is the client side time of the first client side.
  • the client side time T1 in Step 302 is notified to the first client side, and the first client side receives the client side time T1 after a 1.8s delay. At the same time that the first client side receives the notification, the server time T0 also increases by 1.8s. Thus, the time values of the client side and the server are the same at this moment, and the time of the client side and the time of the server are synchronized after this moment, i.e., the first client side is synchronized with the connected server.
  • the information transmitting time length between the server and each member can be detected.
  • the client side time corresponding to each member can be obtained based on a similar formula, and be sent to the corresponding member in Step 304.
  • the synchronization of each member with the server can be achieved, and the more accurate control of the speaking time of the first client side can be achieved in combination with Step 204.
  • the above described methods are just certain examples for the first client side to synchronize with the server. In other embodiments, other feasible methods can be used to synchronize the first client side with the server.
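  • a minimal sketch of the synchronization in Steps 302/304, assuming the server measures the one-way transmitting time length to each member and sends T1 = T0 + that length; the member.send helper is hypothetical.

```python
import time

def client_time_for_sync(server_time: float, transmit_delay: float) -> float:
    """T1 = T0 + transmitting time length (1.8s in the example above)."""
    return server_time + transmit_delay

def synchronize_member(member, transmit_delay: float) -> None:
    # Step 302 / Step 304: notify the member of the client side time used for
    # synchronization. When the member applies T1 on arrival, roughly one
    # transmit delay later, its clock and the server clock read the same value.
    t1 = client_time_for_sync(time.time(), transmit_delay)
    member.send({"type": "sync", "client_time": t1})   # member.send is hypothetical
```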
  • FIG. 4 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments.
  • FIG. 4 depicts another exemplary implementation method. More specifically, as shown in FIG. 4, Step 204 can include the following steps.
  • in Step 402, the microphone handover time point designated by the server is calibrated or adjusted based on the time difference between the time of the server and the time of the client side, and the information transmitting time length between the server and the first client side;
  • in Step 404, the adjusted microphone handover time point is notified to the first client side or all members participating in the first voice conversation (including the first client side).
  • the time difference of 2.6s is detected, i. e. , the time of the first client side is 2.6s behind the time of the server.
  • the information transmitting time length of 1.8s is also detected, i. e. , the delay of the time in transmitting the first message is 1.8s.
  • T2 is the microphone handover time point designated by the server.
  • T3 is the adjusted microphone handover time point.
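  • a minimal calibration sketch, assuming the adjusted point T3 is the server-designated point T2 expressed in the client side's local clock (2.6s behind in the example) and that the 1.8s transmitting time length is used only to check the notification can arrive before the adjusted point; this is one plausible reading, not a formula taken from the patent.

```python
def calibrate_handover_point(t2_server: float, clock_offset: float,
                             transmit_delay: float, now_server: float) -> float:
    """Hypothetical calibration: shift T2 into the client side's local clock.

    clock_offset   -- seconds the client clock lags the server clock (2.6 above)
    transmit_delay -- one-way transmitting time length to this client (1.8 above)
    now_server     -- current server time, used to check the notice can arrive in time
    """
    if now_server + transmit_delay >= t2_server:
        raise ValueError("notification would reach the client after the handover point")
    return t2_server - clock_offset   # T3, the adjusted handover time point
```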
  • because the first client side is synchronized with the server, the first client side can be regarded as being synchronized with the server in the same timeline.
  • because the microphone handover time point may be an absolute time point, the microphone handover time point can be regarded as a time point designated by the server on the timeline.
  • thus, in Step 204, the first client side or all members participating in the first voice conversation (including the first client side) can be notified of a microphone handover time point corresponding to the first client side with substantial accuracy.
  • the absolute time point used to indicate that the speaking status of the client side can be switched from ready to speak to waiting to speak is the above microphone handover time point.
  • the microphone handover time point can be a time point designated by the server.
  • the first client side is synchronized with the server in the timeline.
  • the time of the client side and the time of the server are synchronized.
  • the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client side time point synchronized with the server.
  • the client side can execute the microphone handover operation (i. e. , switch from the ready to speak status to the waiting to speak status) accurately based on the received microphone handover time point.
  • This method avoids the dependence on the hard coding at the client side to control the execution of the microphone handover operation of the first client side.
  • This method also eliminates the interference on the accurate control of the microphone handover time point of the first client side caused by the information transmitting time and other factors.
  • this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order and the hard coding at the client side, so as to solve the technical problem in the current technology that the microphone handover time point of the client side is hard to control accurately under the design of the separation of the control logic of the microphone order and the hard coding at the client side.
  • there is no limitation on the specific method of performing microphone handover operations by the first client side.
  • the operation that the server notifies the first client side of the microphone handover time point can, but not limited to, be performed as that the server takes the initiative to execute the notifying operation based on the control logic, or be performed as that the server responds to the received query information sent by the first client side.
  • the above notification of the microphone handover time point can, but not limited to, be performed as that the server sends a message separately, or be performed as that the server adds the notification to other information sent to the first client side, or can be performed as that the server sends the notification with the first message at the same time, or can be performed as that the server adds the microphone takeover time point into the first message sent to the first client side.
  • the server can notify the first client side of the microphone handover time point pre-calculated from the control logic before sending the first message indicating the first client side to take over the microphone, so that the first client side can be notified of the preset microphone handover time point before the first client side takes over the microphone.
  • the server can also choose to send the first message to the first client side first, and then notify the first client side of the microphone handover time point based on the sending time of the first message or based on the exact microphone takeover time point (the time point where the status of the first client side is switched from waiting to speak to ready to speak) received by the server, so as to have more accurate control of the duration of the ready to speak status of the first client side, i.e., the speaking time.
  • FIG. 5 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments.
  • the above method for controlling microphone order can further include the following.
  • the server obtains the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client side.
  • the microphone takeover time point refers to a time point along the timeline when the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”.
  • the server can obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client side. More specifically, the microphone takeover time point can be a time point along the timeline when the speaking status of the first client side is switched from “waiting to speak” to “ready to speak” .
  • the preset microphone takeover time length can be the time length obtained by the server as a limitation on the speaking time for the members participating in the first voice conversation. By setting a length for this time, the speaking time of the members can be managed in a unified manner, so as to provide better user experience for users of these client sides, and to provide more efficiency and better service for participants and managers of the voice conversation.
  • the server can obtain the microphone handover time point based on the following formula: Toff = Ton + D.
  • Toff is the microphone handover time point.
  • Ton is the microphone takeover time point of the first client side obtained by the server based on the control logic of the microphone order.
  • D is the preset microphone takeover time length.
  • the microphone handover time point can be obtained by other methods based on the microphone takeover time point and the preset microphone takeover time length.
  • the above microphone handover time point can be calibrated or adjusted by the information transmitting time length needed between the server and the client side.
  • the microphone handover time point can be obtained by other methods.
  • the microphone handover time point can, but not limited to, be set as a series of time points with a fixed time interval along the timeline of the server.
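  • the two ways of obtaining the handover time point mentioned above can be sketched as follows: Toff = Ton + D, or the next point on a fixed-interval series along the server timeline; the grid layout in the second helper is an assumption.

```python
import math

def handover_from_takeover(t_on: float, preset_length_d: float) -> float:
    """Toff = Ton + D, with D the preset microphone takeover time length."""
    return t_on + preset_length_d

def handover_from_fixed_grid(t_on: float, interval: float, origin: float = 0.0) -> float:
    """Alternative mentioned above: the next handover point on a series of time
    points spaced by a fixed interval along the server timeline (layout assumed)."""
    steps = math.floor((t_on - origin) / interval) + 1
    return origin + steps * interval

print(handover_from_takeover(1000.0, 60.0))    # 1060.0
print(handover_from_fixed_grid(1000.0, 60.0))  # 1020.0
```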
  • FIG. 6 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. Based on the above description, as depicted in FIG. 6, after Step 204, the method for controlling microphone order can further include the following.
  • in Step 602, the server sends a second message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of a second client side is switched from “waiting to speak” to “ready to speak”) to the second client side when the server reaches the microphone handover time point.
  • the second client side is located next to the first client side in the first sequence of members.
  • the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
  • in Step 604, the server notifies the second client side or all members participating in the first voice conversation (including the first client side and the second client side) of the microphone handover time point corresponding to the second client side.
  • members who are waiting to speak can be sorted by a sequence of members.
  • one of the members can own an exclusive privilege to speak at one time in the first voice conversation, so as to avoid the interruption caused by the speaking of other members at the same time.
  • the above member owning the speaking privilege can be the first member of the current sequence of members.
  • Other members waiting to speak can be the second to the Nth member of the sequence of members.
  • the speaking privilege can be owned one by one based on this order after the current member hands over the microphone.
  • the sequence of members corresponding to members participating the first voice conversation can be marked as the first sequence of members.
  • the first sequence of members can either include all members participating in the first voice conversation, or include only members recorded in the server who are waiting to speak in the first voice conversation. More specifically, members who are waiting to speak can be members who request the speaking privilege from the server. In other words, after the server receives the request for the speaking privilege, the server can mark the member who sends the request as a member waiting to speak, and record the order of members waiting to speak to form the first sequence of members. In the above scenario, members who have not sent a request for the speaking privilege may, but not limited to, not appear in the sequence of members.
  • the control logic can be: setting the status of the member owning the speaking privilege currently as ready to speak, setting the status of other members waiting to speak as waiting to speak, and implementing the control of members participating in the first voice conversation based on the above method for controlling microphone order.
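  • a minimal sketch of the first sequence of members and the control logic just described: the head of the sequence is ready to speak, the remaining recorded members are waiting to speak, and the privilege rotates when the current member hands over the microphone; the deque-based bookkeeping is an assumption, not the patent's data structure.

```python
from collections import deque
from typing import Optional

class MemberSequence:
    """Illustrative bookkeeping for the first sequence of members (assumed layout)."""

    def __init__(self):
        self.waiting = deque()   # members who requested the speaking privilege, in order

    def request_to_speak(self, member_id: str) -> None:
        # A member enters the sequence after asking the server for the privilege.
        self.waiting.append(member_id)

    def statuses(self) -> dict:
        # Head of the sequence is "ready to speak"; the rest are "waiting to speak".
        return {m: ("ready to speak" if i == 0 else "waiting to speak")
                for i, m in enumerate(self.waiting)}

    def hand_over(self) -> Optional[str]:
        # The current speaker hands over; the next member (if any) takes the microphone.
        if self.waiting:
            self.waiting.popleft()
        return self.waiting[0] if self.waiting else None
```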
  • the server sends a second message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of a second client side is switched from “waiting to speak” to “ready to speak” ) to the second client side when the server reaches the microphone handover time point.
  • the second client side is located next to the first client side in the first sequence of members.
  • the specific implementation of the method that the server sends a microphone takeover instruction to the second client side can be similar to that for the first client side.
  • the server can also notify the second client side or all members participating in the first voice conversation (including the first client side and the second client side) of the microphone handover time point corresponding to the second client side, so as to achieve accurate control over the microphone takeover time point and the microphone handover time point of the second client side.
  • the accurate control over all members participating in the first voice conversation can be achieved based on the preset control logic of the microphone order.
  • the specific format of the first sequence of members can be varied.
  • the control logic of the microphone order can also vary.
  • besides the member owning the speaking privilege currently, the server can record only one other member obtained through an election or another feasible selection mechanism, marked as the grabbing-success member. After the current speaking member hands over the microphone, the server can, but is not limited to, grant the speaking privilege to the grabbing-success member.
  • the above method for controlling microphone order does not rely on a specific control logic of the microphone order, and provides the necessary condition for achieving the control logic of the microphone order.
  • FIG. 7 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As depicted in FIG. 7, the above method for controlling microphone order can include the following.
  • in Step 702, the server sends member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation.
  • the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
  • in Step 704, the server sends the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation.
  • the second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
  • member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members can be sent to one or more members participating in the first voice conversation.
  • the member who needs microphone order adjustment can be the adjusting member specified in the message indicating the adjusted speaking order obtained by the server.
  • this message can include the following information: (ID of adjusting member, adjusted position after adjustment).
  • the ID of adjusting member can be a 32-bit number, configured to represent the member ID information.
  • the adjusted position after adjustment can be a 16-bit number, configured to represent the destination order information.
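  • based on the field widths given above (a 32-bit adjusting-member ID and a 16-bit destination position), an adjustment notice could be packed and parsed as in the sketch below; the byte order and exact wire layout are assumptions.

```python
import struct

# ">IH": big-endian 32-bit unsigned member ID followed by a 16-bit unsigned
# destination position; the byte order is assumed, only the widths are given above.
ADJUSTMENT_FORMAT = ">IH"

def pack_adjustment(member_id: int, destination_position: int) -> bytes:
    return struct.pack(ADJUSTMENT_FORMAT, member_id, destination_position)

def unpack_adjustment(payload: bytes) -> tuple:
    return struct.unpack(ADJUSTMENT_FORMAT, payload)

notice = pack_adjustment(member_id=123456, destination_position=4)
print(unpack_adjustment(notice))   # (123456, 4)
```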
  • the members of the first voice conversation can further timely update the microphone order of the current conversation at the local client side, and execute feasible processing operations based on the updated microphone order, or display the updated microphone order on the display device for the user to view the microphone order of the current conversation.
  • both the member ID information and the destination order information of the member who needs microphone order adjustment are information reflecting the goal of the microphone order adjustment or reflecting the facts of the results, and unrelated to the preset instruction or the adjusting logic of the microphone order.
  • for example, the adjusting logic of the microphone order can be moving one member up 1 position in one operation.
  • under an updated adjusting logic of the microphone order, the same operation or another operation can be moving one member up 2 positions. Other methods may also be used.
  • there is no necessary specific order between Step 702 or Step 704 and other steps of the above method for controlling microphone order.
  • Step 702 and Step 704 can be executed after Step 204, or before Step 202, or between Step 202 and Step 204.
  • a method of notifying the client side of the member ID information and destination order information is provided.
  • This method achieves the goal of notifying the client side of the adjusting information of the microphone order. More specifically, the server does not send instructions corresponding to the adjusting operation of the microphone order to the client side. Instead, the server sends the member ID information and the destination order information of the member who needs microphone order adjustment to the client side directly.
  • This method avoids the dependence on the hard coding to parse the preset instructions, solves the problem that the hard coding at the client side has to be updated after control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client side.
  • the server only needs to notify the client side of the adjusting member under the microphone order adjusting function, and does not need to notify each client side of members who are adjusted passively or the entire updated sequence of members.
  • the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the efficiency of using the computer network.
  • the server configured to monitor the first voice conversation can also be configured to monitor the second voice conversation.
  • the above server can also send the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in the second voice conversation. More specifically, while the server controls the microphone takeover time point and the microphone handover time point of the members participating in the first voice conversation, the server can also keep controlling the microphone order adjustment of members participating in the second voice conversation.
  • FIG. 8 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As shown in FIG. 8, before Step 702 or Step 704, the above method for controlling microphone order can further include the following.
  • in Step 802, the server receives a preset instruction corresponding to a microphone order adjusting operation.
  • the microphone order adjusting operation includes at least one of the following: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position N.
  • in Step 804, the server parses the received preset instruction to obtain the corresponding microphone order adjusting operation, and obtains the adjusted speaking order of the member who needs microphone order adjustment based on the parsed microphone order adjusting operation as the destination order information.
  • through Step 802 and Step 804, the same technical effect can be achieved.
  • the server can receive the preset instruction corresponding to the adjusting operation of the microphone order.
  • the preset instruction can have the format of: (Operation command, ID of adjusting member) .
  • the operation command can be a 16-bit number. Different adjusting operations of the microphone order correspond to different numbers.
  • the ID of adjusting member can be a 32-bit number, configured to represent the ID information of the member who needs microphone order adjustment, i.e., the ID of adjusting member can be parsed into the ID information of the above member.
  • the operation command "402" can represent the adjustment of moving a member upward.
  • the server can parse this preset instruction further in Step 804 to obtain the adjusted order of the member.
  • the 5th position before the adjustment can be adjusted to the 4th position.
  • the 4th position becomes the destination order information of the member and is sent to the client side with the member ID information.
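  • a sketch of Steps 802/804: the server parses a preset instruction of the form (operation command, ID of adjusting member) and converts it into the (member ID, destination order) pair actually sent to the client sides; only command "402" (move up) is taken from the example above, the other command numbers are placeholders.

```python
def parse_preset_instruction(operation_command: int, adjusting_member_id: int,
                             current_position: int):
    """Return (member ID, destination position) for a preset instruction.

    Only command 402 ("move the member up", e.g. 5th -> 4th) comes from the
    example above; the other command numbers are placeholders for illustration.
    """
    if operation_command == 402:        # move up by one position
        destination = max(1, current_position - 1)
    elif operation_command == 403:      # hypothetical: move down by one position
        destination = current_position + 1
    elif operation_command == 404:      # hypothetical: move to the top
        destination = 1
    else:
        raise ValueError(f"unknown operation command {operation_command}")
    return adjusting_member_id, destination

print(parse_preset_instruction(402, adjusting_member_id=123456, current_position=5))
# -> (123456, 4): the member ID information and the destination order information
```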
  • FIG. 9 depicts an exemplary server consistent with various disclosed embodiments.
  • the server includes a first sending unit 902 and a notifying unit 904.
  • the first sending unit 902 is configured to send a first message that is configured to indicate a participation in a first voice conversation (i. e. , a speaking status of a first client terminal is switched from “waiting to speak” to “ready to speak” ) to the first client terminal.
  • the first client terminal and the server are synchronized in a timeline.
  • the notifying unit 904 is configured to notify the first client terminal or all members participating in the first voice conversation (including the first client terminal) of a microphone handover time point corresponding to the first client terminal.
  • the microphone handover time point is configured to indicate that the speaking status of the first client terminal is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives.
  • the microphone handover time point refers to a time point designated by the server along the timeline.
  • one of the technical problems to be solved by the present disclosure is to provide a server to achieve the goal of more accurate control logic of the microphone order, on the basis of the separation of the control logic of the microphone order and the hard coding at the client terminal.
  • the present disclosure provides a server. More specifically, as an advantage of the present disclosure, the above server can be implemented in the same or similar application environment as the current technology without the need to adjust the original framework.
  • an exemplary implementation environment of this embodiment is depicted in FIG. 1, i.e., a multiplayer voice conversation environment participated in by the client terminals 102, 104, 106, 108, and 110, marked as a first voice conversation.
  • the multiplayer voice conversation environment can be, but not limited to, a pure voice conversation environment, or a multiplayer interactive environment including the element of voice conversation (such as multiplayer video environment) .
  • a member participating in the first voice conversation depicted in FIG. 1 is used as an example to provide a detailed description of the embodiment. For the convenience of the description, that member is marked as the first client terminal.
  • the first sending unit 902 can be configured to send the message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”) to the first client terminal.
  • the message is marked as the first message.
  • the first message is sent by the first sending unit 902 to the first client terminal.
  • the first message can be configured to indicate that the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”, and/or to indicate a switching time point at which the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”.
  • when the first client terminal is at the status of waiting to speak, the voice information inputted by the user from the first client terminal is shielded from other members participating in the first voice conversation.
  • that is, the voice information of the user using the first client terminal cannot be transmitted instantaneously to one or more users using other client terminals in the first voice conversation.
  • when the first client terminal is at the status of ready to speak, the voice information inputted by the user from the first client terminal can be received by other members participating in the first voice conversation. More specifically, the voice information can be transmitted to other client terminals by, but not limited to, the server, and/or by the first client terminal directly.
  • the first client terminal can have multiple ways to achieve the above waiting to speak status and the ready to speak status. For example, in one embodiment, when the first client terminal is at the status of waiting to speak, the voice input apparatus corresponding to the first client terminal and/or the voice channel currently used by the first client terminal can be shielded. When the first client terminal is at the status of ready to speak, the shielding of the above voice input apparatus and/or the voice channel can be lifted. In another embodiment, when the first client terminal is at the status of waiting to speak, voice information obtained from the first client terminal can be selected not to be transmitted. When the first client terminal is at the status of ready to speak, the voice information obtained from the first client terminal can be transmitted to the outside of the first client terminal to be received by other members participating in the first voice conversation.
  • the transmitting operation of the first message by the first sending unit 902 can include the following two situations:
  • the first sending unit 902 sends the first message to the first client terminal based on the control logic of the microphone order, or the server takes the initiative to notify the first client terminal to take over the microphone;
  • the server responds to the query request from the first client terminal, and, through the first sending unit 902, the first message (i.e., the microphone takeover time point, including the specific time point to instruct the first client terminal to switch from the waiting to speak status to the ready to speak status) is sent to the first client terminal.
  • the disclosed embodiment can include, but not limited to, other feasible ways to send the first message.
  • the specific format of the first message can be an http (Hypertext Transfer Protocol) message.
  • the format of the first message can also be an ftp (File Transfer Protocol) message, or any other feasible request that meets the requirements of the text transmitting format.
  • FIG. 10 depicts another exemplary server consistent with various disclosed embodiments.
  • the first client terminal can be synchronized with the server in advance, e. g. , as depicted in FIG. 10, the above server can further include a synchronizing unit 1002.
  • the synchronizing unit 1002 is configured to notify the first client terminal of a client terminal time used for synchronization obtained from a server time and information transmitting time length between the server and the first client terminal. Or, the synchronizing unit 1002 is configured to notify each member of the corresponding client terminal time used for synchronization obtained from the server time and the information transmitting time length between the server and each member participating in the first voice conversation (including the first client terminal) .
  • the server can synchronize with the client terminal after the client terminal builds a connection with the server.
  • the synchronizing unit 1002 can notify the first client terminal of the client terminal time used for synchronization.
  • the client terminal time used for synchronization can be obtained from the server time and the information transmitting time length between the server and the first client terminal.
  • the information transmitting time length between the server and the first client terminal is detected to be 1.8s, i. e. , the delay of the time in transmitting the first message is 1.8s.
  • the above client terminal time used for synchronization can be obtained by the following formula: T1 = T0 + 1.8s (the server time plus the detected information transmitting time length).
  • T0 is the server time.
  • T1 is the client terminal time of the first client terminal.
  • the client terminal time T1 is notified to the first client terminal by the synchronizing unit 1002, and the first client terminal receives the client terminal time T1 after a 1.8s delay. At the same time that the first client terminal receives the notification, the server time T0 also increases by 1.8s. Thus the time values of the client terminal and the server are the same at this moment, and the time of the client terminal and the time of the server are synchronized after this moment, i.e., the first client terminal is synchronized with the connected server.
  • the information transmitting time length between the server and each member can be detected.
  • the client terminal time corresponding to each member can be obtained based on a similar formula, and be sent to the corresponding member by the synchronizing unit 1002.
  • the synchronization of each member with the server can be achieved, and the more accurate control of the speaking time of the first client terminal can be achieved in combination with notifying unit 904.
  • the above described methods are just certain examples for the first client terminal to synchronize with the server.
  • in other embodiments, other feasible methods can be used to synchronize the first client terminal with the server.
  • FIG. 11 depicts another exemplary server consistent with various disclosed embodiments, as another example of implementing terminal synchronization. More specifically, as shown in FIG. 11, the notifying unit 904 can include a calibrating module 1102 and a notifying module 1104.
  • the calibrating module 1102 is configured to calibrate the microphone handover time point designated by the server based on the time difference between the time of the server and the time of the client terminal, and the information transmitting time length between the server and the first client terminal.
  • the notifying module 1104 is configured to notify the adjusted microphone handover time point to the first client terminal or all members participating in the first voice conversation (including the first client terminal).
  • the calibrating module 1102 can execute the calibration based on the following formula:
  • T2 is the microphone handover time point designated by the server.
  • T3 is the adjusted microphone handover time point.
  • because the first client terminal is synchronized with the server, the first client terminal can be regarded as being synchronized with the server in the same timeline. Because the microphone handover time point may be an absolute time point, the microphone handover time point can be regarded as a time point designated by the server on the timeline. Based on the above described server, through the notifying unit 904, the first client terminal or all members participating in the first voice conversation (including the first client terminal) can be notified of a microphone handover time point corresponding to the first client terminal.
  • neither the control method of writing the control logic of the microphone order into the hard coding at the client terminal in advance, nor the control method of sending the pre-set microphone takeover time length to the client terminal, is adopted.
  • instead, the absolute time point used to indicate that the speaking status of the client terminal can be switched from ready to speak to waiting to speak is the above microphone handover time point.
  • the microphone handover time point can be a time point designated by the server.
  • the first client terminal is synchronized with the server in the timeline.
  • the time of the client terminal and the time of the server are synchronized.
  • the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client terminal time point synchronized with the server.
  • the client terminal can execute the microphone handover operation (i. e. , switch from the ready to speak status to the waiting to speak status) accurately based on the received microphone handover time point.
  • This method avoids the dependence on the coding at the client terminal to control the execution of the microphone handover operation of the first client terminal.
  • This method also eliminates the interference on the accurate control of the microphone handover time point of the first client terminal caused by the information transmitting time and other factors.
  • this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order and the hard coding at the client terminal, so as to solve the technical problem in current technology that the microphone handover time point of the client terminal is difficult to control accurately when the control logic of the microphone order is separated from the coding at the client terminal.
  • the operation that the server notifies the first client terminal of the microphone handover time point can be performed, but is not limited to being performed, by the server taking the initiative to execute the notifying operation based on the control logic, or by the server responding to query information received from the first client terminal.
  • the above notification of the microphone handover time point can be performed, but is not limited to being performed, by the server sending a separate message, by the server adding the notification to other information sent to the first client terminal, by the server sending the notification together with the first message, or by the server adding the microphone handover time point into the first message sent to the first client terminal.
  • the server can notify the first client terminal of the microphone handover time point pre-calculated from the control logic before sending the first message indicating that the first client terminal is to take over the microphone, so that the first client terminal is notified of the preset microphone handover time point before the first client terminal takes over the microphone.
  • alternatively, the server can choose to send the first message to the first client terminal first, and then notify the first client terminal of the microphone handover time point based on the sending time of the first message, or based on the exact microphone takeover time point (the time point at which the status of the first client terminal is switched from “waiting to speak” to “ready to speak”) received by the server, so as to have more accurate control over the duration of the “ready to speak” status of the first client terminal, i.e., the speaking time.
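  • As an illustration of these two notification options, the Python sketch below builds the first message with the handover time point either piggybacked on it or sent as a separate notice afterwards. The JSON format and field names ("type", "member_id", "handover_point") are assumptions for illustration only; the disclosure does not prescribe a wire format.

      import json
      import time
      from typing import Optional

      def build_first_message(member_id: int,
                              handover_point: Optional[float] = None) -> str:
          """First message instructing a client side to take over the microphone.
          If handover_point is given, the notification is piggybacked on the message."""
          msg = {"type": "take_over_mic", "member_id": member_id}
          if handover_point is not None:
              msg["handover_point"] = handover_point  # absolute point on the shared timeline
          return json.dumps(msg)

      def build_handover_notice(member_id: int, handover_point: float) -> str:
          """Separate notification, usable after the exact takeover time is known."""
          return json.dumps({"type": "handover_point",
                             "member_id": member_id,
                             "handover_point": handover_point})

      # Option 1: a pre-computed handover point piggybacked on the first message.
      print(build_first_message(42, time.time() + 60.0))
      # Option 2: the first message first, then a separate notice based on the real takeover time.
      print(build_first_message(42))
      print(build_handover_notice(42, time.time() + 60.0))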
  • FIG. 12 depicts another exemplary server consistent with various disclosed embodiments.
  • the above server can further include an obtaining unit 1202.
  • the obtaining unit 1202 is configured to obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client terminal.
  • the microphone takeover time point refers to a time point along the timeline when the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”.
  • the obtaining unit 1202 can obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client terminal. More specifically, the microphone takeover time point can be a time point along the timeline when the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak” .
  • the preset microphone takeover time length can be a time length obtained by the server as a limitation on the speaking time of the members participating in the first voice conversation. By setting this time length, the speaking time of the members can be managed in a unified manner, so as to provide a better user experience for users of these client terminals, and to provide higher efficiency and better service for participants and managers of the voice conversation.
  • the obtaining unit 1202 can obtain the microphone handover time point based on the following formula: T_off = T_on + D, where:
  • T_off is the microphone handover time point;
  • T_on is the microphone takeover time point of the first client terminal obtained by the server based on the control logic of the microphone order; and
  • D is the preset microphone takeover time length.
  • the microphone handover time point can be obtained by other methods based on the microphone takeover time point and the preset microphone takeover time length.
  • the above microphone handover time point can be calibrated or adjusted by the information transmitting time length needed between the server and the client terminal.
  • the microphone handover time point can be obtained by other methods too.
  • the microphone handover time point can, but is not limited to, be set as a series of time points with a fixed time interval along the timeline of the server.
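  • As a minimal Python sketch of the obtaining unit's computation, the formula T_off = T_on + D and the fixed-interval alternative mentioned above can be written as follows; the function names and the grid origin are illustrative assumptions:

      import math

      def handover_point(t_on: float, preset_length: float) -> float:
          """T_off = T_on + D: the microphone takeover point plus the preset takeover time length."""
          return t_on + preset_length

      def next_grid_point(now: float, interval: float, origin: float = 0.0) -> float:
          """Alternative: the next point on a fixed-interval grid along the server timeline."""
          return origin + math.ceil((now - origin) / interval) * interval

      # Example: a member takes over at t = 100.0 s with a 60 s preset length.
      assert handover_point(100.0, 60.0) == 160.0
      # Example: with a 30 s grid, a takeover at t = 100.0 s hands over at t = 120.0 s.
      assert next_grid_point(100.0, 30.0) == 120.0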
  • the first sending unit 902 can further be configured to send a second message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of a second client terminal is switched from “waiting to speak” to “ready to speak” ) to the second client terminal when the server reaches the microphone handover time point.
  • the second client terminal is located next to the first client terminal in the first sequence of members.
  • the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
  • the notifying unit can further be configured to notify the second client terminal or all members participating in the first voice conversation (including the first client terminal and the second client terminal) of the microphone handover time point corresponding to the second client terminal.
  • members who are waiting to speak can be sorted by a sequence of members.
  • one of the members can own an exclusive privilege to speak at one time in the first voice conversation, so as to avoid the interruption caused by the speaking of other members at the same time.
  • the above member owning the speaking privilege can be the first member of the current sequence of members.
  • Other members waiting to speak can be the second to the Nth member of the sequence of members.
  • the speaking privilege can be owned one by one based on this order after the current member hands over the microphone.
  • the sequence of members corresponding to members participating in the first voice conversation can be marked as the first sequence of members.
  • the first sequence of members can either include all members participating in the first voice conversation, or include only members recorded in the server who are waiting to speak in the first voice conversation. More specifically, members who are waiting to speak can be members who request the speaking privilege from the server. In other words, after the server receives the request for the speaking privilege, the server can mark the member who sends the request as a member waiting to speak, and record the order of members waiting to speak to form the first sequence of members. In the above scenario, members who have not sent a request for the speaking privilege may, but are not limited to, not appear in the sequence of members.
  • the control logic can be: setting the status of the member currently owning the speaking privilege as “ready to speak”, setting the status of other members waiting to speak as “waiting to speak”, and implementing the control of members participating in the first voice conversation based on the above method for controlling microphone order, as sketched below.
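  • A minimal Python sketch of such a first sequence of members and the control logic above is given below; the class and method names are illustrative and not part of the disclosure:

      from collections import deque

      class MemberSequence:
          """First sequence of members: the head owns the exclusive speaking privilege
          ("ready to speak"); the remaining members are "waiting to speak"."""

          def __init__(self):
              self._queue = deque()  # member IDs in speaking order

          def request_privilege(self, member_id: int) -> None:
              # A member enters the sequence only after requesting the speaking privilege.
              if member_id not in self._queue:
                  self._queue.append(member_id)

          def current_speaker(self):
              return self._queue[0] if self._queue else None

          def hand_over(self):
              """The current member hands over the microphone; the next member becomes ready."""
              if self._queue:
                  self._queue.popleft()
              return self.current_speaker()

      seq = MemberSequence()
      for m in (101, 102, 103):
          seq.request_privilege(m)
      assert seq.current_speaker() == 101   # ready to speak
      assert seq.hand_over() == 102         # the privilege passes one by one down the order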
  • the first sending unit 902 can send a second message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of a second client terminal is switched from “waiting to speak” to “ready to speak” ) to the second client terminal when the server reaches the microphone handover time point.
  • the second client terminal is located next to the first client terminal in the first sequence of members.
  • the specific implementation of the method by which the first sending unit 902 sends a microphone takeover instruction to the second client terminal can be similar to that for the first client terminal.
  • the notifying unit 904 can further notify the second client terminal or all members participating in the first voice conversation (including the first client terminal and the second client terminal) of the microphone handover time point corresponding to the second client terminal, so as to achieve accurate control over the microphone takeover time point and the microphone handover time point of the second client terminal.
  • the accurate control over all members participating in the first voice conversation can be achieved based on the preset control logic of the microphone order.
  • the specific format of the first sequence of members can be varied.
  • the control logic of the microphone order can also have other implementations.
  • besides the member currently owning the speaking privilege, the server can record only one other member, obtained through an election or another feasible selection mechanism, marked as the grabbing-success member. After the current speaking member hands over the microphone, the server can, but is not limited to, grant the speaking privilege to the grabbing-success member.
  • because the above method for controlling microphone order does not rely on any particular control logic of the microphone order, it provides the necessary condition for implementing various control logics of the microphone order.
  • FIG. 13 depicts another exemplary server consistent with various disclosed embodiments.
  • the above server can include a second sending unit 1302.
  • the second sending unit 1302 is configured to send member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation, or to send the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation.
  • the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
  • the second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
  • the second sending unit 1302 can send member ID and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation.
  • the member who needs microphone order adjustment can be the adjusting member specified in the message, obtained by the server, that indicates the adjusted speaking order.
  • this message can include the following information:
  • the ID of the adjusting member can be a 32-bit number, configured to represent the member ID information.
  • the adjusted position after adjustment can be a 16-bit number, configured to represent the destination order information (see the sketch below).
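  • Taking the two fields literally (a 32-bit member ID followed by a 16-bit destination position), the notification body can be packed and unpacked as in the Python sketch below; the big-endian byte order and the field layout are assumptions for illustration:

      import struct

      # Assumed layout: big-endian, 32-bit unsigned member ID, 16-bit unsigned position.
      _ADJUST_FMT = ">IH"

      def pack_adjustment(member_id: int, destination_pos: int) -> bytes:
          return struct.pack(_ADJUST_FMT, member_id, destination_pos)

      def unpack_adjustment(payload: bytes):
          member_id, destination_pos = struct.unpack(_ADJUST_FMT, payload)
          return member_id, destination_pos

      # Example: member 7 is to be moved to position 3.
      assert unpack_adjustment(pack_adjustment(7, 3)) == (7, 3)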
  • the members of the first voice conversation can further update the microphone order of the current conversation at the local client terminal in a timely manner, and execute feasible processing operations based on the updated microphone order, or display the updated microphone order on the display device for the user to view the microphone order of the current conversation.
  • both the member ID information and the destination order information of the member who needs microphone order adjustment are information reflecting the goal, or the actual result, of the microphone order adjustment, and are unrelated to the preset instruction or the adjusting logic of the microphone order.
  • for example, the adjusting logic of the microphone order can be moving one member up by one position in one operation.
  • after the adjusting logic of the microphone order is updated, the same operation (or another operation) can be moving one member up by two positions. Other methods may also be used.
  • instead, the method of notifying the client terminal of the member ID information and the destination order information is adopted.
  • This method achieves the goal of notifying the client terminal of the adjusting information of the microphone order. More specifically, the server does not send instructions corresponding to the adjusting operation of the microphone order to the client terminal. Instead, the server sends the member ID information and the destination order information of the member who needs microphone order adjustment to the client terminal directly.
  • This method avoids the dependence on the hard coding to parse the preset instructions, solves the problem that the hard coding at the client terminal has to be updated after control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client terminal.
  • the server only needs to notify the client terminal of the adjusting member under the microphone order adjusting function, and does not need to notify each client terminal of members who are adjusted passively or the entire updated sequence of members.
  • the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the usage efficiency of the computer network.
  • the server configured to monitor the first voice conversation can also be configured to monitor the second voice conversation.
  • the above server can also send, through the second sending unit 1302, the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in the second voice conversation.
  • while the above server controls the microphone takeover time point and the microphone handover time point of the members participating in the first voice conversation, the server can also keep controlling the microphone order adjustment of members participating in the second voice conversation.
  • FIG. 14 depicts another exemplary server consistent with various disclosed embodiments. Further, considering the compatibility of the technology design of the disclosure and the control logic of the microphone order, as depicted in FIG. 14, coupled with the second sending unit 1302, the above server can further include a receiving unit 1402 and a parsing unit 1404.
  • the receiving unit 1402 is configured to receive a preset instruction corresponding to a microphone order adjusting operation.
  • the parsing unit 1404 is configured to parse the received preset instruction to obtain the corresponding microphone order adjusting operation, and to obtain, based on the parsed microphone order adjusting operation, the adjusted speaking order of the member who needs microphone order adjustment as the destination order information.
  • the microphone order adjusting operations that occur frequently in the control logic of the microphone order can include: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position 2.
  • the receiving unit 1402 can receive the preset instruction corresponding to the adjusting operation of the microphone order.
  • the preset instruction can have the following format:
  • the operation command can be a 16-bit number. Different adjusting operations of the microphone order correspond to different numbers.
  • the ID of the adjusting member can be a 32-bit number, configured to represent the ID information of the member who needs microphone order adjustment, i.e., the ID of the adjusting member can be parsed into the ID information of the above member.
  • the operation command "402" can represent the adjustment of moving a member upward.
  • the parsing unit 1404 can parse this preset instruction further in Step 804 to obtain the adjusted order of the member. For example, the 5th position before the adjustment can be adjusted to the 4th position. Then, in the second sending unit 1302, the 4th position becomes the destination order information of the member and is sent to the client terminal together with the member ID information, as sketched below.
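  • A Python sketch of this parsing step is given below. The 16-bit operation command and the 32-bit adjusting-member ID follow the format described above, and the mapping of command 402 to “move up” follows the example in the text; the remaining command numbers and the byte order are assumptions:

      import struct

      # Assumed wire layout: 16-bit operation command followed by a 32-bit adjusting-member ID.
      _INSTR_FMT = ">HI"

      # Command 402 ("move up") is taken from the example above; the other values are hypothetical.
      MOVE_UP, MOVE_DOWN, MOVE_TO_SECOND = 402, 403, 404

      def parse_instruction(payload: bytes, current_pos: int):
          """Return (member_id, destination_pos) derived from a preset instruction."""
          command, member_id = struct.unpack(_INSTR_FMT, payload)
          if command == MOVE_UP:
              # Floor of 2 assumed, since position 1 is held by the current speaker.
              destination = max(2, current_pos - 1)
          elif command == MOVE_DOWN:
              destination = current_pos + 1
          elif command == MOVE_TO_SECOND:
              destination = 2
          else:
              raise ValueError("unknown operation command: %d" % command)
          return member_id, destination

      # Example from the text: command 402 moves the member from position 5 to position 4.
      payload = struct.pack(_INSTR_FMT, MOVE_UP, 7)
      assert parse_instruction(payload, current_pos=5) == (7, 4)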
  • FIG. 18 depicts another exemplary method for controlling microphone order consistent with various embodiments. As shown in FIG. 18, the method can include the following steps:
  • In Step 1802, the client side receives a third message (sent by a server) that is configured to indicate a participation in a third voice conversation (i.e., a speaking status of a client side is switched from “waiting to speak” to “ready to speak”), and switches the speaking status of the client side from “waiting to speak” to “ready to speak” based on the third message.
  • the client side and the server are synchronized in a timeline.
  • In Step 1804, the client side receives a microphone handover time point, notified by the server, corresponding to the client side, and switches the speaking status of the client side from “ready to speak” to “waiting to speak” when the microphone handover time point of the client side arrives.
  • the microphone handover time point refers to a time point designated by the server along the timeline (a minimal sketch of these two steps is given below).
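  • A minimal client-side sketch of Steps 1802 and 1804 is given below (Python, single-threaded for brevity; the message fields and the clock source are assumptions). The essential point is that the switch back to “waiting to speak” is driven by the absolute handover time point on the synchronized timeline rather than by a locally counted-down time length.

      import time

      class ClientSide:
          def __init__(self):
              self.status = "waiting to speak"
              self.handover_point = None  # absolute time point on the shared timeline

          def on_third_message(self, msg: dict) -> None:
              # Step 1802: take over the microphone when instructed by the server.
              if msg.get("type") == "take_over_mic":
                  self.status = "ready to speak"

          def on_handover_notice(self, msg: dict) -> None:
              # Step 1804, first half: remember the server-designated handover point.
              self.handover_point = msg["handover_point"]

          def tick(self, now: float) -> None:
              # Step 1804, second half: hand over when the absolute point arrives.
              if (self.status == "ready to speak"
                      and self.handover_point is not None
                      and now >= self.handover_point):
                  self.status = "waiting to speak"

      c = ClientSide()
      c.on_third_message({"type": "take_over_mic"})
      c.on_handover_notice({"handover_point": time.time() + 0.01})
      time.sleep(0.02)
      c.tick(time.time())
      assert c.status == "waiting to speak"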
  • the client side receives the microphone handover time point from the server (i.e., a time point designated by the server in the timeline, or an absolute time point). Combined with the microphone takeover instruction received from the server, the client side achieves accurate switching from the “ready to speak” status to the “waiting to speak” status based on the preset control logic and the synchronization with the server.
  • the goal of the separation of the control logic of the microphone order and the hard coding at the client side is achieved.
  • through the unity of the absolute time, the accurate control over the microphone handover time point of the client side is achieved.
  • the client side is synchronized with the server in the timeline.
  • the time of the client side and the time of the server are synchronized.
  • the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client side time point synchronized with the server.
  • this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order and the hard coding at the client side, so as to solve the technical problem in current technology that the microphone handover time point of the client side is difficult to control accurately when the control logic of the microphone order is separated from the hard coding at the client side.
  • FIG. 19 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As depicted in FIG. 19, the above method for controlling microphone order can further include the following steps.
  • In Step 1902, the client side receives member ID information and destination order information (sent by the server) of a member who needs microphone order adjustment in a third sequence of members.
  • the third sequence of members is configured to indicate the speaking order of members participating in the third voice conversation.
  • In Step 1904, the client side parses the member ID information and the destination order information, and obtains the adjusted third sequence of members, e.g., as sketched below.
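  • In Step 1904, applying the received (member ID, destination position) pair to the locally held third sequence can be as simple as the Python sketch below; 1-based positions are assumed, matching the order numbering used in the examples above:

      def apply_adjustment(sequence, member_id, destination_pos):
          """Move member_id to destination_pos (1-based) in the local sequence of members."""
          updated = [m for m in sequence if m != member_id]
          updated.insert(destination_pos - 1, member_id)
          return updated

      # Example: member 7, currently at position 5, is notified to move to position 4.
      assert apply_adjustment([1, 2, 3, 4, 7, 8], 7, 4) == [1, 2, 3, 7, 4, 8]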
  • this method achieves the goal of notifying the client side of the adjusting information of the microphone order. More specifically, the client side does not receive instructions corresponding to the adjusting operation of the microphone order from the server. Instead, the client side receives the member ID information and the destination order information of the member who needs microphone order adjustment directly.
  • This method avoids the dependence on the hard coding to parse the preset instructions, solves the problem that the hard coding at the client side has to be updated after control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client side.
  • the client side only needs to receive the adjusting member under the microphone order adjusting function, and does not need to receive members who are adjusted passively or the entire updated sequence of members.
  • the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the usage efficiency of the computer network.
  • FIG. 15 depicts an exemplary client terminal consistent with various embodiments.
  • the client terminal can include a first receiving unit 1502 and a switching unit 1504.
  • the first receiving unit 1502 is configured to receive a third message (sent by a server) that is configured to indicate a participation in a third voice conversation (i. e. , a speaking status of a client terminal is switched from “waiting to speak” to “ready to speak” ) .
  • the client terminal and the server are synchronized in a timeline.
  • the first receiving unit is further configured to receive a microphone handover time point notified by the server corresponding to the client terminal.
  • the microphone handover time point refers to a time point designated by the server along the timeline.
  • the switching unit 1504, coupled with the above first receiving unit 1502, is configured to switch the speaking status of the client terminal from “waiting to speak” to “ready to speak” based on the third message.
  • the switching unit is further configured to switch the speaking status of the client terminal from “ready to speak” to “waiting to speak” when the microphone handover time point of the client terminal arrives.
  • the client terminal receives the microphone handover time point from the server (i.e., a time point designated by the server in the timeline, or an absolute time point). Combined with the microphone takeover instruction received from the server, the client terminal achieves accurate switching from the “ready to speak” status to the “waiting to speak” status based on the preset control logic and the synchronization with the server.
  • because the client terminal receives the microphone handover time point, the goal of the separation of the control logic of the microphone order and the hard coding at the client terminal is achieved.
  • through the unity of the absolute time, the accurate control over the microphone handover time point of the client terminal is achieved.
  • the client terminal is synchronized with the server in the timeline.
  • the time of the client terminal and the time of the server are synchronized.
  • the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client terminal time point synchronized with the server.
  • FIG. 16 depicts another exemplary client terminal consistent with various embodiments.
  • the client terminal can further include a second receiving unit 1602 and a second parsing unit 1604.
  • the second receiving unit 1602 is configured to receive member ID information and destination order information (sent by the server) of a member who needs microphone order adjustment in a third sequence of members.
  • the third sequence of members is configured to indicate the speaking order of members participating in the third voice conversation.
  • the second parsing unit 1604 is configured to parse the member ID information and the destination order information, and to obtain the adjusted third sequence of members.
  • this method achieves the goal of notifying the client terminal of the adjusting information of the microphone order. More specifically, the client terminal does not receive instructions corresponding to the adjusting operation of the microphone order from the server. Instead, the client terminal receives the member ID information and the destination order information of the member who needs microphone order adjustment directly.
  • This method avoids the dependence on the hard coding to parse the preset instructions, solves the problem that the hard coding at the client terminal has to be updated after control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client terminal.
  • the client terminal only needs to receive the adjusting member under the microphone order adjusting function, and does not need to receive members who are adjusted passively or the entire updated sequence of members.
  • the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the usage efficiency of the computer network.
  • FIG. 17 depicts an exemplary computer system consistent with various disclosed embodiments.
  • the system includes a server 1702 and a plurality of client terminals 1704.
  • client terminals 1704 are connected with the server 1702.
  • the client terminals or client sides are members participating in the same voice conversation.
  • the server 1702 and the plurality of client terminals 1704 form a computer system similar to the client-server structure.
  • the server 1702 can be any server consistent with various disclosed embodiments.
  • One or more of the plurality of client terminals 1704 can be any client terminals consistent with various disclosed embodiments.
  • the server 1702 can send information including the microphone takeover indication through the first sending unit 902 to the one of the client terminals 1704 currently owning the speaking privilege, i.e., a member of the voice conversation. Then, this member can receive the message through the first receiving unit 1502, and finish the microphone takeover operation through the switching unit 1504 (switching from the “waiting to speak” status to the “ready to speak” status) based on the instruction from the server 1702.
  • the server 1702 can notify the member of the microphone handover time point through the notifying unit 904.
  • the member can receive the microphone handover time point through the first receiving unit 1502.
  • the member can finish the microphone handover operation through the switching unit 1504 (switching from the “ready to speak” status to the “waiting to speak” status).
  • the server 1702 sends the microphone handover time point to the client terminal 1704 (i.e., a time point designated by the server in the timeline, or an absolute time point). Combined with the microphone takeover instruction sent from the server 1702, the client terminal 1704 achieves accurate switching from the “ready to speak” status to the “waiting to speak” status based on the preset control logic and the synchronization with the server 1702.
  • the client terminal 1704 is synchronized with the server 1702 in the timeline.
  • the time of the client terminal and the time of the server are synchronized.
  • the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client terminal time point synchronized with the server.
  • each embodiment is progressively described, i.e., each embodiment focuses on its differences from the other embodiments. Similar and/or identical portions among various embodiments can be referred to with each other.
  • an exemplary apparatus (e.g., a server, a client terminal) is described with respect to the corresponding methods.
  • the disclosed methods, servers, client terminals and/or systems can be implemented in a suitable computing environment.
  • the disclosure can be described with reference to symbol (s) and step (s) performed by one or more computers, unless otherwise specified. Therefore, steps and/or implementations described herein can be described one or more times and executed by computer (s) .
  • the term “executed by computer (s) ” includes an execution of a computer processing unit on electronic signals of data in a structured type. Such execution can convert data or maintain the data in a position in a memory system (or storage device) of the computer, which can be reconfigured to alter the execution of the computer as appreciated by those skilled in the art.
  • the data structure in which the data is maintained includes a physical location in the memory, which has specific properties defined by the data format.
  • the embodiments described herein are not limited. The steps and implementations described herein may be performed by hardware.
  • a module can be a software object executed on a computing system.
  • a variety of components described herein including elements, modules, units, engines, and services can be executed in the computing system.
  • the disclosed methods, servers, and/or client terminals can be implemented in a software manner. Of course, the disclosed methods, servers, and/or client terminals can be implemented using hardware. All of which are within the scope of the present disclosure.
  • the disclosed modules can be configured in one apparatus (e. g. , a processing unit) or configured in multiple apparatus as desired.
  • the modules disclosed herein can be integrated in one module or in multiple modules.
  • Each of the modules disclosed herein can be divided into one or more sub-modules, which can be recombined in any manner.
  • suitable software and/or hardware may be included and used in the disclosed methods and systems.
  • the disclosed embodiments can be implemented by hardware only, or alternatively by software products only.
  • the software products can be stored in a computer-readable storage medium including, e. g. , ROM/RAM, magnetic disk, optical disk, etc.
  • the software products can include suitable commands to enable a terminal device (e. g. , including a mobile phone, a personal computer, a server, or a network device, etc. ) to implement the disclosed embodiments.
  • a server sends a first message that is configured to indicate a participation in a first voice conversation (i.e., a speaking status of a first client side is switched from “waiting to speak” to “ready to speak”) to the first client side.
  • the first client side and the server are synchronized in a timeline.
  • the server notifies the first client side or all members participating in the first voice conversation (including the first client side) of a microphone handover time point corresponding to the first client side.
  • the microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives.
  • the microphone handover time point refers to a time point designated by the server along the timeline.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Methods, servers, client terminals, and computer systems for controlling microphone order are provided. A server sends a first message that is configured to indicate a participation in a first voice conversation to a first client side. The server notifies the first client side or all members participating in the first voice conversation of a microphone handover time point corresponding to the first client side. Thus, the technical effect of more accurate control logic of the microphone order is achieved on the basis of the separation of the control logic of the microphone order and the hard coding at the client side, so as to solve the technical problem in current technology that the microphone handover time point of the client side is difficult to control accurately when the control logic of the microphone order is separated from the hard coding at the client side.

Description

METHODS AND SYSTEMS FOR CONTROLLING MICROPHONE ORDER
CROSS-REFERENCES TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No.201310416694.2, filed on September 12, 2013, the entire content of which is incorporated herein by reference.
FIELD OF THE DISCLOSURE
The present disclosure generally relates to the field of Internet communication technology and, more particularly, relates to methods, servers, client terminals, and computer systems for controlling microphone order.
BACKGROUND
In the current field of Internet communications, the multiplayer voice/video business is a social business based on multimedia. In the multiplayer voice/video business, the control of a speaking order is an important capability. Without the capability to control the speaking order, the speaking order can very easily become confused in multiplayer voice/video applications, and the user experience can be affected.
Current voice/video chatting tools solve the speaking order problem by adopting a microphone order module in a client side. The two characteristics of the microphone order module are: 1) the microphone takeover time length; and 2) the microphone order adjusting function. The microphone takeover time length is usually configured to restrict the speaking time of a member who is participating in the voice conversation. For example, in some products, after the member participating in the voice conversation obtains a speaking right, the corresponding user can obtain a microphone takeover time length as long as 60 s. After the microphone takeover time length expires, the client side can be forced to exit the speaking status to signal the end of the speaking session. The microphone order adjusting function is usually configured to adjust the speaking order of members participating in the voice conversation. For example, the adjusting function can be moving the speaking order of a member up, down, or to the top, etc.
However, the control logic of the current microphone order module is usually completed at the client side, or the control logic of the microphone order is usually written into the hard coding at the client side. For example, for the microphone takeover time length, the common practice is to fix the microphone takeover time length at the client side. As a result, client sides of the same version have the same microphone takeover time length. The modification of the microphone takeover time length needs to wait until the release of a new version of the client side. For the microphone order adjusting function, the common practice is to assign a separate command to each adjusting method as a preset instruction fixed at the client side.
In the approaches above, the microphone takeover time length and the microphone order control function are all tightly related to the client side version. Once a new version of the client side is released, the microphone takeover time length and the functions that can support an administrator to adjust the microphone order are already fixed. Newer functions can only be experienced by updating the client side to a newer version. As a result, when new function experiences are added to the microphone order module (e.g., the prolonging of the microphone takeover time length, or new rules of the microphone order adjustment), the corresponding experience can only be obtained by updating the client side, and a client side of an old version can show abnormal behavior. Thus, as disclosed, a framework needs to be proposed to achieve the control of the microphone order from the server side, so as to satisfy the requirement of experiencing new function designs under the microphone order module without updating the client side.
Part of the above requirement has been achieved by current technology, e.g., the control of the microphone takeover time length can be performed by the server. More specifically, the server usually sends a microphone takeover time length to a client side. The client side starts to count down after receiving the microphone takeover time length. Thus, the client side can experience the newly set microphone takeover time length without updating to a new version. However, in this approach, because the time lengths of information transmission from the server to different client sides are different, the actual speaking time length at the client side often deviates from the microphone takeover time length notified by the server.
For example, as shown in the first voice conversation environment depicted in FIG. 1, client sides 102, 104, 106, 108, and 110 are a plurality of members in the voice conversation. A server 100 is configured to control these members. The information transmitting time length between the client sides 102, 104, 106 and the server 100 is 0.8 s, but the information transmitting time length between the client sides 108, 110 and the server 100 is 2.8 s. In this scenario, although the microphone takeover time lengths notified by the server to the client sides 102 and 108 are the same, the client sides 102 and 108 receive the notification in sequence. Because the server starts to count down the speaking time from the time the notification was sent out, the actual speaking times of the client side 102 and the client side 108 differ by 2 s, which can result in a situation where the speaking is terminated before the speaking time displayed on the client side is over. Thus, under such approaches, the microphone handover time for members participating in the voice conversation is hard to control, and the user experience is also affected.
The disclosed method, apparatus and system are directed to solve one or more problems set forth above and other problems.
BRIEF SUMMARY OF THE DISCLOSURE
According to various embodiments, there is provided a method for controlling microphone order. In the method, a server sends a first message that is configured to indicate a participation in a first voice conversation to a first client side when a speaking status of the first client side is switched from “waiting to speak” to “ready to speak”, wherein the first client side and the server are synchronized in a timeline. The server notifies the first client side or all members participating in the first voice conversation including the first client side of a microphone handover time point corresponding to the first client side, wherein the microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
According to various embodiments, there is provided a method for controlling microphone order. In the method, when a speaking status of a client side is to be switched from “waiting to speak” to “ready to speak”, the client side receives a first message, sent by a server, that is configured to indicate a participation in a first voice conversation, and switches the speaking status of the client side from “waiting to speak” to “ready to speak” based on the first message, wherein the client side and the server are synchronized in a timeline. The client side receives a microphone handover time point notified by the server corresponding to the client side, and switches the speaking status of the client side from “ready to speak” to “waiting to speak” when the microphone handover time point of the client side arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
According to various embodiments, there is provided a server. In the server, a first sending unit is configured to send a first message that is configured to indicate a participation in a first voice conversation to a first client terminal when a speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”. The first client terminal and the server are synchronized in a timeline. A notifying unit is configured to notify the first client terminal or all members participating in the first voice conversation including the first client terminal of a microphone handover time point corresponding to the first client terminal. The microphone handover time point is configured to indicate that the speaking status of the first client terminal is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives. The microphone handover time point refers to a time point designated by the server along the timeline.
According to various embodiments, there is provided a client terminal. In the client terminal, a first receiving unit is configured to receive a first message sent by a server that is configured to indicate a participation in a first voice conversation when a speaking status of a client terminal is switched from “waiting to speak” to “ready to speak” . The client terminal and the server are synchronized in a timeline. The first receiving unit is further configured to receive a microphone handover time point notified by the server corresponding to the client terminal. The microphone handover time point refers a time point designated by the server along the timeline. A switching unit is configured to switch the speaking status of the client terminal from  “waiting to speak” to “ready to speak” based on the first message. The switching unit is further configured to switch the speaking status of the client terminal from “ready to speak” to “waiting to speak” when the microphone handover time point of the client terminal arrives.
According to various embodiments, there is provided a system for controlling microphone order, including a server and other client terminals.
Other aspects or embodiments of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The following drawings are merely examples for illustrative purposes according to various disclosed embodiments and are not intended to limit the scope of the present disclosure.
FIG. 1 depicts an exemplary voice conversation environment based on current technology;
FIG. 2 depicts an exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 3 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 4 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 5 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 6 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 7 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 8 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 9 depicts an exemplary server consistent with various disclosed embodiments;
FIG. 10 depicts another exemplary server consistent with various disclosed embodiments;
FIG. 11 depicts another exemplary server consistent with various disclosed embodiments;
FIG. 12 depicts another exemplary server consistent with various disclosed embodiments;
FIG. 13 depicts another exemplary server consistent with various disclosed embodiments;
FIG. 14 depicts another exemplary server consistent with various disclosed embodiments;
FIG. 15 depicts an exemplary client terminal consistent with various disclosed embodiments;
FIG. 16 depicts another exemplary client terminal consistent with various disclosed embodiments;
FIG. 17 depicts an exemplary computer system consistent with various disclosed embodiments;
FIG. 18 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 19 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments;
FIG. 20 depicts an exemplary environment incorporating certain disclosed embodiments; and
FIG. 21 depicts an exemplary computer system consistent with the disclosed embodiments.
DETAILED DESCRIPTION
Reference will now be made in detail to exemplary embodiments of the disclosure, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIGS. 1-19 depict exemplary methods, servers, client sides, and computer systems for controlling microphone order. The exemplary methods, servers, client sides, and computer systems can be implemented, for example, in an exemplary environment 2000 as shown in FIG. 20.
As shown in FIG. 20, the environment 2000 can include a server 2004, a client side 2006, and a communication network 2002. The server 2004 and the client side 2006 may be coupled through the communication network 2002 for information exchange, for example, Internet searching, webpage browsing, etc. Although only one client side 2006 and one server 2004 are shown in the environment 2000, any number of client sides 2006 or servers 2004 may be included, and other devices may also be included.
The communication network 2002 may include any appropriate type of communication network for providing network connections to the server 2004 and client side 2006 or among multiple servers 2004 or client sides 2006. For example, the communication network 2002 may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless.
A client side, as used herein, may refer to any appropriate client terminal device with certain computing capabilities including, for example, a personal computer (PC) , a work station computer, a notebook computer, a car-carrying computer (e. g. , carried in a car or other vehicles) , a server computer, a hand-held computing device (e. g. , a tablet computer) , a mobile terminal (e. g. , a mobile phone, a smart phone, an iPad, and/or an aPad) , a POS (i. e. , point of sale) device, or any other user-side computing device. Further, the terms “terminal” and “terminal device” can be used interchangeably. In certain embodiments, a client side may also refer to an application program running on the client terminal. A client terminal may run one or more client application programs.
A server, as used herein, may refer to one or more server computers configured to provide certain server functionalities including, for example, search engines and database management. A server may also include one or more processors to execute computer programs in parallel.
The server 2004 and the client side 2006 may be implemented on any appropriate computing platform. FIG. 21 shows a block diagram of an exemplary computing system 2100 capable of implementing the server 2004 and/or the client side 2006. As shown in FIG. 21, the exemplary computer system 2100 may include a processor 2102, a storage medium 2104, a monitor 2106, a communication module 2108, a database 2110, peripherals 2112, and one or more buses 2114 to couple the devices together. Certain devices may be omitted and other devices may be included.
The processor 2102 can include any appropriate processor or processors. Further, the processor 2102 can include multiple cores for multi-thread or parallel processing. The storage medium 2104 may include memory modules, for example, ROM, RAM, and flash memory modules, and mass storages, for example, CD-ROM, U-disk, removable hard disk, etc. The storage medium 2104 may store computer programs for implementing various processes, when executed by the processor 2102.
Further, the peripherals 2112 may include I/O devices, for example, keyboard and mouse, and the communication module 2108 may include network devices for establishing connections through the communication network 2002. The database 2110 may include one or more databases for storing certain data and for performing certain operations on the stored data, for example, webpage browsing, database searching, etc.
In operation, the client side 2006 may cause the server 2004 to perform certain actions, for example, an Internet search or other database operations. The server 2004 may be configured to provide structures and functions for such actions and operations. More particularly, the server 2004 may include a multi-user voice/video conference system for real-time voice/video communication. In various embodiments, a terminal, for example, a mobile terminal involved in the disclosed methods and systems can include the client side 2006.
It should be noted that in the specification, claims, and drawings of the present disclosure, terms such as "first", "second", "third", "fourth", etc. are used to distinguish similar objects, and are not necessarily used to describe a specific order or priority. It is understood that data described by such terms can be used interchangeably under appropriate circumstances, so that the present disclosure can include examples executed in orders other than the orders illustrated or described herein. In addition, the terms "comprising" and "including" or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or apparatus containing a number of elements includes not only those elements but also other elements that are not expressly listed, or further includes inherent elements of the process, method, article, or apparatus. Without further restrictions, the statement “include a ……” does not exclude other elements included in the process, method, article, or apparatus having those elements.
FIG. 2 depicts an exemplary method for controlling microphone order consistent with various embodiments. As shown in FIG. 2, the method can include the following steps:
In Step 202, a server sends a first message that is configured to indicate a participation in a first voice conversation (i. e. , a speaking status of a first client side is switched from “waiting to speak” to “ready to speak” ) to the first client side. The first client side and the server are synchronized in a timeline (i. e. , a time axis or a real time sequence, etc. ) .
In Step 204, the server may notify the first client side or all members participating in the first voice conversation (including the first client side) of a microphone handover time point corresponding to the first client side. The microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives. The microphone handover time point refers to a time point designated by the server along the timeline.
It is understood that one of the technical problems to be solved by the present disclosure is to provide a method to achieve the goal of more accurate control logic of the microphone order, on the basis of the separation of the control logic of the microphone order and the hard coding at the client side. In order to achieve this goal, the present disclosure provides a method for controlling microphone order. More specifically, as an advantage of the present disclosure, the above method for controlling microphone order can be implemented in the same or a similar application environment as the current method for controlling microphone order, without the need to adjust the original framework.
For example, an exemplary implementation environment of this embodiment is depicted in FIG. 1, i.e., a multiplayer voice conversation environment participated in by client sides 102, 104, 106, 108, and 110, marked as a first voice conversation. The multiplayer voice conversation environment can be, but is not limited to, a pure voice conversation environment, or a multiplayer interactive environment including the element of voice conversation (such as a multiplayer video environment).
A member participating in the first voice conversation depicted in FIG. 1 is used as an example to provide a detailed description of the embodiment. For the convenience of the description, that member is marked as the first client side.
Based on the method for controlling microphone order provided by the disclosed embodiments, in Step 202, the server sends the message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of the first client side is switched from “waiting to speak” to “ready to speak” ) to the first client side. For the convenience of description, the message is marked as the first message.
In one embodiment, the first message is sent by the server to the first client side. The first message can be configured to indicate that the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”, and/or to indicate a switching time point at which the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”.
When the first client side is at the status of waiting to speak, the voice information inputted by the user from the first client side is shielded from other members participating in the first voice conversation. In other words, in the waiting-to-speak status, the voice information of the user using the first client side cannot be transmitted instantaneously to one or more users using other client sides in the first voice conversation. On the other hand, when the first client side is at the status of ready to speak, the voice information inputted by the user from the first client side can be received by other members participating in the first voice conversation. More specifically, the voice information can be transmitted to other client sides by, but not limited to, the server, and/or by the first client side directly.
More specifically, in various embodiments, the first client side can have multiple ways to achieve the above waiting to speak status and the ready to speak status. For example, when the first client side is at the status of waiting to speak, the voice input apparatus corresponding to the first client side and/or the voice channel currently used by the first client side can be shielded.
When the first client side is at the status of ready to speak, the shielding of the above voice input apparatus and/or the voice channel can be lifted. In another embodiment, when the first client side is at the status of waiting to speak, the voice information obtained from the first client side can be selected not to be transmitted. When the first client side is at the speaking status of ready to speak, the voice information obtained from the first client side can be transmitted to the outside of the first client side to be received by other members participating in the first voice conversation.
Of course, these are just some examples of the disclosed embodiments. There is no limitation on the specific implementation of the waiting to speak status of the first client side, the ready to speak status of the first client side, or the switching between these two statuses in the present disclosure.
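For illustration only, the following is a minimal Python sketch of one way a client side could implement the two statuses and the optional shielding described above; the class and method names are hypothetical and not part of the disclosure.

# A minimal sketch, assuming a client that can either mute its voice input
# apparatus or drop captured frames while waiting to speak.
class SpeakingClient:
    WAITING_TO_SPEAK = "waiting_to_speak"
    READY_TO_SPEAK = "ready_to_speak"

    def __init__(self):
        self.status = self.WAITING_TO_SPEAK
        self.input_muted = True  # voice input apparatus shielded by default

    def switch_to_ready(self):
        # Lift the shielding so captured audio may be transmitted.
        self.status = self.READY_TO_SPEAK
        self.input_muted = False

    def switch_to_waiting(self):
        # Shield the voice input apparatus again.
        self.status = self.WAITING_TO_SPEAK
        self.input_muted = True

    def maybe_transmit(self, audio_frame):
        # Alternative approach: capture always, but only forward frames
        # while the client is in the ready-to-speak status.
        if self.status == self.READY_TO_SPEAK:
            return audio_frame
        return None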
Generally, the transmitting operation of the first message in Step 202 can include the following two situations.
 (1) The server sends the first message to the first client side based on the control logic of the microphone order, or the server takes the initiative to notify the first client side to take over the microphone;
 (2) The server responds to the query request from the first client side, and sends the first message (i. e. , the microphone takeover time point, including the specific time point to instruct the first client side to switch from the waiting to speak status to the ready to speak status) to the first client side.
The first message may also be sent in other feasible ways. It is understood that the above implementing methods should be considered to be within the scope of protection of the present disclosure.
More specifically, in certain embodiments, the specific format of the first message can be an HTTP (Hypertext Transfer Protocol) message. However, this is not a limitation of the present disclosure; e.g., in certain embodiments, the format of the first message can be an FTP (File Transfer Protocol) message, or another feasible request that meets the requirements of the text transmitting format.
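As a purely illustrative sketch, the body of such an HTTP first message could be built as follows; the field names and values are assumptions, since the disclosure does not fix a payload format.

# A hypothetical illustration of how the first message might be carried in
# an HTTP body; the field names are assumptions, not defined by the disclosure.
import json

first_message = {
    "type": "first_message",
    "conversation_id": "conversation-1",
    "target_status": "ready_to_speak",     # switch from waiting to speak
    "takeover_time_point": 1409800000.0,   # optional switching time point
}
http_body = json.dumps(first_message)
print(http_body)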
FIG. 3 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. The first client side can be synchronized with the server in advance, e.g., as depicted in FIG. 3, before Step 202 or Step 204, the above method for controlling microphone order can include:
Step 302: the server notifies the first client side of a client side time used for synchronization, obtained from a server time and an information transmitting time length between the server and the first client side.
Step 304: the server notifies each member of the corresponding client side time used for synchronization obtained from the server time and the information transmitting time length between the server and each member participating in the first voice conversation (including the first client side) .
In one embodiment, the server can synchronize with the client side after the client side builds a connection with the server. For example, in Step 302, the server can notify the first client side of the client side time used for synchronization. The client side time used for  synchronization can be obtained from the server time and the information transmitting time length between the server and the first client side.
For example, in one scenario, the information transmitting time length between the server and the first client side is detected to be 1.8s, i. e. , the delay of the time in transmitting the first message is 1.8s. Thus, the above client side time used for synchronization can be obtained by the following formula:
T1=T0+1.8,
T0 is the server time, and T1 is the client side time of the first client side.
In the above scenario, the client side time T1 in Step 302 is notified to the first client side, and the first client side receives the client side time T1 after a 1.8s delay. At the same time that the first client side receives the notification, the server time T0 has also increased by 1.8s. Thus, the time values of the client side and the server are the same at this moment, and the time of the client side and the time of the server are synchronized after this moment, i.e., the first client side is synchronized with the connected server.
Similarly, the information transmitting time length between the server and each member can be detected. The client side time corresponding to each member can be obtained based on a similar formula, and sent to the corresponding member in Step 304. Thus, the synchronization of each member with the server can be achieved, and more accurate control of the speaking time of the first client side can be achieved in combination with Step 204.
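The following is a minimal Python sketch of the synchronization in Steps 302 and 304, assuming the one-way information transmitting time length has already been measured for each member; the member names and delay values are example assumptions.

# A minimal sketch of Steps 302/304: each member is told the client side
# time T1 = T0 + delay, so that by the time the notification arrives the
# server clock has advanced by the same delay and both sides agree.
def synchronized_client_time(server_time, transmit_delay):
    return server_time + transmit_delay

server_time = 1000.0                               # T0, seconds on the server timeline
measured_delays = {"client_1": 1.8, "client_2": 0.9}   # per-member transmitting time lengths
for member, delay in measured_delays.items():
    t1 = synchronized_client_time(server_time, delay)
    print(member, "is notified to set its clock to", t1)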
Of course, the above described methods are just certain examples of how the first client side can synchronize with the server. In other embodiments, other feasible methods can be used to synchronize the first client side with the server.
FIG. 4 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. For example, as another example of implementing terminal synchronization, FIG. 4 depicts another exemplary implementation method. More specifically, as shown in FIG. 4, Step 204 can include the following steps.
In Step 402, the microphone handover time point designated by the server is calibrated or adjusted based on the time difference between the time of the server and the time of the client side, and the information transmitting time length between the server and the first client side;
In Step 404, the adjusted microphone handover time point is notified to the first client side or all members participating the first voice conversation (including the first client side) .
For example, in one scenario, a time difference of 2.6s is detected, i.e., the time of the first client side is 2.6s behind the time of the server. An information transmitting time length of 1.8s is also detected, i.e., the delay in transmitting the first message is 1.8s. Thus, the above adjustment can be performed by the following formula:
T3=T2-2.6+1.8,
T2 is the microphone handover time point designated by the server, and T3 is the adjusted microphone handover time point.
It is understood that, in the above scenario, because the first client side is synchronized with the server, the first client side can be regarded as being synchronized with the server in the same timeline. Because the microphone handover time point may be an absolute  time point, the microphone handover time point can be regarded as a time point designated by the server on the timeline.
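The calibration of Step 402 can be sketched as follows in Python, assuming the clock offset (how far the client lags the server) and the transmitting time length are already known; the numeric values simply repeat the example above.

# A sketch of the Step 402 calibration: T3 = T2 - lag + delay.
def calibrate_handover_time(server_handover_time, client_lag, transmit_delay):
    return server_handover_time - client_lag + transmit_delay

t2 = 1060.0   # microphone handover time point designated by the server
t3 = calibrate_handover_time(t2, client_lag=2.6, transmit_delay=1.8)
print("notify the client side of the adjusted handover time point:", t3)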
Based on the above described methods for controlling microphone order, in Step 204, the first client side or all members participating in the first voice conversation (including the first client side) can be notified of a microphone handover time point corresponding to the first client side with substantial accuracy.
Thus, according to the disclosed embodiments, neither the control method of writing the control logic of the microphone order into the hard coding at the client side in advance nor the control method of sending a pre-set microphone takeover time length to the client side is adopted. Instead, the absolute time point indicating that the speaking status of the client side can be switched from ready to speak to waiting to speak (i.e., the above microphone handover time point) is notified to the first client side. More specifically, the microphone handover time point can be a time point designated by the server.
As mentioned above, the first client side is synchronized with the server in the timeline. In other words, the time of the client side and the time of the server are synchronized. In this scenario, the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client side time point synchronized with the server. Thus, the client side can execute the microphone handover operation (i.e., switch from the ready to speak status to the waiting to speak status) accurately based on the received microphone handover time point.
This method avoids the dependence on the hard coding at the client side to control the execution of the microphone handover operation of the first client side. This method also eliminates the interference with the accurate control of the microphone handover time point of the first client side caused by the information transmitting time and other factors. Thus, this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order and the hard coding at the client side, so as to solve the technical problem in current technology that the microphone handover time point of the client side is difficult to control accurately under a design in which the control logic of the microphone order is separated from the hard coding at the client side.
Similarly, there is no limitation on the specific method by which the first client side performs the microphone handover operation.
For example, similar to the above microphone takeover notification sent to the first client side by the server, the operation in which the server notifies the first client side of the microphone handover time point can be performed, but is not limited to being performed, as the server taking the initiative to execute the notifying operation based on the control logic, or as the server responding to received query information sent by the first client side.
On the other hand, the above notification of the microphone handover time point can be performed, but is not limited to being performed, as the server sending a separate message, as the server adding the notification to other information sent to the first client side, as the server sending the notification together with the first message, or as the server adding the microphone handover time point into the first message sent to the first client side.
Further, in the scenario where the server sends the notification of the microphone handover time point and the first message to the first client side separately, there is no limit on the order in which the notification and the first message are sent. More specifically, there is no restriction on the order of execution of Step 202 and Step 204. For example, in one scenario, the server can notify the first client side of the microphone handover time point pre-calculated from the control logic before sending the first message indicating that the first client side is to take over the microphone, so that the first client side can be notified of the preset microphone handover time point before the first client side takes over the microphone.
In another scenario, the server can choose to send the first message to the first client side first, and then notify the first client side of the microphone handover time point based on the sending time of the first message or based on the exact microphone takeover time point (the time point at which the status of the first client side is switched from waiting to speak to ready to speak) received by the server, so as to have more accurate control of the duration of the ready to speak status of the first client side, i.e., the speaking time.
FIG. 5 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. Considering the compatibility of the technology design of the current disclosure with the current control logic of the microphone order, and in order to achieve the goal of having quantitative control over the speaking time of the members participating in the first voice conversation, as shown in FIG. 5, before Step 204, the above method for controlling microphone order can further include the following.
In Step 502, the server obtains the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client side. The microphone takeover time point refers to a time point along the timeline when the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”.
As a part of the feasible control logic of the microphone order, in Step 502, the server can obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client side. More specifically, the microphone takeover time point can be a time point along the timeline when the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”. The preset microphone takeover time length can be the time length obtained by the server as a limitation on the speaking time of the members participating in the first voice conversation. By setting this time length, the speaking time of the members can be managed in a unified way, so as to provide a better user experience for the users of these client sides, and to provide more efficiency and better service for participants and managers of the voice conversation.
More specifically, the server can obtain the microphone handover time based on the following formula:
Toff=Ton+D,
Toff is the microphone handover time point, and Ton is the microphone takeover time point of the first client side obtained by the server based on the control logic of the microphone order. D is the preset microphone takeover time length.
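A minimal Python sketch of this Step 502 computation follows; the 60-second takeover time length and the starting time point are example values, not values fixed by the disclosure.

# A sketch of Step 502: Toff = Ton + D, where D is the preset microphone
# takeover time length that limits how long one member may speak.
def handover_time_point(takeover_time_point, takeover_time_length):
    return takeover_time_point + takeover_time_length

PRESET_TAKEOVER_LENGTH = 60.0    # e.g. 60 s of speaking time per member
t_on = 1000.0                    # Ton, when the member starts to speak
t_off = handover_time_point(t_on, PRESET_TAKEOVER_LENGTH)
print("microphone handover time point:", t_off)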
In other embodiments, the microphone handover time point can be obtained by other methods based on the microphone takeover time point and the preset microphone takeover time length. For example, in certain embodiments, the above microphone handover time point can be calibrated or adjusted by the information transmitting time length needed between the server and the client side. On the other hand, the microphone handover time point can also be obtained by other methods. For example, the microphone handover time point can be set as, but is not limited to, a series of time points with a fixed time interval along the timeline of the server.
FIG. 6 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. Based on the above description, as depicted in FIG. 6, after Step 204, the method for controlling microphone order can further include the following.
In Step 602, the server sends a second message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of a second client side is switched from “waiting to speak” to “ready to speak” ) to the second client side when the server reaches the microphone handover time point. The second client side is located next to the first client side in the first sequence of members. The first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
In Step 604, the server notifies the second client side or all members participating in the first voice conversation (including the first client side and the second client side) of the microphone handover time point corresponding to the second client side.
As a feasible option of the control logic, members who are waiting to speak can be sorted into a sequence of members. Thus, one member can own the exclusive privilege to speak at any one time in the first voice conversation, so as to avoid the interruption caused by other members speaking at the same time. More specifically, the above member owning the speaking privilege can be the first member of the current sequence of members. Other members waiting to speak can be the second to the Nth members of the sequence of members. The speaking privilege can be passed on one by one in this order after the current member hands over the microphone. For the convenience of description, the sequence of members corresponding to members participating in the first voice conversation can be marked as the first sequence of members.
It should be noted that the first sequence of members can either include all members participating in the first voice conversation, or include only the members recorded in the server who are waiting to speak in the first voice conversation. More specifically, the members who are waiting to speak can be members who request the speaking privilege from the server. In other words, after the server receives a request for the speaking privilege, the server can mark the member who sends the request as a member waiting to speak, and record the order of members waiting to speak to form the first sequence of members. In the above scenario, members who have not sent a request for the speaking privilege may, but are not limited to, not appear in the sequence of members.
Specifically, the implementation of the above control logic can be: setting the status of the member currently owning the speaking privilege as ready to speak, setting the status of the other members waiting to speak as waiting to speak, and implementing the control of the members participating in the first voice conversation based on the above method for controlling microphone order.
For example, in Step 602, the server sends a second message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of a second client side is switched from “waiting to speak” to “ready to speak”) to the second client side when the server reaches the microphone handover time point. The second client side is located next to the first client side in the first sequence of members. The specific implementation of the method in which the server sends a microphone takeover instruction to the second client side can be similar to that for the first client side.
Similarly, in Step 604, the server can also notify the second client side or all members participating in the first voice conversation (including the first client side and the  second client side) of the microphone handover time point corresponding to the second client side, so as to achieve accurate control over the microphone takeover time point and the microphone handover time point of the second client side.
By analogy, through the methods for controlling microphone order provided by the embodiments, the accurate control over all members participating the first voice conversation can be achieved based on the preset control logic of the microphone order.
The specific format of the first sequence of members can vary. Similarly, the control logic of the microphone order can also vary. For example, besides the member currently owning the speaking privilege, the server can record only one other member obtained through an election or another feasible selection mechanism, marked as the grabbing-success member. After the current speaking member hands over the microphone, the server can, but is not limited to, grant the speaking privilege to the grabbing-success member.
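One possible server-side representation of the first sequence of members is sketched below in Python; the class and member names are hypothetical, and a simple queue is only one of the feasible formats mentioned above.

# A sketch, assuming the first sequence of members is kept as a queue of
# members waiting to speak; the head of the queue holds the speaking
# privilege until its microphone handover time point arrives.
from collections import deque

class MicrophoneQueue:
    def __init__(self):
        self.sequence = deque()   # first sequence of members

    def request_privilege(self, member_id):
        # A member who requests the speaking privilege is recorded as waiting.
        if member_id not in self.sequence:
            self.sequence.append(member_id)

    def current_speaker(self):
        return self.sequence[0] if self.sequence else None

    def hand_over(self):
        # At the handover time point the current speaker is removed and the
        # next member in the sequence takes over the microphone.
        if self.sequence:
            self.sequence.popleft()
        return self.current_speaker()

queue = MicrophoneQueue()
for member in ("member_a", "member_b", "member_c"):
    queue.request_privilege(member)
print(queue.current_speaker())   # member_a speaks first
print(queue.hand_over())         # member_b takes over the microphone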
It should be noted that, as shown in the above embodiments, by accurately controlling the microphone takeover time point and the microphone handover time point of the client side, the method for controlling microphone order provided by the disclosed embodiments provides the necessary condition for achieving the control logic of the microphone order without relying on the hard coding at the client side.
FIG. 7 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As depicted in FIG. 7, the above method for controlling microphone order can include the following.
In Step 702, the server sends member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation. The first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
In Step 704, the server sends the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation. The second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
Based on the disclosed method for controlling microphone order, in Step 702, the member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members can be sent to one or more members participating in the first voice conversation. Specifically, the member who needs microphone order adjustment can be the adjusting member specified in the message, obtained by the server, that indicates the adjusted speaking order. For example, this message can include the following information: (ID of adjusting member, adjusted position after adjustment).
The ID of adjusting member can be a 32-bit number, configured to represent the member ID information. The adjusted position after adjustment can be a 16-bit number, configured to represent the destination order information.
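As a sketch only, such an adjustment notice could be packed and unpacked as shown below; the 32-bit ID and 16-bit position follow the description above, while the big-endian byte order and helper names are assumptions not fixed by the disclosure.

# A sketch of packing the Step 702 notice (ID of adjusting member,
# adjusted position after adjustment) into a compact binary message.
import struct

def pack_adjustment(member_id, destination_position):
    # ">IH": big-endian unsigned 32-bit ID, unsigned 16-bit position.
    return struct.pack(">IH", member_id, destination_position)

def unpack_adjustment(payload):
    member_id, destination_position = struct.unpack(">IH", payload)
    return member_id, destination_position

payload = pack_adjustment(member_id=123456, destination_position=4)
print(unpack_adjustment(payload))   # (123456, 4) at the receiving client side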
In the above scenario, by receiving the member ID information and destination order information sent by the server, the members of the first voice conversation can further timely update the microphone order of the current conversation at the local client side, and  execute feasible processing operations based on updated microphone order, or display updated microphone order on the display device for the user to view the microphone order of the current conversation.
On the other hand, both the member ID information and the destination order information of the member who needs microphone order adjustment are information reflecting the goal of the microphone order adjustment or the facts of its result, and are unrelated to the preset instruction or the adjusting logic of the microphone order. Therefore, the client side does not need to be changed when the adjusting logic of the microphone order is updated. For example, in the previous example, the adjusting logic of the microphone order can be moving one member up by 1 position in one operation. In an updated adjusting logic of the microphone order, the same operation or another operation can be moving one member up by 2 positions. Other methods may also be used.
It should be noted that there is no required execution order between Step 702 or Step 704 and the other steps of the above method for controlling microphone order. For example, as depicted in FIG. 7, Step 702 and Step 704 can be executed after Step 204, before Step 202, or between Step 202 and Step 204. These variations have no effect on the implementation of the technology design of the current disclosure.
As can be seen from the above descriptions, a method of notifying the client side of the member ID information and destination order information is provided. This method achieves the goal of notifying the client side of the adjusting information of the microphone order. More specifically, the server does not send instructions corresponding to the adjusting operation of the microphone order to the client side. Instead, the server sends the member ID information and the destination order information of the member who needs microphone order adjustment to the client side directly.
This method avoids the dependence on the hard coding to parse the preset instructions, and solves the problem that the hard coding at the client side has to be updated after the control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client side.
In addition, the server only needs to notify the client side of the adjusting member under the microphone order adjusting function, and does not need to notify each client side of the members who are adjusted passively or of the entire updated sequence of members. Thus, the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the efficiency of using the computer network.
On the other hand, the server configured to monitor the first voice conversation can also be configured to monitor the second voice conversation. For example, in Step 704, the above server can also send the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in the second voice conversation. More specifically, while the server controls the microphone takeover time point and the microphone handover time point of the members participating in the first voice conversation, the server can also keep controlling the microphone order adjustment of the members participating in the second voice conversation.
FIG. 8 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As shown in FIG. 8, before Step 702 or Step 704, the above method for controlling microphone order can further include the following.
In Step 802, the server receives a preset instruction corresponding to a microphone order adjusting operation. The microphone order adjusting operation includes at least one of the following: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position N.
In Step 804, the server parses the received preset instruction to obtain the corresponding microphone order adjusting operation, and obtains the adjusted speaking order of the member who needs microphone order adjustment based on the parsed microphone order adjusting operation as the destination order information.
The microphone order adjusting operations that occur frequently in the control logic of the microphone order can include: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position N, where N >= 1. By implementing the control logic of the microphone order and by packaging the corresponding microphone order adjusting operations, the technical effect of providing a convenient functional interface for the users can be achieved.
Through Step 802 and Step 804, and in combination with Step 702 or Step 704, the same technical effect can be achieved. For example, in Step 802, the server can receive the preset instruction corresponding to the adjusting operation of the microphone order. The preset instruction can have the format of: (Operation command, ID of adjusting member).
The operation command can be a 16-bit number. Different microphone order adjusting operations correspond to different numbers. The ID of adjusting member can be a 32-bit number, configured to represent the ID information of the member who needs microphone order adjustment, i.e., the ID of adjusting member can be parsed into the ID information of the above member.
For example, in one scenario, the operation command "402" can represent the adjustment of moving a member upward. Thus, the server can further parse this preset instruction in Step 804 to obtain the adjusted order of the member. For example, the 5th position before the adjustment can be adjusted to the 4th position. Then, in Step 702 or Step 704, the 4th position becomes the destination order information of the member and is sent to the client side with the member ID information.
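The parsing of Steps 802 and 804 can be sketched as follows in Python; only the "402" command for moving a member upward comes from the example above, and the other command numbers and function names are assumptions added for illustration.

# A sketch of Steps 802/804: a preset instruction (operation command,
# ID of adjusting member) is parsed into a destination position.
MOVE_UP = 402          # "402" represents moving a member upward, as above
MOVE_DOWN = 403        # hypothetical command number
MOVE_TO_POSITION = 404 # hypothetical command number

def apply_adjustment(current_position, command, target_position=None):
    if command == MOVE_UP:
        return max(1, current_position - 1)
    if command == MOVE_DOWN:
        return current_position + 1
    if command == MOVE_TO_POSITION:
        return target_position
    raise ValueError("unknown operation command")

# The 5th position before the adjustment becomes the 4th position.
print(apply_adjustment(current_position=5, command=MOVE_UP))   # 4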
FIG. 9 depicts an exemplary server consistent with various disclosed embodiments. As depicted in FIG. 9, the server includes a first sending unit 902 and a notifying unit 904.
The first sending unit 902 is configured to send a first message that is configured to indicate a participation in a first voice conversation (i. e. , a speaking status of a first client terminal is switched from “waiting to speak” to “ready to speak” ) to the first client terminal. The first client terminal and the server are synchronized in a timeline.
The notifying unit 904 is configured to notify the first client terminal or all members participating in the first voice conversation (including the first client terminal) of a microphone handover time point corresponding to the first client terminal. The microphone handover time point is configured to indicate that the speaking status of the first client terminal is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives. The microphone handover time point refers to a time point designated by the server along the timeline.
It is understood that one of the technical problems to be solved by the present disclosure is to provide a server that achieves more accurate control logic of the microphone order on the basis of separating the control logic of the microphone order from the hard coding at the client terminal. In order to achieve this goal, the present disclosure provides a server. More specifically, as an advantage of the present disclosure, the above server can be implemented in the same or a similar application environment as the current technology without the need to adjust the original framework.
For example, an exemplary implementation environment of this embodiment is depicted in FIG. 1, i.e., a multiplayer voice conversation environment participated in by client terminals 102, 104, 106, 108, and 110, marked as a first voice conversation. The multiplayer voice conversation environment can be, but is not limited to, a pure voice conversation environment, or a multiplayer interactive environment that includes the element of voice conversation (such as a multiplayer video environment).
A member participating in the first voice conversation depicted in FIG. 1 is used as an example to provide a detailed description of the embodiment. For the convenience of the description, that member is marked as the first client terminal.
Based on the server provided by the disclosed embodiment, the first sending unit 902 can be configured to send the message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”) to the first client terminal. For the convenience of description, the message is marked as the first message.
The first message is sent by the first sending unit 902 to the first client terminal. The first message can be configured to indicate that the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”, and/or to indicate a switching time point at which the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”.
When the first client terminal is at the status of waiting to speak, the voice information inputted by the user from the first client terminal is shielded from other members participating in the first voice conversation. In other words, in the waiting-to-speak status, the voice information of the user using the first client terminal cannot be transmitted instantaneously to one or more users using other client terminals in the first voice conversation. On the other hand, when the first client terminal is at the status of ready to speak, the voice information inputted by the user from the first client terminal can be received by other members participating in the first voice conversation. More specifically, the voice information can be transmitted to other client terminals by, but not limited to, the server, and/or by the first client terminal directly.
More specifically, in various embodiments, the first client terminal can have multiple ways to achieve the above waiting to speak status and ready to speak status. For example, in one embodiment, when the first client terminal is at the status of waiting to speak, the voice input apparatus corresponding to the first client terminal and/or the voice channel currently used by the first client terminal can be shielded. When the first client terminal is at the status of ready to speak, the shielding of the above voice input apparatus and/or the voice channel can be lifted. In another embodiment, when the first client terminal is at the status of waiting to speak, the voice information obtained from the first client terminal can simply not be transmitted. When the first client terminal is at the speaking status of ready to speak, the voice information obtained from the first client terminal can be transmitted to the outside of the first client terminal to be received by other members participating in the first voice conversation.
Of course, these are just some examples of the disclosed embodiments. There is no limitation on the specific implementation of the waiting to speak status of the first client terminal, the ready to speak status of the first client terminal, or the switching between these two statuses in the present disclosure.
Generally, the transmitting operation of the first message by the first sending unit 902 can include the following two situations:
1) The first sending unit 902 sends the first message to the first client terminal based on the control logic of the microphone order, or the server takes the initiative to notify the first client terminal to take over the microphone;
2) The server responds to the query request from the first client terminal; through the first sending unit 902, the first message (i.e., the microphone takeover time point, including the specific time point instructing the first client terminal to switch from the waiting to speak status to the ready to speak status) is sent to the first client terminal.
Of course, the disclosed embodiment can include, but is not limited to, other feasible ways to send the first message. More specifically, in certain embodiments, the specific format of the first message can be an HTTP (Hypertext Transfer Protocol) message. However, this is not a limitation of the present disclosure; e.g., in certain embodiments, the format of the first message can be an FTP (File Transfer Protocol) message, or another feasible request that meets the requirements of the text transmitting format.
FIG. 10 depicts another exemplary server consistent with various disclosed embodiments. The first client terminal can be synchronized with the server in advance, e. g. , as depicted in FIG. 10, the above server can further include a synchronizing unit 1002.
The synchronizing unit 1002 is configured to notify the first client terminal of a client terminal time used for synchronization obtained from a server time and information transmitting time length between the server and the first client terminal. Or, the synchronizing unit 1002 is configured to notify each member of the corresponding client terminal time used for synchronization obtained from the server time and the information transmitting time length between the server and each member participating in the first voice conversation (including the first client terminal) .
The server can synchronize with the client terminal after the client terminal builds a connection with the server. For example, the synchronizing unit 1002 can notify the first client terminal of the client terminal time used for synchronization. The client terminal time used for synchronization can be obtained from the server time and the information transmitting time length between the server and the first client terminal.
For example, in one scenario, the information transmitting time length between the server and the first client terminal is detected to be 1.8s, i. e. , the delay of the time in transmitting the first message is 1.8s. Thus, the above client terminal time used for synchronization can be obtained by the following formula:
T1=T0+1.8,
T0 is the server time, and T1 is the client terminal time of the first client terminal.
In the above scenario, the client terminal time T1 is notified to the first client terminal by the synchronizing unit 1002, and the first client terminal receives the client terminal time T1 after a 1.8s delay. At the same time that the first client terminal receives the notification, the server time T0 has also increased by 1.8s. Thus, the time values of the client terminal and the server are the same at this moment, and the time of the client terminal and the time of the server are synchronized after this moment, i.e., the first client terminal is synchronized with the connected server.
Similarly, the information transmitting time length between the server and each member can be detected. The client terminal time corresponding to each member can be obtained based on a similar formula, and sent to the corresponding member by the synchronizing unit 1002. Thus, the synchronization of each member with the server can be achieved, and more accurate control of the speaking time of the first client terminal can be achieved in combination with the notifying unit 904.
Of course, the above described methods are just certain examples of how the first client terminal can synchronize with the server. In other embodiments, other feasible methods can be used to synchronize the first client terminal with the server.
FIG. 11 depicts another exemplary server consistent with various disclosed embodiments, as another example of implementing terminal synchronization. More specifically, as shown in FIG. 11, the notifying unit 904 can include a calibrating module 1102 and a notifying module 1104.
The calibrating module 1102 is configured to calibrate the microphone handover time point designated by the server based on the time difference between the time of the server and the time of the client terminal, and the information transmitting time length between the server and the first client terminal.
The notifying module 1104 is configured to notify the adjusted microphone handover time point to the first client terminal or all members participating the first voice conversation (including the first client terminal) .
For example, in another scenario, the time difference of 2.6s is detected, i. e. , the time of the first client terminal is 2.6s behind the time of the server. The information transmitting time length of 1.8s is also detected, i. e. , the delay of the time in transmitting the first message is 1.8s. Thus, the calibrating module 1102 can execute the calibration based on the following formula:
T3=T2-2.6+1.8,
T2 is the microphone handover time point designated by the server, and T3 is the adjusted microphone handover time point.
It is understood that, in the above scenario, because the first client terminal is synchronized with the server, the first client terminal can be regarded as being synchronized with the server in the same timeline. Because the microphone handover time point may be an absolute time point, the microphone handover time point can be regarded as a time point designated by the server on the timeline. Based on the above described server, through the notifying unit 904, the first client terminal or all members participating in the first voice conversation (including the first client terminal) can be notified of a microphone handover time point corresponding to the first client terminal.
Thus, distinct from the current technology, neither the control method of writing the control logic of the microphone order into the hard coding at the client terminal in advance, nor the control method of sending a pre-set microphone takeover time length to the client terminal, is adopted. Instead, the absolute time point indicating that the speaking status of the client terminal can be switched from ready to speak to waiting to speak (i.e., the above microphone handover time point) is notified to the first client terminal. More specifically, the microphone handover time point can be a time point designated by the server.
As mentioned above, the first client terminal is synchronized with the server in the timeline. In other words, the time of the client terminal and the time of the server are synchronized. In this scenario, the microphone handover time point designated by the server can be considered as a server time point designated by the server, as well as a client terminal time point synchronized with the server.
Thus, the client terminal can execute the microphone handover operation (i.e., switch from the ready to speak status to the waiting to speak status) accurately based on the received microphone handover time point. This method avoids the dependence on the hard coding at the client terminal to control the execution of the microphone handover operation of the first client terminal. This method also eliminates the interference with the accurate control of the microphone handover time point of the first client terminal caused by the information transmitting time and other factors. Thus, this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order and the hard coding at the client terminal, so as to solve the technical problem in current technology that the microphone handover time point of the client terminal is difficult to control accurately under a design in which the control logic of the microphone order is separated from the hard coding at the client terminal.
Similarly, there is no limitation on the specific method of performing microphone handover operations by the first client terminal.
For example, similar to the above microphone takeover notification sent to the first client terminal by the server, the operation in which the server notifies the first client terminal of the microphone handover time point can be performed, but is not limited to being performed, as the server taking the initiative to execute the notifying operation based on the control logic, or as the server responding to received query information sent by the first client terminal.
On the other hand, the above notification of the microphone handover time point can be performed, but is not limited to being performed, as the server sending a separate message, as the server adding the notification to other information sent to the first client terminal, as the server sending the notification together with the first message, or as the server adding the microphone handover time point into the first message sent to the first client terminal.
Further, in the scenario where the server sends the notification of the microphone handover time point and the first message to the first client terminal separately, there is no limit on the order in which the notification and the first message are sent. More specifically, there is no restriction on the order of execution of the first sending unit 902 and the notifying unit 904. For example, in one scenario, the server can notify the first client terminal of the microphone handover time point pre-calculated from the control logic before sending the first message indicating that the first client terminal is to take over the microphone, so that the first client terminal can be notified of the preset microphone handover time point before the first client terminal takes over the microphone. In another scenario, the server can choose to send the first message to the first client terminal first, and then notify the first client terminal of the microphone handover time point based on the sending time of the first message or based on the exact microphone takeover time point (the time point at which the status of the first client terminal is switched from waiting to speak to ready to speak) received by the server, so as to have more accurate control of the duration of the ready to speak status of the first client terminal, i.e., the speaking time.
FIG. 12 depicts another exemplary server consistent with various disclosed embodiments. Considering the compatibility of the technology design of the current disclosure with the current control logic of the microphone order, and in order to achieve the goal of having quantitative control over the speaking time of the members participating in the first voice conversation, as shown in FIG. 12, coupled with the notifying unit 904, the above server can further include an obtaining unit 1202.
The obtaining unit 1202 is configured to obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client terminal. The microphone takeover time point refers to a time point along the timeline when the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”.
As a part of the feasible control logic of the microphone order, the obtaining unit 1202 can obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client terminal. More specifically, the microphone takeover time point can be a time point along the timeline when the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”. The preset microphone takeover time length can be the time length obtained by the server as a limitation on the speaking time of the members participating in the first voice conversation. By setting this time length, the speaking time of the members can be managed in a unified way, so as to provide a better user experience for the users of these client terminals, and to provide more efficiency and better service for participants and managers of the voice conversation.
More specifically, the obtaining unit 1202 can obtain the microphone handover time based on the following formula:
Toff=Ton+D,
Toff is the microphone handover time point, and Ton is the microphone takeover time point of the first client terminal obtained by the server based on the control logic of the microphone order. D is the preset microphone takeover time length.
In other embodiments, the microphone handover time point can be obtained by other methods based on the microphone takeover time point and the preset microphone takeover time length. For example, in certain embodiments, the above microphone handover time point can be calibrated or adjusted by the information transmitting time length needed between the server and the client terminal. On the other hand, the microphone handover time point can also be obtained by other methods. For example, the microphone handover time point can be set as, but is not limited to, a series of time points with a fixed time interval along the timeline of the server.
The first sending unit 902 can further be configured to send a second message that is configured to indicate the participation in the first voice conversation (i. e. , the speaking status of a second client terminal is switched from “waiting to speak” to “ready to speak” ) to the second client terminal when the server reaches the microphone handover time point. The second client terminal is located next to the first client terminal in the first sequence of members. The  first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation.
The notifying unit 904 can further be configured to notify the second client terminal or all members participating in the first voice conversation (including the first client terminal and the second client terminal) of the microphone handover time point corresponding to the second client terminal.
As a feasible option of the control logic, members who are waiting to speak can be sorted into a sequence of members. Thus, one member can own the exclusive privilege to speak at any one time in the first voice conversation, so as to avoid the interruption caused by other members speaking at the same time. More specifically, the above member owning the speaking privilege can be the first member of the current sequence of members. Other members waiting to speak can be the second to the Nth members of the sequence of members. The speaking privilege can be passed on one by one in this order after the current member hands over the microphone. For the convenience of description, the sequence of members corresponding to members participating in the first voice conversation can be marked as the first sequence of members.
It should be noted that the first sequence of members can either include all members participating in the first voice conversation, or include only the members recorded in the server who are waiting to speak in the first voice conversation. More specifically, the members who are waiting to speak can be members who request the speaking privilege from the server. In other words, after the server receives a request for the speaking privilege, the server can mark the member who sends the request as a member waiting to speak, and record the order of members waiting to speak to form the first sequence of members. In the above scenario, members who have not sent a request for the speaking privilege may, but are not limited to, not appear in the sequence of members.
Specifically, the implementation of the above control logic can be: setting the status of the member currently owning the speaking privilege as ready to speak, setting the status of the other members waiting to speak as waiting to speak, and implementing the control of the members participating in the first voice conversation based on the above method for controlling microphone order.
For example, the first sending unit 902 can send a second message that is configured to indicate the participation in the first voice conversation (i.e., the speaking status of a second client terminal is switched from “waiting to speak” to “ready to speak”) to the second client terminal when the server reaches the microphone handover time point. The second client terminal is located next to the first client terminal in the first sequence of members. The specific implementation of the method in which the first sending unit 902 sends a microphone takeover instruction to the second client terminal can be similar to that for the first client terminal.
Similarly, the notifying unit 904 can further notify the second client terminal or all members participating in the first voice conversation (including the first client terminal and the second client terminal) of the microphone handover time point corresponding to the second client terminal, so as to achieve accurate control over the microphone takeover time point and the microphone handover time point of the second client terminal.
By analogy, through the server provided by the disclosed embodiments, accurate control over all members participating in the first voice conversation can be achieved based on the preset control logic of the microphone order.
The specific format of the first sequence of members can vary. Similarly, the control logic of the microphone order can also have other implementations. For example, besides the member currently owning the speaking privilege, the server can record only one other member obtained through an election or another feasible selection mechanism, marked as the grabbing-success member. After the current speaking member hands over the microphone, the server can, but is not limited to, grant the speaking privilege to the grabbing-success member.
It should be noted that, as shown in the above embodiments, by accurately controlling the microphone takeover time point and the microphone handover time point of the client terminal, the server provided by the disclosed embodiments provides the necessary condition for achieving the control logic of the microphone order without relying on the hard coding at the client terminal.
FIG. 13 depicts another exemplary server consistent with various disclosed embodiments. As depicted in FIG. 13, in another embodiment, the above server can include a second sending unit 1302.
The second sending unit 1302 is configured to send the member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation, or to send the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation. The first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation. The second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
Based on the server provided by the disclosed embodiment, the second sending unit 1302 can send the member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation. Specifically, the member who needs microphone order adjustment can be the adjusting member specified in the message, obtained by the server, that indicates the adjusted speaking order. For example, this message can include the following information:
 (ID of adjusting member, adjusted position after adjustment) 
The ID of adjusting member can be a 32-bit number, configured to represent the member ID information. The adjusted position after adjustment can be a 16-bit number, configured to represent the destination order information.
In the above scenario, by receiving the member ID information and destination order information sent by the second sending unit 1302, the members of the first voice conversation can further timely update the microphone order of the current conversation at the local client terminal, and execute feasible processing operations based on updated microphone order, or display updated microphone order on the display device for the user to view the microphone order of the current conversation.
On the other hand, both the member ID information and the destination order information of the member who needs microphone order adjustment are information reflecting the goal of the microphone order adjustment or the facts of its result, and are unrelated to the preset instruction or the adjusting logic of the microphone order. Therefore, the client terminal does not need to be changed when the adjusting logic of the microphone order is updated. For example, in the previous example, the adjusting logic of the microphone order can be moving one member up by 1 position in one operation. In an updated adjusting logic of the microphone order, the same operation or another operation can be moving one member up by 2 positions. Other methods may also be used.
As can be seen from the above description, a method of notifying the client terminal of the member ID information and destination order information is adopted. This method achieves the goal of notifying the client terminal of the adjusting information of the microphone order. More specifically, the server does not send instructions corresponding to the adjusting operation of the microphone order to the client terminal. Instead, the server sends the member ID information and the destination order information of the member who needs microphone order adjustment to the client terminal directly. This method avoids the dependence on the hard coding to parse the preset instructions, and solves the problem that the hard coding at the client terminal has to be updated after the control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of the separation of the control logic of the microphone order and the hard coding at the client terminal.
In addition, the server only needs to notify the client terminal of the adjusting member under the microphone order adjusting function, and does not need to notify each client terminal of the members who are adjusted passively or of the entire updated sequence of members. Thus, the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the efficiency of using the computer network.
On the other hand, the server configured to monitor the first voice conversation can also be configured to monitor the second voice conversation. For example, the above server can also send, through the second sending unit 1302, the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in the second voice conversation. More specifically, while the above server controls the microphone takeover time point and the microphone handover time point of the members participating in the first voice conversation, the server can also keep controlling the microphone order adjustment of the members participating in the second voice conversation. FIG. 14 depicts another exemplary server consistent with various disclosed embodiments. Further, considering the compatibility of the technology design of the disclosure with the control logic of the microphone order, as depicted in FIG. 14, coupled with the second sending unit 1302, the above server can further include a receiving unit 1402 and a parsing unit 1404.
The receiving unit 1402 is configured to receive a preset instruction corresponding to a microphone order adjusting operation. The microphone order adjusting operation includes at least one of the following: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position N, where N is an integer and N>=1.
The parsing unit 1404 is configured to parse the received preset instruction to obtain the corresponding microphone order adjusting operation, and to obtain, based on the parsed microphone order adjusting operation, the adjusted speaking order of the member who needs microphone order adjustment as the destination order information.
The microphone order adjusting operations that occur frequently in the control logic of the microphone order can include: moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position 2. By implementing the control logic of the microphone order and packaging the corresponding adjusting operations of the microphone order, the technical effect of providing a convenient functional interface for users can be achieved.
Thus, through the receiving unit 1402 and the parsing unit 1404, and in combination with the above second sending unit 1302, the same technical effect can be achieved. For example, the receiving unit 1402 can receive the preset instruction corresponding to the adjusting operation of the microphone order. The preset instruction can have the following format:
 (Operation command, ID of adjusting member) 
The operation command can be a 16-bit number, where different adjusting operations of the microphone order correspond to different numbers. The ID of the adjusting member can be a 32-bit number configured to represent the ID information of the member who needs microphone order adjustment, i.e., the ID of the adjusting member can be parsed into the ID information of the above member.
For example, in one scenario, the operation command "402" can represent the adjustment of moving a member upward. Thus, the parsing unit 1404 can further parse this preset instruction (as in Step 804) to obtain the adjusted order of the member. For example, the 5th position before the adjustment can be adjusted to the 4th position. Then, through the second sending unit 1302, the 4th position becomes the destination order information of the member and is sent to the client terminal together with the member ID information.
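As an illustration of how such a preset instruction might be parsed, the sketch below (Python) packs the 16-bit operation command and the 32-bit ID of the adjusting member into a single integer and derives the destination order. The packing layout, the command values other than "402", and the helper names are assumptions for this sketch rather than a definitive implementation.

    # Sketch: a preset instruction of the form (operation command, ID of adjusting member).
    MOVE_UP = 402    # from the example above: move the member up by one position
    MOVE_DOWN = 403  # hypothetical command value
    MOVE_TO_2 = 404  # hypothetical command value: move to position 2

    def pack_instruction(command: int, member_id: int) -> int:
        """Pack a 16-bit command and a 32-bit member ID into one integer (assumed layout)."""
        assert 0 <= command < (1 << 16) and 0 <= member_id < (1 << 32)
        return (command << 32) | member_id

    def parse_instruction(raw: int) -> tuple:
        """Recover (command, member_id) from the packed instruction."""
        return (raw >> 32) & 0xFFFF, raw & 0xFFFFFFFF

    def destination_order(command: int, current_order: int) -> int:
        """Apply the parsed adjusting operation to the member's current 1-based position."""
        if command == MOVE_UP:
            return max(1, current_order - 1)
        if command == MOVE_DOWN:
            return current_order + 1
        if command == MOVE_TO_2:
            return 2
        raise ValueError(f"unknown operation command {command}")

    # Command 402 moves a member from the 5th to the 4th position, as in the text.
    cmd, member_id = parse_instruction(pack_instruction(MOVE_UP, 1001))
    print(member_id, destination_order(cmd, current_order=5))  # -> 1001 4

The 4th position obtained here would then be sent, together with the member ID, as the destination order information.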
FIG. 18 depicts another exemplary method for controlling microphone order consistent with various embodiments. As shown in FIG. 18, the method can include the following steps:
In Step 1802, the client side receives a third message (sent by a server) that is configured to indicate a participation in a third voice conversation (i. e. , a speaking status of a client side is switched from “waiting to speak” to “ready to speak” ) , and switches the speaking status of the client side from “waiting to speak” to “ready to speak” based on the third message. The client side and the server are synchronized in a timeline.
In Step 1804, the client side receives a microphone handover time point, notified by the server, corresponding to the client side, and switches the speaking status of the client side from “ready to speak” to “waiting to speak” when the microphone handover time point of the client side arrives. The microphone handover time point refers to a time point designated by the server along the timeline.
Because the client side receives the microphone handover time point from the server (i.e., a time point designated by the server in the timeline, or an absolute time point), combined with the microphone takeover instruction received from the server, the client side achieves accurate switching between the ready to speak status and the waiting to speak status based on the preset control logic and the synchronization with the server. On one hand, by receiving the microphone handover time point, the goal of separating the control logic of the microphone order from the hard coding at the client side is achieved. On the other hand, by using the unity of the absolute time, accurate control over the microphone handover time point of the client side is achieved. Thus, the client side is synchronized with the server in the timeline. In other words, the time of the client side and the time of the server are synchronized. In this scenario, the microphone handover time point designated by the server can be considered both a server time point designated by the server and a client side time point synchronized with the server. Thus, this method can achieve the technical effect of more accurate control logic of the microphone order on the basis of the separation of the control logic of the microphone order from the hard coding at the client side, so as to solve the technical problem in the current technology that the microphone handover time point of the client side is hard to control accurately under the design of separating the control logic of the microphone order from the hard coding at the client side.
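A minimal client-side sketch of this switching logic is given below (Python). The status strings, the clock-offset handling, and the polling loop are assumptions used only to illustrate switching at an absolute, server-designated time point on a synchronized timeline.

    import time

    WAITING_TO_SPEAK = "waiting to speak"
    READY_TO_SPEAK = "ready to speak"

    class ClientSide:
        def __init__(self, clock_offset_s: float = 0.0) -> None:
            # clock_offset_s models the synchronization with the server's timeline
            # (e.g., derived from the server time and the transmission time length).
            self.clock_offset_s = clock_offset_s
            self.status = WAITING_TO_SPEAK

        def now(self) -> float:
            """Local time mapped onto the server's timeline."""
            return time.time() + self.clock_offset_s

        def on_third_message(self) -> None:
            # Step 1802: the server indicates participation in the conversation.
            self.status = READY_TO_SPEAK

        def on_handover_time_point(self, handover_time_point: float) -> None:
            # Step 1804: wait for the absolute time point designated by the server,
            # then hand the microphone over.
            while self.now() < handover_time_point:
                time.sleep(0.01)
            self.status = WAITING_TO_SPEAK

    client = ClientSide()
    client.on_third_message()
    client.on_handover_time_point(handover_time_point=time.time() + 0.05)
    print(client.status)  # -> waiting to speak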
FIG. 19 depicts another exemplary method for controlling microphone order consistent with various disclosed embodiments. As depicted in FIG. 19, the above method for controlling microphone order can further include the following.
In Step 1902, the client side receives member ID information and destination order information (sent by the server) of a member who needs microphone order adjustment in a third sequence of members. The third sequence of members is configured to indicate the speaking order of members participating in the third voice conversation.
In Step 1904, the client side parses the member ID and the destination order information, and obtains the adjusted third sequence of members.
Thus, by the client side receiving the member ID information and the destination order information, this method achieves the goal of notifying the client side of the adjusting information of the microphone order. More specifically, the client side does not receive instructions corresponding to the adjusting operation of the microphone order from the server. Instead, the client side receives the member ID information and the destination order information of the member who needs microphone order adjustment directly. This method avoids the dependence on the hard coding to parse the preset instructions, and solves the problem that the hard coding at the client side has to be updated after the control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of separating the control logic of the microphone order from the hard coding at the client side.
In addition, under the microphone order adjusting function the client side only needs to receive information about the adjusted member, and does not need to receive information about members who are adjusted passively or the entire updated sequence of members. Thus, the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the usage efficiency of the computer network.
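To make the client-side update concrete, the following sketch (Python) applies a received pair of member ID and destination order to a locally held speaking sequence; the list representation and the function name are assumptions for illustration only.

    from typing import List

    def apply_order_adjustment(sequence: List[int], member_id: int,
                               destination_order: int) -> List[int]:
        """Move member_id to the 1-based destination_order; other members shift passively."""
        adjusted = [m for m in sequence if m != member_id]
        adjusted.insert(destination_order - 1, member_id)
        return adjusted

    # Example: member 1001 is moved from the 5th to the 4th position; the client
    # derives the full updated sequence locally from the two received values.
    third_sequence = [1005, 1004, 1003, 1002, 1001]
    print(apply_order_adjustment(third_sequence, member_id=1001, destination_order=4))
    # -> [1005, 1004, 1003, 1001, 1002]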
FIG. 15 depicts an exemplary client terminal consistent with various embodiments. As shown in FIG. 15, the client terminal can include a first receiving unit 1502 and a switching unit 1504.
The first receiving unit 1502 is configured to receive a third message (sent by a server) that is configured to indicate a participation in a third voice conversation (i.e., a speaking status of a client terminal is switched from “waiting to speak” to “ready to speak”). The client terminal and the server are synchronized in a timeline. The first receiving unit is further configured to receive a microphone handover time point notified by the server and corresponding to the client terminal. The microphone handover time point refers to a time point designated by the server along the timeline.
The switching unit 1504, coupled with the above first receiving unit 1502, is configured to switch the speaking status of the client terminal from “waiting to speak” to “ready to speak” based on the third message. The switching unit is further configured to switch the speaking status of the client terminal from “ready to speak” to “waiting to speak” when the microphone handover time point of the client terminal arrives.
Because the client terminal receives the microphone handover time point from the server (i.e., a time point designated by the server in the timeline, or an absolute time point), combined with the microphone takeover instruction received from the server, the client terminal achieves accurate switching between the ready to speak status and the waiting to speak status based on the preset control logic and the synchronization with the server. On one hand, by receiving the microphone handover time point, the goal of separating the control logic of the microphone order from the hard coding at the client terminal is achieved. On the other hand, by using the unity of the absolute time, accurate control over the microphone handover time point of the client terminal is achieved.
Thus, the client terminal is synchronized with the server in the timeline. In other words, the time of the client terminal and the time of the server are synchronized. In this scenario, the microphone handover time point designated by the server can be considered both a server time point designated by the server and a client terminal time point synchronized with the server. Thus, through the technology design of the disclosure, the technical effect of more accurate control logic of the microphone order can be achieved on the basis of the separation of the control logic of the microphone order from the hard coding at the client terminal, so as to solve the technical problem in the current technology that the microphone handover time point of the client terminal is hard to control accurately under the design of separating the control logic of the microphone order from the hard coding at the client terminal.
FIG. 16 depicts another exemplary client terminal consistent with various embodiments. As shown in FIG. 16, the client terminal can further include a second receiving unit 1602 and a second parsing unit 1604.
The second receiving unit 1602 is configured to receive member ID information and destination order information (sent by the server) of a member who needs microphone order adjustment in a third sequence of members. The third sequence of members is configured to indicate the speaking order of members participating in the third voice conversation.
The second parsing unit 1604 is configured to parse the member ID and the destination order information, and to obtain the adjusted third sequence of members.
Thus, by the client terminal receiving the member ID information and the destination order information, this method achieves the goal of notifying the client terminal of the adjusting information of the microphone order. More specifically, the client terminal does not receive instructions corresponding to the adjusting operation of the microphone order from the server. Instead, the client terminal receives the member ID information and the destination order information of the member who needs microphone order adjustment directly. This method avoids the dependence on the hard coding to parse the preset instructions, and solves the problem that the hard coding at the client terminal has to be updated after the control logic of the microphone order (related to the adjusting function of the microphone order) is updated. Thus, this method further achieves the technical effect of separating the control logic of the microphone order from the hard coding at the client terminal.
In addition, under the microphone order adjusting function the client terminal only needs to receive information about the adjusted member, and does not need to receive information about members who are adjusted passively or the entire updated sequence of members. Thus, the pressure of data transmission can be maintained at a relatively low level, so as to achieve the technical effect of improving the usage efficiency of the computer network.
FIG. 17 depicts an exemplary computer system consistent with various disclosed embodiments. The system includes a server 1702 and a plurality of client terminals 1704.
The client terminals 1704 are connected with the server 1702. The client terminals (or client sides) correspond to members participating in the same voice conversation.
As depicted in FIG. 17, the server 1702 and the client terminals 1704 form a computer system with a client-server structure. The server 1702 can be any server consistent with various disclosed embodiments. One or more of the plurality of client terminals 1704 can be any client terminal consistent with various disclosed embodiments.
The server 1702 can send information including the microphone takeover indication, through the first sending unit 902, to the one of the client terminals 1704 that currently owns the speaking privilege, i.e., a member of the voice conversation. Then, this member can receive the message through the first receiving unit 1502, and finish the microphone takeover operation through the switching unit 1504 (switching from the waiting to speak status to the ready to speak status) based on the instruction from the server 1702.
On the other hand, the server 1702 can notify the member of the microphone handover time point through the notifying unit 904. The member can receive the microphone handover time point through the first receiving unit 1502. When the microphone handover time point arrives, the member can finish the microphone handover operation through the switching unit 1504 (switching from the ready to speak status to the waiting to speak status).
In the above scenario, because the server 1702 sends the microphone handover time point (i.e., a time point designated by the server in the timeline, or an absolute time point) to the client terminal 1704, combined with the microphone takeover instruction sent from the server 1702, the client terminal 1704 achieves accurate switching between the ready to speak status and the waiting to speak status based on the preset control logic and the synchronization with the server 1702. On one hand, by sending the microphone handover time point, the goal of separating the control logic of the microphone order from the hard coding at the client terminal 1704 is achieved. On the other hand, by using the unity of the absolute time, accurate control over the microphone handover time point of the client terminal 1704 is achieved. Thus, the client terminal is synchronized with the server in the timeline. In other words, the time of the client terminal and the time of the server are synchronized. In this scenario, the microphone handover time point designated by the server can be considered both a server time point designated by the server and a client terminal time point synchronized with the server. Thus, through the technology design of the disclosure, the technical effect of more accurate control logic of the microphone order can be achieved on the basis of the separation of the control logic of the microphone order from the hard coding at the client terminal, so as to solve the technical problem in the current technology that the microphone handover time point of the client terminal is hard to control accurately under the design of separating the control logic of the microphone order from the hard coding at the client terminal.
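The server-side half of this exchange can be sketched as follows (Python). The class and method names, the stubbed client, and the 60-second takeover length are assumptions for this sketch; a real client would switch its speaking status as described above rather than merely recording what it receives.

    import time

    class ClientStub:
        """Records what the server sends; a real client would switch status accordingly."""
        def __init__(self) -> None:
            self.status = "waiting to speak"
            self.handover_time_point = None

        def on_takeover_indication(self) -> None:
            self.status = "ready to speak"

        def on_handover_time_point(self, t: float) -> None:
            self.handover_time_point = t

    class ServerSketch:
        def __init__(self, takeover_length_s: float = 60.0) -> None:
            self.takeover_length_s = takeover_length_s  # preset microphone takeover time length

        def grant_microphone(self, client: ClientStub) -> float:
            takeover_time_point = time.time()   # the member becomes "ready to speak" now
            client.on_takeover_indication()     # models the first sending unit 902
            # Handover time point = takeover time point + preset takeover time length,
            # expressed as an absolute point on the shared timeline.
            handover_time_point = takeover_time_point + self.takeover_length_s
            client.on_handover_time_point(handover_time_point)  # models the notifying unit 904
            return handover_time_point

    member = ClientStub()
    ServerSketch(takeover_length_s=60.0).grant_microphone(member)
    print(member.status, member.handover_time_point is not None)  # -> ready to speak True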
It should be noted that, in the present disclosure, each embodiment is described progressively, i.e., each description focuses on the differences between embodiments. Similar and/or identical portions of the various embodiments can be referred to with each other. In addition, exemplary apparatus (e.g., a server, a client terminal) is described with respect to corresponding methods.
The disclosed methods, servers, client terminals and/or systems can be implemented in a suitable computing environment. The disclosure can be described with reference to symbol (s) and step (s) performed by one or more computers, unless otherwise specified. Therefore, steps and/or implementations described herein can be described for one or more times and executed by computer (s) . As used herein, the term “executed by computer (s) ” includes an execution of a computer processing unit on electronic signals of data in a structured type. Such execution can convert data or maintain the data in a position in a memory system (or storage device) of the computer, which can be reconfigured to alter the execution of the computer as appreciated by those skilled in the art. The data structure maintained by the data includes a physical location in the memory, which has specific properties defined by the data format. However, the embodiments described herein are not limited. The steps and implementations described herein may be performed by hardware.
A person of ordinary skill in the art can understand that the modules included herein are described according to their functional logic, but are not limited to the above descriptions as long as the modules can implement corresponding functions. Further, the specific name of each functional module is used to distinguish one module from another without limiting the protection scope of the present disclosure.
As used herein, the term "module" can be software objects executed on a computing system. A variety of components described herein including elements, modules, units, engines, and services can be executed in the computing system. The disclosed methods, servers, and/or client terminals can be implemented in a software manner. Of course, the disclosed  methods, servers, and/or client terminals can be implemented using hardware. All of which are within the scope of the present disclosure.
In various embodiments, the disclosed modules can be configured in one apparatus (e. g. , a processing unit) or configured in multiple apparatus as desired. The modules disclosed herein can be integrated in one module or in multiple modules. Each of the modules disclosed herein can be divided into one or more sub-modules, which can be recombined in any manner.
One of ordinary skill in the art would appreciate that suitable software and/or hardware (e. g. , a universal hardware platform) may be included and used in the disclosed methods and systems. For example, the disclosed embodiments can be implemented by hardware only, which alternatively can be implemented by software products only. The software products can be stored in a computer-readable storage medium including, e. g. , ROM/RAM, magnetic disk, optical disk, etc. The software products can include suitable commands to enable a terminal device (e. g. , including a mobile phone, a personal computer, a server, or a network device, etc. ) to implement the disclosed embodiments.
Note that the terms "comprising", "including", or any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus containing a number of elements includes not only those elements, but also other elements that are not expressly listed, or further includes elements inherent to the process, method, article, or apparatus. Without further restrictions, the statement "includes a ......" does not exclude other elements included in the process, method, article, or apparatus having those elements.
The embodiments disclosed herein are exemplary only. Other applications, advantages, alterations, modifications, or equivalents to the disclosed embodiments are obvious to those skilled in the art and are intended to be encompassed within the scope of the present disclosure.
INDUSTRIAL APPLICABILITY AND ADVANTAGEOUS EFFECTS
Without limiting the scope of any claim and/or the specification, examples of industrial applicability and certain advantageous effects of the disclosed embodiments are listed for illustrative purposes. Various alterations, modifications, or equivalents to the technical solutions of the disclosed embodiments can be obvious to those skilled in the art and can be included in this disclosure.
Methods, servers, client terminals, and computer systems for controlling microphone order are provided. A server sends a first message that is configured to indicate a participation in a first voice conversation to a first client side (i.e., when a speaking status of the first client side is switched from “waiting to speak” to “ready to speak”). The first client side and the server are synchronized in a timeline. The server notifies the first client side, or all members participating in the first voice conversation (including the first client side), of a microphone handover time point corresponding to the first client side. The microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives. The microphone handover time point refers to a time point designated by the server along the timeline. Thus, the technical effect of more accurate control logic of the microphone order is achieved on the basis of the separation of the control logic of the microphone order from the hard coding at the client side, so as to solve the technical problem in the current technology that the microphone handover time point of the client side is hard to control accurately under the design of separating the control logic of the microphone order from the hard coding at the client side.
REFERENCE SIGN LIST
Server 100
Client terminal 102
Client terminal 104
Client terminal 106
Client terminal 108
Client terminal 110
First sending unit 902
Notifying unit 904
Synchronizing unit 1002
Calibrating module 1102
Notifying module 1104
Obtaining unit 1202
Second sending unit 1302
Receiving unit 1402
First receiving unit 1502
Switching unit 1504
Second receiving unit 1602
Second parsing unit 1604
Server 1702
Client terminal 1704
Environment 2000
Communication network 2002
Server 2004
Terminal 2006
Computing system 2100
Processor 2102
Storage medium 2104
Monitor 2106
Communications 2108
Database 2110
Peripherals 2112
Bus 2114

Claims (14)

  1. A method for controlling microphone order, comprising:
    sending, by a server, a first message that is configured to indicate a participation in a first voice conversation to a first client side when a speaking status of the first client side is switched from “waiting to speak” to “ready to speak” , wherein the first client side and the server are synchronized in a timeline; and
    notifying, by the server, the first client side or all members participating in the first voice conversation including the first client side of a microphone handover time point corresponding to the first client side, wherein the microphone handover time point is configured to indicate that the speaking status of the first client side is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline for microphone handover in the first voice conversation.
  2. The method according to claim 1, wherein, after the notifying the first client side of the microphone handover time point by the server, the method further comprises:
    sending, by the server, a second message that is configured to indicate the participation in the first voice conversation to a second client side when a speaking status of the second client side is switched from “waiting to speak” to “ready to speak” and when the server reaches the microphone handover time point, wherein the second client side is located next to the first client side in a first sequence of members, wherein the first sequence of members is configured to indicate a speaking order of members participating in the first voice conversation; and
    notifying, by the server, the second client side or all members participating in the first voice conversation including the first client side and the second client side of the microphone handover time point corresponding to the second client side.
  3. The method according to claim 1, wherein, before the notifying the first client side of the microphone handover time point by the server, the method further comprises:
    obtaining, by the server, the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client side, wherein the microphone takeover time point refers to a time point along the timeline when the speaking status of the first client side is switched from “waiting to speak” to “ready to speak”.
  4. The method according to any claim of claims 1-3, wherein, before notifying the first client side of the microphone handover time point by the server, the method further comprises:
    notifying, by the server, the first client side of a client side time used for synchronization obtained from a server time and an information transmitting time length between the server and the first client side; or
    notifying, by the server, each member of the corresponding client side time used for synchronization obtained from the server time and the information transmitting time length between the server and each member participating in the first voice conversation including the first client side.
  5. The method according to claim 1, further comprising:
    sending, by the server, member identification (ID) information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation, wherein the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation; or
    sending, by the server, the member ID information and the destination order information of a member who needs microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation, wherein the second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
  6. The method according to claim 5, wherein, before sending the member ID information and the destination order information by the server, the method further comprises:
    receiving, by the server, a preset instruction corresponding to a microphone order adjusting operation, wherein the microphone order adjusting operation comprises at least one of moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, and moving the speaking order of the member who needs microphone order adjustment to position N, wherein N>=1;
    parsing, by the server, the received preset instruction to obtain corresponding microphone order adjusting operation, and
    obtaining the adjusted speaking order of the member who needs microphone order adjustment based on the parsed microphone order adjusting operation as the destination order information.
  7. A method for controlling microphone order, comprising:
    receiving a first message sent by a server, the first message being configured to indicate a participation in a first voice conversation (i.e., that a speaking status of a client side is switched from “waiting to speak” to “ready to speak”), and switching the speaking status of the client side from “waiting to speak” to “ready to speak” based on the first message, wherein the client side and the server are synchronized in a timeline; and
    receiving a microphone handover time point notified by the server corresponding to the client side, and switching the speaking status of the client side from “ready to speak” to “waiting to speak” when the microphone handover time point of the client side arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
  8. The method according to claim 7, further comprising:
    receiving member ID information and destination order information sent by the server of a member who needs microphone order adjustment in a first sequence of members, wherein the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation; and
    parsing the member ID information and the destination order information, and obtaining the adjusted first sequence of members.
  9. A server, comprising:
    a first sending unit configured to send a first message that is configured to indicate a participation in a first voice conversation to a first client terminal when a speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak” , wherein the first client terminal and the server are synchronized in a timeline; and
    a notifying unit configured to notify the first client terminal or all members participating in the first voice conversation including the first client terminal of a microphone handover time point corresponding to the first client terminal, wherein the microphone handover time point is configured to indicate that the speaking status of the first client terminal is switched from “ready to speak” to “waiting to speak” when the microphone handover time point arrives, wherein the microphone handover time point refers to a time point designated by the server along the timeline.
  10. The server according to claim 9, wherein:
    the first sending unit is further configured to send a second message that is configured to indicate the participation in the first voice conversation to a second client terminal when the speaking status of the second client terminal is switched from “waiting to speak” to “ready to speak” and when the server reaches the microphone handover time point, wherein the second client terminal is located next to the first client terminal in a first sequence of members, wherein the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation; and
    the notifying unit is further configured to notify the second client terminal or all members participating in the first voice conversation including the first client terminal and the second  client terminal of the microphone handover time point corresponding to the second client terminal.
  11. The server according to claim 9 further comprises:
    an obtaining unit configured to obtain the microphone handover time point based on a microphone takeover time point and a preset microphone takeover time length corresponding to the first client terminal, wherein the microphone takeover time point refers to a time point along the timeline when the speaking status of the first client terminal is switched from “waiting to speak” to “ready to speak”.
  12. The server according to any claim of claims 9-11 further comprises:
    a synchronizing unit configured to notify the first client terminal of a client terminal time used for synchronization obtained from a server time and an information transmitting time length between the server and the first client terminal, or to notify each member of the corresponding client terminal time used for synchronization obtained from the server time and the information transmitting time length between the server and each member  participating in the first voice conversation including the first client terminal.
  13. The server according to claim 9 further comprises:
    a second sending unit configured to send member ID information and destination order information of a member who needs microphone order adjustment in the first sequence of members to one or more members participating in the first voice conversation, or to send the member ID information and the destination order information of a member who needs  microphone order adjustment in a second sequence of members to one or more members participating in a second voice conversation, wherein the first sequence of members is configured to indicate the speaking order of members participating in the first voice conversation, wherein the second sequence of members is configured to indicate the speaking order of members participating in the second voice conversation.
  14. The server according to claim 13 further comprises:
    a receiving unit configured to receive a preset instruction corresponding to a microphone order adjusting operation, wherein the microphone order adjusting operation comprises at least one of moving up the speaking order of the member who needs microphone order adjustment, moving down the speaking order of the member who needs microphone order adjustment, moving the speaking order of the member who needs microphone order adjustment to a second position next to the current position; and
    a parsing unit configured to parse the received preset instruction to obtain corresponding microphone order adjusting operation, and to obtain the adjusted speaking order of the member who needs microphone order adjustment based on the parsed microphone order adjusting operation as the destination order information.
PCT/CN2014/085753 2013-09-12 2014-09-02 Methods and systems for controlling microphone order WO2015035865A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310416694.2A CN104468465B (en) 2013-09-12 2013-09-12 Wheat sequence controlling method, server, client and computer system
CN201310416694.2 2013-09-12

Publications (1)

Publication Number Publication Date
WO2015035865A1 true WO2015035865A1 (en) 2015-03-19

Family

ID=52665055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/085753 WO2015035865A1 (en) 2013-09-12 2014-09-02 Methods and systems for controlling microphone order

Country Status (2)

Country Link
CN (1) CN104468465B (en)
WO (1) WO2015035865A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104901820A (en) * 2015-06-29 2015-09-09 广州华多网络科技有限公司 System, device and method for speaking sequence control
CN105827498A (en) * 2015-01-05 2016-08-03 腾讯科技(深圳)有限公司 Multiplayer real-time interaction control method and device
CN112003711A (en) * 2020-07-31 2020-11-27 北京达佳互联信息技术有限公司 Wheat connecting method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105120306A (en) * 2015-08-28 2015-12-02 广州酷狗计算机科技有限公司 Microphone use duration control method and device
CN108495074B (en) * 2018-03-28 2021-02-02 武汉斗鱼网络科技有限公司 Video chat method and device
CN111369105A (en) * 2020-02-14 2020-07-03 广州酷狗计算机科技有限公司 Wheat order management method, device and system and computer storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100652650B1 (en) * 2004-07-28 2006-12-06 엘지전자 주식회사 System and method of providing push-to-talk service for synchronization in service shadow area
CN101562477A (en) * 2008-04-15 2009-10-21 北京易路联动技术有限公司 Method and system for time management, client and server based on mobile internet
CN102130774A (en) * 2011-04-27 2011-07-20 苏州阔地网络科技有限公司 Method and system for displaying microphone state of users in web conference

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101785329A (en) * 2007-08-20 2010-07-21 思科技术公司 Floor control over high latency networks in an interoperability and collaboration system
CN101159946A (en) * 2007-11-16 2008-04-09 中兴通讯股份有限公司 Floor control method of honeycomb push-to-talk service and honeycomb push-to-talk server
CN102075874A (en) * 2011-01-24 2011-05-25 北京邮电大学 Method and system for performing distributed queue control on speech right in PoC session

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827498A (en) * 2015-01-05 2016-08-03 腾讯科技(深圳)有限公司 Multiplayer real-time interaction control method and device
CN104901820A (en) * 2015-06-29 2015-09-09 广州华多网络科技有限公司 System, device and method for speaking sequence control
CN104901820B (en) * 2015-06-29 2018-11-23 广州华多网络科技有限公司 A kind of wheat sequence controlling method, device and system
CN112003711A (en) * 2020-07-31 2020-11-27 北京达佳互联信息技术有限公司 Wheat connecting method and device
CN112003711B (en) * 2020-07-31 2023-01-20 北京达佳互联信息技术有限公司 Wheat connecting method and device

Also Published As

Publication number Publication date
CN104468465A (en) 2015-03-25
CN104468465B (en) 2019-05-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14843575

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/07/2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14843575

Country of ref document: EP

Kind code of ref document: A1