CN109428859B - Synchronous communication method, terminal and server - Google Patents

Synchronous communication method, terminal and server

Info

Publication number
CN109428859B
CN109428859B (application CN201710744130.XA)
Authority
CN
China
Prior art keywords
communication
voice
terminal
interface
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710744130.XA
Other languages
Chinese (zh)
Other versions
CN109428859A (en)
Inventor
李斌
陈晓波
冉蓉
易薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710744130.XA priority Critical patent/CN109428859B/en
Priority to CN202210044304.2A priority patent/CN114244816B/en
Publication of CN109428859A publication Critical patent/CN109428859A/en
Application granted granted Critical
Publication of CN109428859B publication Critical patent/CN109428859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1069 Session establishment or de-establishment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/10 Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the invention discloses a synchronous communication method, a terminal and a server. The method comprises: starting an application interface according to a start touch instruction, and displaying, on the application interface, the corresponding first communication object and a virtual character group formed by the corresponding second communication objects in the history record; receiving a communication instruction and triggering a voice communication request according to it, so as to add the first communication object to any one communication session in the history record; when the voice communication request is permitted, completing the establishment of a first communication session between the first communication object and the second communication object on the application interface; and triggering the voice function in the first communication session of the application interface, sending first voice data triggered by the first communication object to a second terminal when the voice function is on, and synchronously displaying the current voice state and voice broadcast identifier of the established communication session on the application interfaces where the first and second communication objects are located.

Description

Synchronous communication method, terminal and server
Technical Field
The invention relates to a social application communication technology in the field of electronic application, in particular to a synchronous communication method, a terminal and a server.
Background
With the continuous development of science and technology, electronic products have become increasingly varied, and people enjoy the conveniences this brings through various kinds of electronic equipment or terminals and the functional applications installed on them. For example, a social application on a terminal can be used for Instant Messaging (IM) with distant friends and relatives over a network.
In instant messaging applications, the real-time chat conversation is a very important communication mode, more direct and immediate than modes such as text and pictures. Most current mainstream IM applications support real-time conversation in audio and video form, but the establishment process is complex: a conversation can be set up only when the initiator places a call and the receiver answers it. Conversation establishment and the real-time conversation itself are therefore passive, and human-computer interaction performance is limited.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention aim to provide a synchronous communication method, a terminal, and a server, which can flexibly establish and carry out synchronous communication, improving human-computer interaction performance.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides a synchronous communication method, which is applied to a first terminal and comprises the following steps:
starting an application interface according to a starting touch instruction, and displaying a virtual character group corresponding to a first communication object and a second communication object in a historical record on the application interface;
receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add the first communication object into any one communication session in the history record;
when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object in the application interface;
and triggering to start a voice function in the first communication session of the application interface, sending first voice data triggered by the first communication object to a second terminal when the voice function is started, and synchronously displaying the current voice state and voice broadcast identification of the established communication session in the application interface where the first communication object and the second communication object are located.
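The terminal-side steps above can be sketched in code. This is a minimal illustrative sketch only: the patent describes behavior, not an implementation, so every class, method, and field name below is an assumption, with a stub server standing in for the real one.

```python
# Hypothetical sketch of the terminal-side flow; all names are illustrative
# assumptions, not part of the patent.
class StubServer:
    """Stands in for the real server's request/permission exchange."""
    def request_voice_communication(self, session_id):
        # Simulates the voice communication request being permitted.
        return True

    def send_voice(self, session_id, voice_data):
        # Simulates forwarding voice data to the second terminal.
        return ("delivered", session_id, voice_data)

class FirstTerminal:
    def __init__(self, server):
        self.server = server
        self.established = set()

    def open_application_interface(self, history_record):
        # Step 1: launch the interface; show the first communication object
        # and the virtual character group from the history record.
        return {"first_object": "first_communication_object",
                "virtual_character_group": list(history_record)}

    def on_communication_instruction(self, session_id):
        # Step 2: trigger a voice communication request so the first
        # communication object joins a session from the history record.
        if self.server.request_voice_communication(session_id):
            # Step 3: establishment of the first communication session
            # completes once the request is permitted.
            self.established.add(session_id)
            return True
        return False

    def send_first_voice_data(self, session_id, voice_data):
        # Step 4: with the voice function on, send voice data toward the
        # second terminal via the server; voice-state synchronization is
        # handled on the server side.
        if session_id in self.established:
            return self.server.send_voice(session_id, voice_data)
        return None
```

Sending is refused for sessions that were never established, mirroring the requirement that the voice function operates only inside an established first communication session.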
The embodiment of the invention provides a synchronous communication method, which is applied to a server and comprises the following steps:
receiving a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record added by the first communication object;
when the application interface corresponding to the identifier of the first communication session is not found, responding to the voice communication request message, establishing the application interface corresponding to the identifier of the first communication session, and generating a voice interface establishment completion message;
sending the voice interface establishment completion message to the first terminal;
receiving a message for establishing a real-time data channel sent by the first terminal, and establishing a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
and when the real-time data channel is established, sending a communication permission message to the first terminal.
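The server-side steps above can likewise be sketched as follows. This is a minimal Python sketch under stated assumptions: the patent specifies only the message sequence, so the class, method, and message-field names are illustrative inventions.

```python
# Hypothetical sketch of the server-side steps; names are illustrative
# assumptions, not taken from the patent.
class SyncCommServer:
    def __init__(self):
        self.voice_interfaces = {}   # session identifier -> interface state
        self.data_channels = set()   # terminals with a real-time data channel

    def on_voice_communication_request(self, session_id):
        # If no application interface exists for the carried session
        # identifier, establish one; then generate the voice interface
        # establishment completion message for the first terminal.
        if session_id not in self.voice_interfaces:
            self.voice_interfaces[session_id] = {"state": "established"}
        return {"type": "voice_interface_establishment_complete",
                "session_id": session_id}

    def on_establish_realtime_data_channel(self, terminal_id):
        # Establish the real-time data channel with the first terminal and,
        # once it is up, send back the communication permission message.
        self.data_channels.add(terminal_id)
        return {"type": "communication_permitted", "terminal_id": terminal_id}
```

Note that a repeated request for the same session identifier reuses the existing interface rather than recreating it, matching the "when the application interface ... is not found" condition.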
An embodiment of the present invention provides a first terminal, including:
the display unit is used for starting an application interface according to a starting touch instruction, displaying a corresponding first communication object on the application interface and a virtual character group formed by corresponding second communication objects in a historical record;
a first receiving unit, configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record;
the communication unit is used for completing the establishment of a first communication session of the first communication object and the second communication object on the application interface when the voice communication request is allowed;
a starting unit, configured to trigger starting of a voice function in the first communication session of the application interface,
a first sending unit, configured to send first voice data triggered by the first communication object to a second terminal when the voice function is turned on,
the display unit is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located.
An embodiment of the present invention provides a server, including:
a second receiving unit, configured to receive a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record added by the first communication object;
an establishing unit, configured to establish, according to the voice communication request message, an application interface corresponding to the identifier of the first communication session when the application interface corresponding to the identifier of the first communication session is not found,
the generating unit is used for generating a voice interface establishment completion message;
a second sending unit, configured to send the voice interface establishment completion message to the first terminal;
the second receiving unit is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
the second sending unit is further configured to send a communication permission message to the first terminal when the real-time data channel is established.
An embodiment of the present invention provides a first computer-readable storage medium storing one or more programs, which are executable by one or more first processors to perform the terminal-side synchronous communication method.
An embodiment of the present invention provides a second computer-readable storage medium storing one or more programs, which are executable by one or more second processors to perform the server-side synchronous communication method.
The embodiment of the invention provides a synchronous communication method, a terminal, and a server. An application interface is started according to a start touch instruction, and the corresponding first communication object and a virtual character group formed by the corresponding second communication objects in the history record are displayed on the application interface; a communication instruction is received and a voice communication request is triggered according to it, so as to add the first communication object to any one communication session in the history record; when the voice communication request is permitted, establishment of a first communication session between the first communication object and the second communication object is completed on the application interface; the voice function is triggered in the first communication session of the application interface, first voice data triggered by the first communication object is sent to a second terminal when the voice function is on, and the current voice state and voice broadcast identifier of the established communication session are synchronously displayed on the application interfaces where the first and second communication objects are located.
By adopting this technical solution, any one communication session in the history record can be selected on the application interface, and the first terminal establishes a communication connection by sending a voice communication request message to the server. Once the connection is complete, i.e., the first terminal receives the communication permission message, the first terminal enters the application interface and can see which communication objects are present there. When the voice function corresponding to the first terminal or the first communication object is turned on, a voice call is carried out with the other communication objects, and by synchronizing the voice broadcast identifier and voice state of the first communication session to the application interface of the second terminal where the second communication object is located, the second terminal can display which communication session currently has voice communication in progress. The first terminal thus provides both an independent means of establishing a communication connection with the second terminal and a mechanism for independent voice communication, so that synchronous communication can be flexibly established and carried out, improving human-computer interaction performance.
Drawings
FIG. 1 is a diagram illustrating various hardware entities in a system for performing synchronous communications in accordance with an embodiment of the present invention;
fig. 2 is a diagram of an interaction architecture between a first terminal (terminal) and a server in an embodiment of the present invention;
fig. 3 is a first flowchart of a synchronous communication method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an exemplary current communication interface provided by embodiments of the present invention;
FIG. 5 is a first schematic diagram of an exemplary voice communication interface provided by an embodiment of the present invention;
fig. 6 is a second flowchart of a synchronous communication method according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram of an exemplary voice communication interface provided by an embodiment of the present invention;
FIG. 8 is a third schematic diagram of an exemplary voice communication interface provided by an embodiment of the present invention;
FIG. 9 is a fourth schematic diagram of an exemplary voice communication interface provided by an embodiment of the present invention;
fig. 10 is a flowchart of a synchronous communication method according to an embodiment of the present invention;
FIG. 11 is a fifth schematic diagram of an exemplary voice communication interface provided by an embodiment of the present invention;
fig. 12 is a flowchart of a synchronous communication according to another embodiment of the present invention;
FIG. 13 is an interaction diagram for synchronous communications according to an embodiment of the present invention;
fig. 14 is a first schematic structural diagram of a first terminal according to an embodiment of the present invention;
fig. 15 is a second schematic structural diagram of a first terminal according to an embodiment of the present invention;
fig. 16 is a first schematic structural diagram of a server according to an embodiment of the present invention;
fig. 17 is a third schematic structural diagram of a first terminal according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Fig. 1 is a schematic diagram of the hardware entities in a system for performing synchronous communication according to an embodiment of the present invention. Fig. 1 includes: one or more servers 2, terminals 1-1 to 1-5, and a network 3, where the network 3 includes network entities such as routers and gateways (not shown in the figure). The terminals 1-1 to 1-5 exchange information with the server through a wired or wireless network. The terminal types shown in fig. 1 include mobile phones (terminals 1-3), tablet computers or PDAs (terminals 1-5), desktop computers (terminals 1-2), PCs (terminals 1-4), all-in-one machines (terminals 1-1), and the like. Each terminal is installed with the applications the user requires, such as applications with entertainment functions (e.g., video applications, audio playing applications, game applications, and reading software) and applications with service functions (e.g., map navigation applications, group buying applications, shooting applications, financing applications, and communication applications).
In the embodiments of the present invention, a terminal (for example, a first terminal, a second terminal, or the like) having a communication application installed therein and a server corresponding to the communication application are described as examples. The following describes a connection structure of modules in the synchronous communication system in the following embodiment of the present invention, taking the first terminal and the server as an example.
Specifically, as shown in fig. 2, the server includes a real-time communication data module, a real-time communication signaling module, and a state center, and the client of the communication application in the first terminal may include a real-time communication interface module, wherein:
the real-time communication signaling module implements room management for real-time communication (establishing a voice communication interface), audio device management (turning the voice function on or off, etc.), state management of communication objects (lists of online status, etc.), signaling management with the terminal, and the like;
the real-time communication data module implements real-time transmission of the data types involved in real-time communication, such as voice data, face feature information, and functional objects;
the real-time communication interface module implements the arrangement and interface display of virtual characters in real-time communication;
the state center stores real-time signaling and real-time data.
It should be noted that the interaction between the first terminal and the server involves not only real-time data but also real-time signaling. Real-time signaling interaction is carried out between the first terminal and the server's real-time communication signaling module over a real-time signaling channel, while real-time data interaction is carried out between the first terminal and the server's real-time communication data module over a real-time data channel.
In this embodiment of the present invention, the real-time signaling channel may be a Transmission Control Protocol (TCP) channel, and the real-time data channel may be a User Datagram Protocol (UDP) channel.
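The embodiment names TCP for the real-time signaling channel and UDP for the real-time data channel. The sketch below shows how the two channel types differ when opened with Python's standard socket module; the addresses are placeholders and no actual connection is made, since the patent does not specify endpoints.

```python
# Illustrative sketch of the two channel types named above: TCP for
# signaling, UDP for real-time voice data. Addresses are placeholders.
import socket

SIGNALING_ADDR = ("127.0.0.1", 5000)  # placeholder signaling endpoint
DATA_ADDR = ("127.0.0.1", 5001)       # placeholder data endpoint

def open_signaling_channel():
    # TCP: reliable, ordered delivery suits signaling (session setup,
    # device management, state management).
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def open_data_channel():
    # UDP: connectionless, low-latency delivery suits real-time voice,
    # where occasional packet loss is preferable to added delay.
    return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
```

The design choice follows directly from the traffic: signaling must arrive completely and in order, while voice data is latency-sensitive and tolerates loss.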
Example one
An embodiment of the present invention provides a synchronous communication method, which is applied to a first terminal side, and as shown in fig. 3, the synchronous communication method may include:
s101, starting an application interface according to the touch starting instruction, and displaying a virtual character group corresponding to the first communication object and formed by corresponding second communication objects in the historical record on the application interface.
The synchronous communication method provided by the embodiment of the present invention is applicable to communication scenarios, and the communication application may be a social application for real-time communication, such as chat software for real-time chat and conversation. The embodiments of the present invention are not limited thereto.
The first terminal in the embodiment of the present invention may be an electronic device with a communication application installed, for example, a smart phone or a tablet. The embodiments of the present invention are not limited thereto.
In the embodiment of the present invention, when a first user wants to use a communication application, the first user touches an icon of the communication application, and then the first terminal receives a start touch instruction for the communication application, where the start touch instruction is used to start the communication application. Wherein the first user may be a user using a communication application on the first terminal.
Illustratively, the first user clicks on the icon of "communication application 1" to launch the communication application 1.
It should be noted that, in the embodiment of the present invention, the first user touches the communication application, and the touch operation for starting the communication application may be a click, a double click, a special gesture, or the like, which is not limited in the embodiment of the present invention.
After the first terminal receives the start touch instruction, it responds by loading the current communication interface of the communication application (the current communication interface is the application interface here). Once loading is complete, the first terminal displays, on the current communication interface, the first communication object of the logged-in communication application (represented as the first virtual character) and the virtual character group formed by the second communication objects in the history record. The history record is the history information of communication sessions involving the first communication object, and the first communication object may represent the first user who has logged in to the communication application.
It should be noted that, in the embodiment of the present invention, communication objects in the communication application may be presented primarily as virtual characters, each with its own identifier (i.e., the identifier of the user, which is also the identifier of the communication object). The specific avatar representation is not limited by the embodiments of the present invention.
Preferably, the virtual character in the embodiment of the present invention may be a three-dimensional virtual character; the specific implementation process is not limited in the embodiment of the present invention.
Further, in the embodiment of the present invention, when the communication application is triggered for the first time, the first terminal may display a login interface. On this interface the first user can register and set the related information of the first communication object; after logging in, the first user can communicate with other communication objects through functions such as adding friends. If the first user closes the communication application without logging out and later touches it again, then upon receiving the start touch instruction the first communication object is already logged in, so the first terminal can directly display the logged-in communication interface of the first communication object. That is, the first terminal responds to the start touch instruction, loads the current communication interface of the communication application, and displays on it the first communication object of the logged-in communication application and the history of its previous communications. Since communication objects in the embodiment of the present invention may be displayed as virtual characters, the first terminal displays the first virtual character on the current communication interface, and the history interface displays the virtual character group formed by the second communication objects that have communicated with the first communication object.
It should be noted that, in the embodiment of the present invention, communication objects represent different users who can communicate with each other, presented in the form of virtual characters with identifiers. The virtual character, identifier, and related settings of a communication object are configured when the user it represents logs in.
In the embodiment of the present invention, a virtual character setting template is provided in the communication application. During login, a user can set his or her virtual character through this template, independently selecting the character role of the corresponding communication object, the character's wearing, the character's nickname (i.e., identifier), and other information. The setting template consists of character images and related information designed in advance by a designer, for use when the user configures a virtual character at login.
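The template-based character setup described above can be modeled as a simple data structure. This is an illustrative sketch only; the field names (roles, outfits, nickname) are assumptions, since the patent does not define a data model.

```python
# Hypothetical data model for the virtual-character setting template:
# the user picks a role, wearing (outfit), and nickname (identifier)
# from designer-provided options. All names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualCharacterTemplate:
    roles: tuple     # character roles designed in advance
    outfits: tuple   # "wearing" options for the character

@dataclass
class VirtualCharacter:
    nickname: str    # identifier of the communication object
    role: str
    outfit: str

def create_character(template, nickname, role, outfit):
    # Validate the user's choices against the designer-provided template.
    if role not in template.roles or outfit not in template.outfits:
        raise ValueError("choice not offered by the template")
    return VirtualCharacter(nickname=nickname, role=role, outfit=outfit)
```

Validation against the template reflects the constraint that users choose among pre-designed images rather than supplying arbitrary ones.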
In the embodiment of the invention, the displayed image of the first communication object is the first virtual character, and the displayed image of the second communication object in the history record is represented as a virtual character group.
It should be noted that, in the embodiment of the present invention, there may be at least one second communication object that has communicated with the first communication object; the number of communication objects in the history record may therefore be one or more, and "virtual character group" is a general term. Because the display area of the current communication interface is limited, the virtual character groups of the communication objects in the history record may be scrolled through by sliding; the specific triggering manner, such as a sliding operation, is not limited in the embodiment of the present invention.
For the "virtual character group" mentioned in the embodiment of the present invention, there are two application scenarios. First, when the first communication object is the initiating party of the call (the calling party of the two communication parties), the virtual character group formed by the second communication objects may be called the second virtual character group, a general term for the group of second communication objects communicating with the first communication object. Second, when one of the second communication objects acts as the calling party and communicates with the first communication object, the communication objects in that object's history record (including the first communication object) form the first virtual character group. That is, a virtual character group is a general term for the group of virtual characters obtained by 3D-virtualizing the communication objects that make up a call.
Illustratively, as shown in fig. 4, after a first communication object A logged in to a communication application ("Second World") starts the application, the current communication interface displays the avatar of A's first virtual character together with the second communication objects in the history record, such as communication objects in a group chat and other virtual character groups. Since the history record may include group chats, the virtual character of the first communication object A is also displayed within the group chat on the current interface.
S102, receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add a first communication object into any one communication session in the history.
After the first terminal opens the application interface according to the start touch instruction, the interface displays the corresponding first communication object and the virtual character group (i.e., the second virtual character group) formed by the corresponding second communication objects in the history record. Because the first terminal displays the communication sessions of second communication objects communicated with before, the first user of the first terminal can directly select from the displayed history record the first communication session he or she wants, i.e., touch any one communication session (the first communication session) in the history record. The first terminal thus receives a communication instruction for the first communication session and triggers a voice communication request according to it, so as to add the first communication object to the first communication session. Specifically, the first terminal responds to the communication instruction by sending a voice communication request message to the server, receives the server's communication permission message in response, and joins the first communication object to the first communication session according to that message.
It should be noted that, after the first terminal receives the communication instruction for the first communication session, the first terminal may respond to the communication instruction; since the communication instruction instructs the first communication object to join the first communication session, the first terminal performs the process of joining the first communication session. Specifically, the first terminal needs to send a voice communication request message to the server, where the voice communication request message requests a communication connection with the first communication session. The server, in response to the voice communication request, sends a communication permission message to the first terminal and allows the first terminal to join the first communication session for voice communication. The communication permission message represents that the first terminal has established a communication connection with the server and that communication data can be exchanged.
It should be noted that, in the embodiment of the present invention, the server is the application server corresponding to the communication application. When the first terminal uses the communication application, it must interact over a communication connection with the server corresponding to the communication application; the specific interaction process will be described in detail in subsequent embodiments.
In the embodiment of the present invention, the voice communication request message carries the identifier of the first communication session, so that the server can perform communication connection with the first terminal through the identifier of the first communication session.
It should be noted that, in the embodiment of the present invention, the main communication mode between the communication objects may be voice communication, and each communication object is presented as a virtual character; however, auxiliary communication in other forms such as characters, pictures, or expressions may also be performed, and the embodiment of the present invention is not limited thereto.
Specifically, the specific implementation process of S102 may include: the first terminal responds to the communication instruction and sends a voice communication request message to the server; receives a voice interface establishment completion message fed back by the server in response to the voice communication request message; sends a real-time data channel establishment message to the server according to the voice interface establishment completion message; and receives a communication permission message fed back by the server in response to the real-time data channel establishment message, where the communication permission message represents that the voice communication request is permitted. The specific process will be explained in subsequent embodiments.
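The four-step exchange of S102 can be sketched as a small client-side state machine. This is a minimal illustration only; the message type names (`voice_request`, `voice_interface_ready`, `establish_rt_channel`, `communication_permitted`) and dictionary fields are assumptions, not the patent's wire format.

```python
from enum import Enum, auto
from typing import Optional

class CallState(Enum):
    IDLE = auto()
    REQUEST_SENT = auto()        # voice communication request message sent
    CHANNEL_REQUESTED = auto()   # real-time data channel establishment message sent
    PERMITTED = auto()           # communication permission message received

class VoiceCallClient:
    """Client-side view of the four-step exchange in S102 (message names assumed)."""

    def __init__(self, session_id: str) -> None:
        self.session_id = session_id
        self.state = CallState.IDLE

    def send_voice_request(self) -> dict:
        # The request carries the identifier of the first communication session,
        # so the server can set up the connection for that session.
        self.state = CallState.REQUEST_SENT
        return {"type": "voice_request", "session_id": self.session_id}

    def on_server_message(self, msg: dict) -> Optional[dict]:
        if msg["type"] == "voice_interface_ready" and self.state is CallState.REQUEST_SENT:
            # Step 3: ask the server to establish the real-time data channel.
            self.state = CallState.CHANNEL_REQUESTED
            return {"type": "establish_rt_channel", "session_id": self.session_id}
        if msg["type"] == "communication_permitted" and self.state is CallState.CHANNEL_REQUESTED:
            # Step 4: voice communication is now permitted.
            self.state = CallState.PERMITTED
            return None
        raise ValueError(f"unexpected {msg['type']} in state {self.state.name}")
```

Once the client reaches `PERMITTED`, the terminal joins the first communication session and audio can flow over the real-time channel.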
It should be noted that, in the embodiment of the present invention, the first user may also select a new communication object for a communication session through an address book in the communication application, or start a communication session by adding a new communication object; the specific form through which the first user triggers a communication session is not limited. Regardless of the manner in which the first user triggers a communication session, the first terminal receives a communication instruction for that session, and the session then plays the role of the first communication session described above.
S103, when the voice communication request is permitted, completing establishment of the first communication session between the first communication object and the second communication object on the application interface.
After the first terminal triggers the voice communication request to the server according to the communication instruction and receives the communication permission message returned by the server in response to the voice communication request message, the first terminal can exchange communication data, because the communication permission message represents that the server permits the first terminal to join the first communication session, i.e., that the voice communication request is permitted and that the first terminal has established a communication connection with the server. The first terminal therefore completes establishment of the first communication session between the first communication object and the second communication object on the current application interface.
S104, triggering and starting a voice function in a first communication session of an application interface, sending first voice data triggered by a first communication object to a second terminal when the voice function is started, and synchronously displaying a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located.
After the first terminal completes establishment of the first communication session between the first communication object and the second communication object through an application interface (i.e., a voice communication interface), the first terminal already establishes a communication connection with the server, and can perform interaction of communication data with the second terminal through the server. The method comprises the steps that a first terminal triggers and starts a voice function in a first communication session of an application interface, when the voice function is started, first voice data triggered by a first communication object are sent to a second terminal, and the current voice state and the voice broadcast identification of the established communication session are synchronously displayed in the application interface where the first communication object and the second communication object are located. The voice state at this time is the voice state of the first virtual character, and the voice broadcast identifier at this time is the voice broadcast identifier of the first communication session.
Since the first virtual character and the second virtual character can both be displayed on the voice communication interface, the first communication object logged in on the first terminal can carry out voice communication with the second communication object in that interface. In the embodiment of the present invention, the voice communication interface may be provided with a voice function. When the voice function is turned on (i.e., triggered in the first communication session of the voice communication interface), the first communication object can perform real-time voice communication using the voice communication interface on the first terminal; the first user corresponding to the first communication object can then speak, so the first terminal receives the first voice data of the first communication object and can also send the first voice data to the second terminal. Because the first communication object and the second communication object are both displayed in the voice communication interface, i.e., more than one communication object is displayed, when one of them speaks, that is, when the first terminal receives voice data of a communication object, the first terminal can synchronously display which communication object is speaking. In other words, when the first voice data of the first communication object is received at the first terminal, the first terminal synchronously displays the voice state of the first virtual character.
Because the first virtual character is the presentation form of the first communication object, when the voice state of the first virtual character on the voice communication interface shows it to be in a call, this represents that the first communication object is speaking. Meanwhile, the first terminal synchronously displays the voice broadcast identifier of the first communication session; the voice broadcast identifier and the voice state are synchronized to the second terminal through the server, so that the second terminal can also display the voice broadcast identifier and voice state of the first communication session. In this way, the user corresponding to the second terminal can know which communication session has voice communication in progress and which communication object in that session is speaking, which facilitates the user's subsequent choice of whether to join the voice communication. Here, the second terminal is the terminal device corresponding to an online second communication object in the first communication session.
In the embodiment of the invention, when the voice function is on but the first communication object is not speaking, an online second communication object may be carrying out voice communication. The second terminal on which that second communication object is logged in can send second voice data to the server in real time, and the server forwards the second voice data to the first terminal in real time, so that the first terminal can receive and play the voice data of the second communication object on the current voice communication interface, realizing real-time voice communication.
It should be noted that, in the embodiment of the present invention, the voice state of the first avatar and the voice broadcast identifier of the first communication session are also synchronized on the second terminal through the server.
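The server-side relay of audio together with the synchronized voice state and voice broadcast identifier can be sketched as follows. This is an illustrative simplification; the class, field names, and return shape are assumptions rather than the patent's actual protocol.

```python
class VoiceStateRelay:
    """Server-side sketch: relay a speaker's audio and synchronize the avatar's
    voice state plus the session's voice broadcast identifier to every other
    online terminal. All names here are illustrative, not from the patent."""

    def __init__(self) -> None:
        self.speaking = {}  # session_id -> set of ids of currently speaking objects

    def on_voice_data(self, session_id, sender_id, frame, online_terminals):
        self.speaking.setdefault(session_id, set()).add(sender_id)
        state = {
            "session_id": session_id,
            # drives the sound-wave-shaped call identifier beside each avatar
            "speaking_objects": sorted(self.speaking[session_id]),
            # shown beside the session in the history list on every terminal
            "voice_broadcast": True,
        }
        # forward the audio frame and the synchronized state to the other terminals
        return [(terminal, frame, state) for terminal in online_terminals
                if terminal != sender_id]

    def on_voice_end(self, session_id, sender_id) -> bool:
        self.speaking.get(session_id, set()).discard(sender_id)
        # the broadcast identifier stays on while anyone is still speaking
        return bool(self.speaking.get(session_id))
```

Each receiving terminal would play the frame and update the on-screen call identifier from `speaking_objects`.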
Optionally, in the embodiment of the present invention, the voice state of each virtual character may be expressed as a sound-wave-shaped call identifier, a text prompt, or the like. The representation of the voice state may be displayed near the virtual character's identifier, which facilitates recognizing which identity the virtual character represents; of course, embodiments of the invention are not limited thereto.
Illustratively, in the voice communication interface corresponding to the first communication session shown in fig. 5, the first virtual character is the first user's avatar. When the first user taps the voice function button on the voice communication interface displayed on the first terminal and the button lights up, the first user can carry out a voice call (i.e., real-time chat is opened). The first terminal then receives the first voice data of the first communication object representing the first user, and synchronously displays the voice state beside the identifier of the virtual character of the first communication object as calling, i.e., displays a sound-wave-shaped call identifier.
Further, the voice communication interface displayed by the first terminal may also be provided with other functions, such as adding communication friends, so that other communication objects can be invited into the real-time voice communication; in the voice communication interface shown in fig. 5, a dedicated key (shown as an icon) can serve as the function key for adding communication friends and other functions.
Further, when the voice function is opened, or the voice function of some communication object in the voice communication interface is opened, the display area of the first communication session displays the voice broadcast identifier to represent that someone in the first communication session is speaking. A communication object that has not yet joined (through its corresponding second terminal) can thus see directly, through the history record of its current communication interface, that someone in the first communication session is communicating, so users corresponding to other communication objects can join the voice communication by opening the first communication session. Moreover, the display area of the first communication session on the second terminal also displays the voice broadcast identifier synchronously, consistent with the voice broadcast identifier in the display area shown in fig. 4.
Illustratively, as shown in figs. 4 and 5, the voice broadcast identifier (an icon) is displayed next to the identifier ("buddy") of the first communication session.
Furthermore, after the first terminal synchronously displays the voice state of the first virtual character, the first terminal sends the first voice data of the first communication object to the server, so that the server can transmit the voice data in real time to the second terminal corresponding to the second communication object; the online second communication object can then listen to the voice of the first communication object in real time, realizing real-time communication between the communication objects.
It can be understood that, since the first communication session can be selected from the history on the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server, the first terminal can enter the voice communication interface once the connection is complete, i.e., once the first terminal receives the communication permission message, and can see which communication objects are in the voice communication interface. When the voice function corresponding to the first terminal or the first communication object is opened, a voice call is made to the other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so the second terminal can display which communication session has voice communication ongoing. In this way, the first terminal provides both an autonomous choice for establishing the communication connection with the server and a mechanism for independently carrying out voice communication; the establishment and realization of synchronous communication can be performed flexibly, and a new form in which the virtual character represents the identity of a communication object is provided in the voice communication interface, improving human-computer interaction.
Further, after S103, as shown in fig. 6, the method for synchronous communication according to the embodiment of the present invention may further include S105-S108, as follows:
S105, displaying, on the application interface, the first virtual character corresponding to the first communication object and the virtual character group.
In the embodiment of the present invention, after the first terminal triggers the voice communication request according to the communication instruction, the first terminal loads the interface for voice communication (i.e., the application interface at this time) according to the communication permission message (i.e., when the voice communication request is permitted). After the application interface completes establishment of the first communication session between the first communication object and the second communication object, the first terminal may display, on the interface for voice communication, the first virtual character corresponding to the first communication object and the second virtual character group.
It should be noted that, in the embodiment of the present invention, since each communication object is shown as a virtual character and the communication mode is mainly voice communication, the virtual characters of the communication objects in the current first communication session are displayed on the voice communication interface, and the first communication object has also joined the first communication session. Therefore, the voice communication interface of the first terminal displays the first virtual character and the second virtual characters in the second virtual character group, where a second virtual character is a virtual character in the second virtual character group corresponding to a second communication object, and a second communication object is a communication object in the first communication session other than the first communication object.
It should be noted that, in the embodiment of the present invention, when the first terminal responds to the communication permission message and enters the voice communication interface, the first communication session selected by the first user may be a group chat session; in that case the group chat already includes the first communication object, so the virtual characters of all communication objects of the first communication session, i.e., the first virtual character and the second virtual characters, are displayed on the voice communication interface. When the first communication session selected by the first user is with a single communication object, the second communication object in the first communication session is by definition a communication object different from the first communication object, so the first terminal enters the voice communication interface of the first communication object and the second communication object, and the first virtual character and the second virtual character are displayed.
In the embodiment of the present invention, the area corresponding to each virtual character carries an identifier representing that virtual character (i.e., the identifier of the communication object), so as to distinguish the identities of the communication objects when multiple virtual characters appear in the same voice communication interface.
It should be noted that, in the embodiment of the present invention, an online status prompt identifier, for example an online prompt lamp, may further be disposed in the area where the identifier of the virtual character is located. Which communication objects are online in the voice communication interface corresponding to the current first communication session can thus be judged through the online status prompt identifier; a communication object being online means that it has logged in to the communication application and entered the voice communication interface of the first communication session at that moment. The specific implementation form of the online status prompt identifier may be identifiers of different colors, a text prompt, and the like; the embodiment of the present invention is not limited thereto.
Specifically, the server maintains a list of all chat members (the list of online communication objects) in the current real-time room (voice communication interface) through real-time signaling management. When a communication object joins (i.e., opens the first communication session on its terminal, going online) or exits (i.e., closes the first communication session on its terminal, going offline), the server updates the chat member list in real time and then synchronizes it to the other members currently in the real-time room, so that the online status of each communication object, i.e., its online status prompt identifier, is updated in real time on the voice communication interface.
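The per-room member list described above can be sketched as follows. This is a minimal illustration under assumed names (`RealtimeRoom`, `join`/`leave`, the update dictionary); the patent does not specify the API.

```python
class RealtimeRoom:
    """Sketch of the server's per-room chat member list (names assumed): every
    join or exit updates the list, and the update is pushed to the remaining
    members so each terminal can light or dim its online prompt lamps."""

    def __init__(self, session_id: str) -> None:
        self.session_id = session_id
        self.online = set()  # ids of communication objects currently in the room

    def join(self, object_id: str) -> dict:
        self.online.add(object_id)
        return self._roster_update()

    def leave(self, object_id: str) -> dict:
        self.online.discard(object_id)
        return self._roster_update()

    def _roster_update(self) -> dict:
        # synchronized to every member still in the real-time room
        return {"session_id": self.session_id, "online": sorted(self.online)}
```

A terminal receiving the update would light the online prompt lamp for each id in `online` and dim the rest.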
For example, as shown in fig. 7, assume that the second communication objects in the first communication session ("buddy") of the first communication object are the virtual characters Cat and Li, and the first communication object is the first user's virtual character Bo. When the first user selects the communication session "buddy", the voice communication interface of "buddy" is entered, which displays the virtual characters and identifiers corresponding to Cat, Li, and Bo, each preceded by an online prompt lamp; a lit lamp indicates online and an unlit lamp indicates offline. As shown in fig. 7, the lamps in front of Cat and Bo are lit, meaning that Cat and Bo have entered the voice communication interface at this moment and can carry out real-time voice communication, while Li is not online, i.e., has not entered the voice communication interface at this moment.
In the embodiment of the present invention, the voice communication interface may also be characterized as a voice room of real-time voice communication entered by a communication object, and the embodiment of the present invention is not limited in specific form.
S106, receiving a communication function touch instruction and, in response to the communication function touch instruction, calling up a function selection interface in a second display area of the application interface.
After the first terminal displays the first virtual character and the virtual character group on the voice communication interface, a communication function touch key may be disposed on the displayed voice communication interface (application interface) in the embodiment of the present invention, where the communication function touch key is a function key through which other forms of communication with other communication objects can be realized. That is to say, in addition to the voice communication mode, other communication function touch keys may be disposed in the voice communication interface so that the first communication object and the second communication object can perform other types of communication. When the first user triggers the communication function touch key, the first terminal receives the communication function touch instruction and responds to it by communicating with the second communication object in the current voice communication interface. Specifically, the first terminal calls up the corresponding function selection interface in the second display area of the voice communication interface, and a specific function object is selected on the function selection interface, thereby implementing the communication function represented by that function object. The second display area is a partial area of the current voice communication interface; optionally, in this embodiment of the present invention, it may be the lowermost area of the voice communication interface.
Here, in the embodiment of the present invention, the voice function key, the add-communication-friend function key, the communication function touch key, and other touch keys may all be displayed in the second display area; that is, the touch keys are set and managed in a unified area, which is convenient for users to operate. Of course, the setting area of each touch key is not limited in the embodiments of the present invention.
For example, in the voice communication interface shown in fig. 5, touch keys such as the voice function key, the add-communication-friend function key, and the communication function touch key are all displayed in display area 1 (the second display area). In this case, the communication function touch key on the first terminal in the embodiment of the present invention may be an expression function key (shown as an icon), a character input key, or the like; the embodiment of the invention does not limit the function types of the communication function touch keys.
It should be noted that, in the embodiment of the present invention, the communication function touch key may be provided with a plurality of keys corresponding to different functions, or one communication function touch key may be provided with function objects having the same function and different expressions, and a specific setting or implementation manner is not limited in the embodiment of the present invention.
S107, receiving a selection instruction on the function selection interface and, in response to the selection instruction, displaying the function object selected by the selection instruction in correspondence with the first virtual character on the voice communication interface.
After the first terminal receives the communication function touch instruction, responds to it, and calls up the function selection interface in the second display area of the voice communication interface, there may be multiple ways of implementing the communication function corresponding to the touched key, so multiple function objects corresponding to the communication function can be displayed on the function selection interface, and the one to be implemented must be selected from them. That is, the first terminal receives the selection instruction on the function selection interface, responds to it, and displays the function object selected by the selection instruction in correspondence with the first virtual character on the voice communication interface.
In the embodiment of the present invention, the communication function touch key may be an individualized function key such as an expression key, and the expression form may be a preset emoticon or a preset limb-figure icon; the specific form is not limited in the embodiment of the present invention. The function object is a selected one of the preset emoticons or a selected one of the preset limb-figure icons.
In the embodiment of the invention, the first user selects one function object from the multiple function objects on the function selection interface, i.e., the first terminal receives a selection instruction. When the first user selects one of the preset emoticons, the selection instruction is an expression selection instruction, and the function object at this time is an expression object; when the first user selects one of the preset limb-figure icons, the selection instruction is a limb selection instruction, and the function object at this time is a limb object.
Specifically, when the selection instruction is a limb selection instruction, the first terminal responds to the selection instruction, synchronously maps the limb object selected by the selection instruction to the first virtual character, and then displays the first virtual character. And when the selection instruction is an expression selection instruction, responding to the selection instruction, calling out a function display area corresponding to the first virtual character on the voice communication interface, and displaying the selected expression object.
It should be noted that, in the embodiment of the present invention, after the first user selects a limb object, the first terminal may map the selected limb object on the first virtual character synchronously, that is, the first virtual character may implement the action of the selected limb object; and after the first user selects the expression object, the first terminal can display the selected expression object in the corresponding area of the first virtual character so as to change the expression mood of the first virtual character at the moment. The realization of the communication function can more vividly show the richness of the communication between the communication objects and the interest of the interaction.
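The two branches above (limb object mapped onto the avatar vs. expression object shown beside it) can be sketched as a small dispatch function. The dictionary field names (`kind`, `name`, `current_action`, `function_display`) are illustrative assumptions, not part of the patent.

```python
def apply_function_object(selection: dict, avatar: dict) -> dict:
    """Dispatch one selected function object (field names assumed): a limb
    object is mapped synchronously onto the avatar's animation, while an
    expression object is shown in the function display area beside it."""
    if selection["kind"] == "limb":
        # the avatar itself performs the action, e.g. "clapping"
        updated = dict(avatar, current_action=selection["name"])
        return {"function_display": None, "avatar": updated}
    if selection["kind"] == "expression":
        # the avatar is unchanged; the icon appears in the function display area
        return {"function_display": selection["name"], "avatar": avatar}
    raise ValueError(f"unknown function object kind: {selection['kind']}")
```

The same dispatch would run on the receiving terminal once the server forwards the function object, so both sides render the selection identically.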
Illustratively, when the selection instruction is an expression selection instruction, as shown in fig. 8, the function selection interface in display area 1 of the voice communication interface displays a plurality of expression objects. The first terminal selects one of them according to the selection instruction, for example a smile icon, and the function display area corresponding to the first virtual character on the first terminal then shows that smile icon.
The function display area in the embodiment of the present invention may be a display area near the corresponding virtual character, and the embodiment of the present invention is not limited, and may be reasonably arranged.
For example, when the selection instruction is a limb selection instruction, as shown in fig. 9, a plurality of limb objects are displayed on the function selection interface in display area 1 of the voice communication interface, and the first terminal selects one of them according to the selection instruction, for example "clapping"; the first virtual character (Bo) on the first terminal is then synchronously mapped to the clapping motion.
Further, in the embodiment of the present invention, the preset emoticon may also be set as an expression form of an emoticon function key, and the preset limb character icon may be set as an expression form of a limb function key, where the emoticon function key and the limb function key are two communication function touch keys, that is, are implemented by using the two communication function touch keys.
S108, sending the function object to the server.
After the first terminal receives the selection instruction on the function selection interface, responds to it, and displays the function object selected by the selection instruction in correspondence with the first virtual character on the voice communication interface, the first communication object logged in on the first terminal still needs to communicate with the second communication object in real time. Therefore, when the first user sends an expression, a limb action, or another such communication, the first terminal sends the function object to the server, so that the server can forward the function object to the second terminal corresponding to the second communication object in communication with the first communication object, realizing real-time communication between the first communication object and the second communication object. Specific implementations will be described in detail in subsequent embodiments.
In the embodiment of the invention, the preset emoticons mainly include smiles, anger, and the like; the preset limb-figure icons mainly include hugs, dances, and the like. When a communication object sends an emoticon or a limb-figure icon, the icon is converted into expression data or limb data and sent to the server; when the server forwards the expression data or limb data to the terminals of other communication objects, the data is restored to the corresponding expression or limb animation and applied to the corresponding virtual character.
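The icon-to-data conversion and its restoration on the receiving terminal can be sketched as follows. The wire format (`expression_data`/`limb_data` message types) and the animation table are assumptions for illustration; the patent leaves the encoding unspecified.

```python
# Hypothetical wire format: the sending terminal converts the selected icon
# into expression data or limb data; the receiving terminal restores the data
# to an animation applied to the sender's virtual character.

ANIMATIONS = {  # assumed mapping held by every terminal
    ("expression", "smile"): "smile-face-animation",
    ("expression", "anger"): "anger-face-animation",
    ("limb", "hug"): "hug-body-animation",
    ("limb", "dance"): "dance-body-animation",
}

def encode_function_object(kind: str, name: str, sender_id: str) -> dict:
    """Sender side: selected icon -> expression/limb data forwarded via the server."""
    return {"type": f"{kind}_data", "name": name, "sender": sender_id}

def restore_on_terminal(msg: dict) -> tuple:
    """Receiver side: data -> animation acting on the sender's virtual character."""
    kind = msg["type"].rsplit("_", 1)[0]  # "expression_data" -> "expression"
    return msg["sender"], ANIMATIONS[(kind, msg["name"])]
```

Because both sides share the animation table, only the small data message travels over the network rather than the animation itself.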
Further, as shown in fig. 10, after S103 and before S105, the method for synchronous communication according to the embodiment of the present invention may further include S109-S112, as follows:
and S109, acquiring a face image of the first communication object in real time, and displaying the face image in a first display area of the application interface.
The first terminal in the embodiment of the present invention has a real-time facial expression synchronization function when using the communication application. After the first terminal triggers the voice communication request according to the communication instruction, responds to the communication permission message, and loads the voice communication interface (application interface), that is, after the establishment of the first communication session between the first communication object and the second communication object is completed in the application interface, the first terminal turns on its front camera or front image capturing device, starts to capture, in real time, the face image of the first user represented by the first communication object, and displays the face image in the first display area of the voice communication interface (application interface).
It should be noted that, in the embodiment of the present invention, the function of synchronizing the real-time facial expressions is a function of synchronously mapping changes of actual human expressions or facial features and the like on the faces of the corresponding virtual characters.
In the embodiment of the present invention, the first display area may be an upper right area of the voice communication interface, and when the first communication object performs real-time communication with the second communication object, the face image of the first user corresponding to the first communication object is displayed in real time in the first display area.
And S110, recognizing the face feature information of the face image.
And S111, mapping the face feature information to the face feature of the first virtual character.
After the first terminal collects the face image of the first communication object, the first terminal can perform face recognition on the face image to obtain face feature information. Because the first terminal collects the face image corresponding to the first communication object in real time, the first terminal maps the recognized face feature information onto the face features of the first virtual character; that is, the actual facial expression or features of the first user can be reflected in real time on the face features of the first virtual character corresponding to the first communication object on the first terminal.
The face feature information adopted in the embodiment of the invention is a parameter for describing a face feature, also called a feature descriptor. The embodiment of the invention can extract the face feature information by locating face key points. The specific feature type can be selected according to different requirements and emphases, and several types can be used in combination to improve stability. Specifically, the face feature information of the face image at the first terminal may adopt at least one of scale-invariant feature transform (SIFT) features, histogram of oriented gradients (HOG) features, or speeded up robust features (SURF) extracted at the initial key point positions.
In the embodiment of the invention, the positioning of the face key points refers to accurately finding out the positions of the face key points through an algorithm. The face key points are key points with strong representation capability of the face, such as eyes, a nose, a mouth, a face contour and the like.
It should be noted that, in the embodiment of the present invention, the first terminal supports a face recognition and positioning technology. When positioning face key points, the target object to be recognized (i.e., the face image corresponding to the first communication object) is first acquired; when the terminal detects that the target object is a face image, the terminal may generate, according to a preset configuration, a target detection area for face recognition and positioning on the face image and label it, so that the labeled target detection area is displayed on the face image and the positioning of the face key points is implemented.
Optionally, the target detection area is a monitoring area set for performing target object detection, for example, a face detection frame, and the face detection frame may be in a shape of a rectangle, a circle, an ellipse, or the like.
In the following, the face feature information is taken to be an HOG feature value (also referred to as an HOG data feature), and the principle of obtaining the HOG feature in the embodiment of the present invention is as follows. The core idea of HOG is that the profile of a detected local object can be described by the distribution of intensity gradients or edge directions. The whole image is divided into small connected regions (called cells), each cell generates a histogram of the gradient directions or edge directions of its pixels, and the combination of these histograms represents the descriptor of the detected target object. To improve accuracy, the local histograms can be normalized: the intensity of a larger region of the image (called a block) is computed as a measure, and all cells in this block are then normalized with this measure. This normalization process achieves better invariance to illumination and shadow.
Compared with other descriptors, the descriptors obtained by HOG retain invariance to geometric and optical transformations (unless the object orientation changes). Therefore, the HOG descriptor is particularly suitable for the detection of human faces.
Specifically, the HOG feature extraction method is to perform the following process on an image:
1. graying (treating the image as a three-dimensional image in x, y, z (gray scale));
2. dividing into small cells (2 x 2);
3. calculating the gradient (i.e. orientation) of each pixel in each cell;
4. counting the gradient histogram (the number of pixels in each gradient direction) of each cell to form the descriptor of each cell.
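As an illustration only, the per-cell part of the steps above can be sketched in a few lines of Python. The patent fixes no cell size, bin count, or gradient operator, so the central-difference gradients, 8 orientation bins, and magnitude-weighted voting below are assumptions:

```python
import math

def cell_gradient_histogram(cell, bins=8):
    """Orientation histogram for one grayscale cell (a list of rows):
    per-pixel gradients via central differences, then a
    magnitude-weighted vote into an orientation bin (steps 3 and 4)."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):          # interior pixels only
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.atan2(gy, gx) % math.pi   # unsigned orientation in [0, pi)
            hist[int(angle / math.pi * bins) % bins] += magnitude
    return hist

# A 4x4 cell whose intensity ramps from left to right: every interior
# pixel has a purely horizontal gradient, so all votes land in bin 0.
cell = [[float(x) for x in range(4)] for _ in range(4)]
hist = cell_gradient_histogram(cell)
```

Block normalization, as described earlier, would then rescale groups of these per-cell histograms by the intensity of the enclosing block before concatenating them into the final descriptor.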
It should be noted that, in the embodiment of the present invention, the weight deviation amount may be calculated by a gradient descent method. In short, for given face key point positions, certain information at those positions is computed and assembled into a vector, i.e., the face feature information is extracted; regression is then performed on this vector, i.e., its components are combined, and finally a first offset of the key points from the true solution is obtained. There are many methods for extracting face feature information, including random forests, SIFT, and the like; the extracted face feature information can express the characteristics of the face at the current key point positions.
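The regression step described above, combining the components of the feature vector into an offset, can be illustrated with a minimal linear sketch. The weights and bias are assumed to have been learned offline (e.g., by gradient descent as mentioned); the numbers below are purely illustrative:

```python
def keypoint_offset(features, weights, bias):
    """Combine each value of the feature vector linearly to produce
    one offset of a key point from the true solution."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Hypothetical feature vector and learned parameters.
offset = keypoint_offset([1.0, 2.0, 0.5], [0.4, -0.1, 0.2], 0.05)
```

In practice one such regressor per coordinate of each key point would be applied iteratively, each iteration moving the estimated key point positions closer to the true ones.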
Illustratively, as shown in fig. 11b, the first terminal acquires the face image corresponding to the first communication object and displays it in the display area 2 (the first display area), and the first terminal extracts face feature information from the face image by using a face recognition technology, i.e., the face feature points shown by the dashed box (i.e., the target detection area) in fig. 11b. The first terminal then synchronously maps these features onto the facial features of the first virtual character on the voice communication interface, so that the first virtual character changes from the state shown in fig. 11a to the state shown in fig. 11b.
It can be understood that, because the first terminal can map the facial features of the actual person onto the facial features of the corresponding virtual character, the actual appearance of the actual communication user, i.e., the first user, can be represented in real time, which embodies the interest, personalization, and communication effect of real-time communication.
And S112, sending the face feature information to a server.
After the first terminal identifies the face feature information of the face image, the first terminal can send the face feature information to the server, so that the server can forward the face feature information to a second terminal corresponding to a second communication object, the face feature corresponding to the first virtual character can be synchronously displayed on the second terminal, and the effect of real-time communication is achieved.
It should be noted that the face feature information may be understood as facial expression data, which is obtained by turning on the front camera of the terminal, collecting face images, recognizing the current expression of the face, such as closed eyes or an open mouth, and representing the current expression with a string of feature data. In the embodiment of the invention, after the first communication object successfully joins the voice communication interface, if the number of communication objects in the voice communication interface is more than 2, the recognition of facial expressions (i.e., the real-time facial expression synchronization function) is started automatically; meanwhile, the facial expression data is transmitted to the other communication objects in real time, and the other communication objects restore the expression of the first virtual character of the first communication object according to the facial expression data.
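Two details of this note can be made concrete in a short sketch: the auto-start threshold (more than 2 communication objects) is stated in the text, while the feature-string format below is purely an assumption, since the patent only says the expression is "represented by a string of feature data":

```python
def should_start_face_sync(member_count):
    """Per the text above, real-time facial expression synchronization
    starts automatically once more than 2 communication objects are
    in the voice communication interface."""
    return member_count > 2

def encode_expression(eyes_closed, mouth_open):
    """Represent the recognized expression as a feature string.
    The 'eyes'/'mouth' key names are illustrative, not from the patent."""
    return f"eyes:{int(eyes_closed)};mouth:{int(mouth_open)}"
```

A receiving terminal would parse such a string and drive the corresponding blend shapes or animation states of the sender's virtual character.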
Example two
Based on the first implementation of the embodiment, an embodiment of the present invention provides a synchronous communication method, which is applied to a server side, and as shown in fig. 12, a process of establishing a communication connection between a server and a first terminal in the synchronous communication method may include:
S201, receiving a voice communication request message for a first communication session sent by a first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record joined by the first communication object.
S202, when the application interface corresponding to the identifier of the first communication session is not found, establishing the application interface corresponding to the identifier of the first communication session according to the voice communication request message, and generating a voice interface establishment completion message.
S203, sending the voice interface establishment completion message to the first terminal.
And S204, receiving the message for establishing the real-time data channel sent by the first terminal, and establishing the real-time data channel with the first terminal according to the message for establishing the real-time data channel.
S205, when the real-time data channel is established, a communication permission message is sent to the first terminal.
In the embodiment of the present invention, when a first communication object logged in on a first terminal wants to perform real-time communication with a second communication object in a first communication session, the first terminal, after receiving the communication instruction generated by the first user's touch for the first communication session, first needs to establish a communication connection with the server corresponding to the communication application in response to the communication instruction. The first terminal then sends a voice communication request message to the server (that is, the real-time communication signaling module of the server receives the voice communication request message for the first communication session sent by the first terminal), where the first communication session is any one of the communication sessions in the history record joined by the first communication object, and the voice communication request message carries the identifier of the first communication session. The server then checks whether a voice communication interface corresponding to the first communication session already exists (i.e., an application interface, which may also be considered a voice communication room). When the server does not find an existing voice communication interface corresponding to the first communication session, the real-time communication signaling module of the server establishes the corresponding voice communication interface using the identifier of the first communication session, and sends the access information (access IP) corresponding to the first communication session to the real-time communication data module to apply for data resources. After the real-time communication data module returns the allocated real-time data resources to the real-time communication signaling module, the real-time communication signaling module generates a voice interface establishment completion message and returns it to the first terminal; that is, the server sends the voice interface establishment completion message to the first terminal, and the first terminal receives the voice interface establishment completion message fed back by the server in response to the communication instruction. After receiving the voice interface establishment completion message, the first terminal may start establishing a real-time data channel with the server according to the message; that is, the first terminal sends a message for establishing a real-time data channel to the server. The server receives this message and establishes a real-time data channel with the first terminal accordingly, so that the server establishes a real-time data channel with the first communication object of the first terminal. When the real-time data channel is established, the server returns a communication permission message to the first terminal, i.e., the first terminal receives the communication permission message fed back by the server in response to the real-time data channel message, so that the first terminal can communicate in real time with the second communication object in the first communication session in the voice communication interface.
It should be noted that, in the embodiment of the present invention, the second terminal on which the second communication object is logged in may also establish a real-time data channel connection with the server in the same manner as the first terminal, so that the first terminal and the second terminal are in the same real-time voice communication interface; the first terminal and the second terminal can then exchange data with the server through their respective real-time data channels and have the data forwarded to the other side by the server, thereby implementing real-time communication.
Further, in the embodiment of the present invention, the first terminal sends the voice communication request message carrying the identifier of the first communication session to the server, and the server checks whether a voice communication interface (i.e., a voice communication room) corresponding to the first communication session already exists. When the server finds that the voice communication interface corresponding to the first communication session already exists, this indicates that some member of the first communication session is already in communication, so the server directly returns a voice interface establishment completion message to the first terminal; that is, the first terminal receives the voice interface establishment completion message fed back by the server in response to the communication instruction. After receiving this message, the first terminal starts establishing the real-time data channel, i.e., it sends a message for establishing a real-time data channel to the server.
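A minimal sketch of this server-side signaling flow (S201-S205) is shown below: look up or create the room for the session identifier, then register the terminal's real-time data channel and permit communication. The class, method, and message names are assumptions for illustration; the patent does not specify an API:

```python
class SignalingServer:
    """Illustrative in-memory model of the real-time communication
    signaling module; not the patent's actual implementation."""

    def __init__(self):
        self.rooms = {}  # session id -> set of terminals with data channels

    def handle_voice_request(self, session_id):
        """S201/S202: create the voice communication interface (room)
        only if none exists; either way, reply that it is ready (S203)."""
        if session_id not in self.rooms:
            self.rooms[session_id] = set()   # allocate the room
        return {"msg": "voice_interface_setup_complete", "session": session_id}

    def handle_channel_setup(self, session_id, terminal_id):
        """S204/S205: register the terminal's real-time data channel,
        then return the communication permission message."""
        self.rooms[session_id].add(terminal_id)
        return {"msg": "communication_permitted", "session": session_id}

server = SignalingServer()
reply1 = server.handle_voice_request("session-1")
reply2 = server.handle_channel_setup("session-1", "first-terminal")
```

Because `handle_voice_request` is idempotent, a second terminal requesting the same session simply joins the existing room, matching the "interface already exists" branch described above.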
Further, in the synchronous communication method provided by the embodiment of the present invention, after S205, the server may perform data interaction with the first terminal and the second terminal, specifically including S206-S208, as follows:
s206, receiving first voice data sent by the first terminal, and forwarding the first voice data to a second terminal, wherein the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in the history record.
And S207, receiving the face feature information sent by the first terminal, and forwarding the face feature information to the second terminal.
S208, receiving the function object sent by the first terminal, and forwarding the function object to the second terminal.
After the server establishes the real-time data channel with the first terminal, the server can perform data interaction with the first terminal. The first terminal can transmit various real-time data to the server in real time, such as the first voice data produced by the first communication object, the face features of the first virtual character, or a function object; the server then forwards the real-time data to the second terminals corresponding to the other communication objects in the same voice communication interface as the first communication object, where the second communication object is a communication object in the history record. In this way, after receiving the first voice data forwarded by the server, the second terminal can play it in real time; the face features of the first virtual character forwarded by the server are synchronously mapped onto the first virtual character; and after receiving the function object of the first terminal forwarded by the server, where the function object corresponds to the first virtual character, the second terminal can correspondingly display or respond to the function object on the first virtual character, the specific implementation process being consistent with the process by which the first terminal implements the function object. Therefore, when the first communication object on the first terminal speaks, performs real-time expression synchronization, or issues an emotion icon or the like, the first virtual character representing the first communication object on the second terminal correspondingly broadcasts the voice, performs real-time expression synchronization, or issues the emotion icon through the forwarding of the server, thereby realizing real-time communication between the corresponding communication objects on the first terminal and the second terminal.
It should be noted that, in the embodiment of the present invention, a second terminal corresponding to one second communication object may likewise send out second voice data or other communication forms, either the same as or in response to those of the first communication object. In this case, the server also forwards the real-time data generated by this second terminal to the first terminal and to the other second terminals corresponding to the other second communication objects, so that the real-time communication of this second communication object is synchronized for the other second communication objects and the first communication object to see, thereby completing the real-time communication among the communication objects in the first communication session.
Further, the server needs to synchronously forward the voice state of the first virtual character, the voice broadcast identifier of the first communication session, and the like to the second terminal, so that they are synchronously displayed on the second terminal.
It can be understood that, in the embodiment of the present invention, the server may provide the functions of establishing real-time communication and forwarding communication data for the first terminal and the second terminal, so that the first terminal and the second terminal can autonomously join a communication session and receive and send real-time data such as voice data, the establishment of synchronous communication and the implementation of synchronous communication can be flexibly performed, and the human-computer interaction performance is improved.
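The forwarding role described in S206-S208 and the notes above can be sketched as a simple fan-out: whatever one terminal in a voice communication interface sends (voice data, face feature information, or a function object) is relayed to every other terminal in the same interface. All names below are hypothetical:

```python
class RelayServer:
    """Illustrative sketch of the server's data-forwarding role;
    the patent specifies the behavior, not this implementation."""

    def __init__(self):
        # session id -> {terminal id: list of messages delivered to it}
        self.rooms = {}

    def join(self, session_id, terminal_id):
        self.rooms.setdefault(session_id, {})[terminal_id] = []

    def forward(self, session_id, sender_id, data):
        # Voice data, face feature information, and function objects are
        # all forwarded the same way: to everyone except the sender.
        for terminal_id, outbox in self.rooms[session_id].items():
            if terminal_id != sender_id:
                outbox.append(data)

relay = RelayServer()
relay.join("session-1", "first-terminal")
relay.join("session-1", "second-terminal")
relay.forward("session-1", "first-terminal", {"kind": "voice", "seq": 1})
```

The same `forward` path would also carry the voice state and voice broadcast identifier mentioned above, so every terminal in the session can render them synchronously.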
EXAMPLE III
An embodiment of the present invention provides a synchronous communication method, as shown in fig. 13, which takes as an example voice communication between a first terminal and a second terminal corresponding to communication objects belonging to a first communication session, where the communication application is assumed to be Second World. The method may include:
S301, when Second World on the first terminal is opened (a starting touch instruction of the communication application is received), the first terminal loads the current communication interface of Second World, and displays, on the current communication interface, the first virtual character of the first communication object logged in to the communication application and the virtual character group formed by the communication objects in the history record.
S302, the first terminal receives a communication instruction for adding the first communication object into the first communication session on the current communication interface.
And S303, the first terminal responds to the communication instruction and sends a voice communication request message to the server, wherein the voice communication request message carries the identifier of the first communication session.
S304, when the server does not find the voice communication interface corresponding to the identifier of the first communication session, the server establishes the voice communication interface corresponding to the identifier of the first communication session according to the voice communication request message and generates a voice interface establishment completion message.
S305, the server sends the voice interface establishment completion message to the first terminal.
S306, the first terminal establishes a completion message according to the voice interface and sends a message for establishing a real-time data channel to the server.
And S307, the server establishes a real-time data channel with the first terminal according to the message for establishing the real-time data channel.
And S308, when the real-time data channel is established, the server sends a communication permission message to the first terminal.
S309, the first terminal responds to the communication permission message, loads the voice communication interface, and displays the first virtual character and the virtual character group on the voice communication interface.
S310, triggering and starting a voice function in a first communication session of a voice communication interface, and when the voice function is started, receiving first voice data of a first communication object by a first terminal, and synchronously displaying a voice state of a first virtual character and a voice broadcast identifier of the first communication session.
S311, the first terminal sends the first voice data, the voice broadcast identification and the voice state to the server.
S312, the server forwards the first voice data, the voice broadcast identification and the voice state to the second terminal.
And S313, the second terminal synchronously displays the voice broadcast identification and the voice state.
It should be noted that the communication form between the first terminal and the second terminal may also be a real-time communication manner such as expressions or limbs, which is consistent with the process and principle of real-time receiving and sending of voice data, and is not described herein again.
Example four
Based on the implementation of the first to third embodiments, as shown in fig. 14, an embodiment of the present invention provides a first terminal 1, which corresponds to a synchronous communication method on a first terminal side, where the first terminal 1 includes:
the display unit 10 is used for starting an application interface according to a starting touch instruction, and displaying a corresponding first communication object and a virtual character group formed by corresponding second communication objects in a historical record on the application interface;
a first receiving unit 11, configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history;
the communication unit 12 is configured to complete establishment of a first communication session between the first communication object and the second communication object in the application interface when the voice communication request is allowed;
an initiating unit 13, configured to trigger to start a voice function in the first communication session of the application interface,
a first sending unit 14, configured to send first voice data triggered by the first communication object to a second terminal when the voice function is turned on,
the display unit 10 is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located.
Optionally, the display unit 10 is further configured to display, on the application interface, a first virtual character and the virtual character group corresponding to the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object.
Optionally, based on fig. 14, as shown in fig. 15, the first terminal 1 further includes: the acquisition unit 15, the identification unit 16 and the mapping unit 17;
the acquiring unit 15 is configured to acquire, in real time, a facial image of the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object and before the application interface displays the first virtual character and the virtual character group corresponding to the first communication object,
the display unit 10 is further configured to display the face image in a first display area of the application interface;
the recognition unit 16 is configured to recognize face feature information of the face image;
the mapping unit 17 is configured to map the facial feature information onto facial features of the first virtual character;
the first sending unit 14 is further configured to send the facial feature information to the server.
Optionally, the first receiving unit 11 is configured to receive a communication function touch instruction after the first avatar and the avatar group corresponding to the first communication object are displayed on the application interface,
the display unit 10 is further configured to respond to the communication function touch instruction and call a function selection interface in a second display area of the voice communication interface;
the first receiving unit 11 is further configured to receive a selection instruction on the function selection interface,
the display unit 10 is further configured to respond to the selection instruction, and display the functional object selected by the selection instruction and the first virtual character on the voice communication interface in a corresponding manner;
the first sending unit 14 is further configured to send the function object to the server.
Optionally, the display unit 10 is specifically configured to, when the selection instruction is a limb selection instruction, respond to the selection instruction, map the limb object selected by the selection instruction to the first virtual character synchronously, and then display the first virtual character.
Optionally, the display unit 10 is specifically configured to, when the selection instruction is an expression selection instruction, respond to the selection instruction, call up a function display area corresponding to the first virtual character on the voice communication interface, and display the selected expression object.
Optionally, the first sending unit 14 is specifically configured to send the voice communication request message to the server in response to the communication instruction;
the first receiving unit 11 is specifically configured to receive a voice interface establishment completion message that is fed back by the server in response to the communication instruction;
the first sending unit 14 is further specifically configured to send a message for establishing a real-time data channel to the server according to the message for establishing the voice interface;
the first receiving unit 11 is further specifically configured to receive the permission communication message that the server responds to the real-time data channel message feedback, where the permission communication message is used to characterize that the voice communication request is permitted.
It can be understood that, since communication for the first communication session in the history record can be selected on the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server, when the communication connection is completed, i.e., when the first terminal receives the communication permission message, the first terminal can enter the voice communication interface and see which communication objects are in it. When the voice function corresponding to the first terminal or the first communication object is turned on, a voice call is made to the other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so that the second terminal can display which communication session has voice communication in progress. Thus, the first terminal provides both an autonomous choice for establishing a communication connection with the server and a mechanism for carrying out voice communication autonomously, the establishment and realization of synchronous communication can be performed flexibly, a new form in which a virtual character represents the identity of a communication object is provided in the voice communication interface, and the human-computer interaction performance is improved.
As shown in fig. 16, an embodiment of the present invention provides a server 2, where the server 2 may include:
a second receiving unit 20, configured to receive a voice communication request message for a first communication session sent by the first terminal, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record joined by the first communication object;
a establishing unit 21, configured to, when the application interface corresponding to the identifier of the first communication session is not found, establish an application interface corresponding to the identifier of the first communication session according to the voice communication request message,
a generating unit 22, configured to generate a voice interface setup completion message;
a second sending unit 23, configured to send the voice interface establishment completion message to the first terminal;
the second receiving unit 20 is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing a real-time data channel;
the second sending unit 23 is further configured to send a communication permission message to the first terminal when the real-time data channel is completely established.
Optionally, the second receiving unit 20 is further configured to receive the first voice data sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the first voice data to a second terminal, where the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in a history record;
the second receiving unit 20 is further configured to receive the facial feature information sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the face feature information to the second terminal;
the second receiving unit 20 is further configured to receive the function object sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit 23 is further configured to forward the function object to the second terminal.
It can be understood that, in the embodiment of the present invention, the server may provide the functions of establishing real-time communication and forwarding communication data for the first terminal and the second terminal, so that the first terminal and the second terminal can autonomously join a communication session and receive and send real-time data such as voice data, the establishment of synchronous communication and the implementation of synchronous communication can be flexibly performed, and the human-computer interaction performance is improved.
EXAMPLE five
Based on the same inventive concept of the first to third embodiments, as shown in fig. 17, an embodiment of the present invention provides a first terminal, corresponding to a synchronous communication method on a first terminal side, where the first terminal includes: a first receiver 17, a first transmitter 18, a first memory 19, a first processor 110, a display 111, a camera 112, a player 114 and a first communication bus 113, wherein the first receiver 17, the first transmitter 18, the first memory 19, the display 111, the camera 112, the player 114 and the first processor 110 are connected through the first communication bus 113;
the display 111 is configured to start an application interface according to the start touch instruction, and display a virtual character group corresponding to the first communication object and a virtual character group formed by corresponding second communication objects in the history record on the application interface;
the first receiver 17 is configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record;
the first processor 110 calls the synchronous communication related program stored in the first memory 19, and executes: when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object in the application interface; triggering an open voice function in the first communication session of the application interface,
the first transmitter 18 is used for transmitting the first voice data triggered by the first communication object to the second terminal when the voice function is started,
the display 111 is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located;
the player 114 is configured to play the first voice data synchronously.
Optionally, the display 111 is further configured to display, on the application interface, the first virtual character and the virtual character group corresponding to the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object.
Optionally, the camera 112 is configured to acquire, in real time, a face image of the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object and before the application interface displays the first virtual character and the virtual character group corresponding to the first communication object,
the display 111 is further configured to display the facial image in a first display area of the voice communication interface;
the first processor 110 is further configured to identify face feature information of the face image; mapping the face feature information to the face features of the first virtual character;
the first transmitter 18 is further configured to send the facial feature information to the server.
Optionally, the first receiver 17 is configured to receive a communication function touch instruction after the first avatar and the avatar group corresponding to the first communication object are displayed on the application interface,
the display 111 is further configured to call a function selection interface in a second display area of the application interface in response to the communication function touch instruction;
the first receiver 17 is further configured to receive a selection instruction on the function selection interface,
the display 111 is further configured to respond to the selection instruction, and display the functional object selected by the selection instruction and the first virtual character on the voice communication interface in a corresponding manner;
the first transmitter 18 is further configured to transmit the function object to the server.
Optionally, the display 111 is specifically configured to, when the selection instruction is a limb selection instruction, respond to the selection instruction, map the limb object selected by the selection instruction to the first virtual character synchronously, and then display the first virtual character.
Optionally, the display 111 is specifically configured to, when the selection instruction is an expression selection instruction, respond to the selection instruction, call up a function display area corresponding to the first virtual character on the voice communication interface, and display the selected expression object.
Optionally, the first transmitter 18 is specifically configured to send the voice communication request message to the server in response to the communication instruction;
the first receiver 17 is specifically configured to receive a voice interface establishment completion message that the server responds to the communication instruction feedback;
the first transmitter 18 is further specifically configured to send a message for establishing a real-time data channel to the server according to the voice interface establishment completion message;
the first receiver 17 is further specifically configured to receive the permission communication message that is fed back by the server in response to the real-time data channel message, where the permission communication message is used to characterize that the voice communication request is permitted.
It can be understood that, since the communication of the first communication session in the history can be selected in the current communication interface, and the communication connection between the first terminal and the server is established by sending the voice communication request message to the server, when the communication connection is completed, that is, the first terminal receives the communication permission message, the first terminal can enter the voice communication interface and can see which communication objects are in the voice communication interface, so that when the voice function corresponding to the first terminal or the first communication object is opened, the voice call is performed on other communication objects, and the first terminal synchronizes the voice broadcast identifier of the first communication session to the second terminal through the server, so that the second terminal can display which communication session has the voice communication ongoing, so that the first terminal provides an autonomous selection for establishing the communication connection with the server, and a mechanism for independently carrying out voice communication is also provided, the establishment of synchronous communication and the realization of synchronous communication can be flexibly carried out, and a new realization form that the virtual character represents the identity of a communication object is provided in a voice communication interface, so that the man-machine interaction performance is improved.
As shown in fig. 18, an embodiment of the present invention provides a server, which may include: a second receiver 24, a second transmitter 25, a second processor 26, a second memory 27, and a second communication bus 28, wherein the second receiver 24, the second transmitter 25, the second memory 27, and the second processor 26 are connected through the second communication bus 28; the second processor 26 is configured to invoke the synchronous communication related program stored in the second memory 27.
The second receiver 24 is configured to receive a voice communication request message for a first communication session, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of the communication sessions in the history record added by the first communication object;
the second processor 26 is configured to, when the application interface corresponding to the identifier of the first communication session is not found, establish an application interface corresponding to the identifier of the first communication session according to the voice communication request message, and generate a voice interface establishment completion message;
the second transmitter 25 is configured to send the voice interface establishment completion message to the first terminal;
the second receiver 24 is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing a real-time data channel;
the second transmitter 25 is further configured to transmit a communication permission message to the first terminal when the real-time data channel is established.
Optionally, the second receiver 24 is further configured to receive the first voice data sent by the first terminal after the communication permission message is sent to the first terminal,
the second sender 25 is further configured to forward the first voice data to a second terminal, where the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in a history record;
the second receiver 24 is further configured to receive the facial feature information sent by the first terminal after the communication permission message is sent to the first terminal,
the second transmitter 25 is further configured to forward the facial feature information to the second terminal;
the second receiver 24 is further configured to receive the function object sent by the first terminal after the communication permission message is sent to the first terminal,
the second sender 25 is further configured to forward the function object to the second terminal.
It can be understood that, in the embodiment of the present invention, the server may provide the functions of establishing real-time communication and forwarding communication data for the first terminal and the second terminal, so that the first terminal and the second terminal can autonomously join a communication session and receive and send real-time data such as voice data, the establishment of synchronous communication and the implementation of synchronous communication can be flexibly performed, and the human-computer interaction performance is improved.
In practical applications, the Memory may be a volatile Memory (volatile Memory), such as a Random-Access Memory (RAM); or a non-volatile Memory (non-volatile Memory), such as a Read-Only Memory (ROM), a flash Memory (flash Memory), a Hard Disk (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memories and provides instructions and data to the processor.
The Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic devices used to implement the processor functions described above may be other devices, and embodiments of the present invention are not limited in particular.
EXAMPLE six
Each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on the understanding that the technical solution of the present embodiment essentially or a part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a computer-readable storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the method of the present embodiment. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiment of the invention provides a first computer readable storage medium, which is applied to a first terminal, and the first computer readable storage medium stores one or more programs, and the one or more programs can be executed by one or more first processors to realize the methods of the first embodiment and the third embodiment.
A second computer-readable storage medium is provided in an embodiment of the present invention, and is applied to a server, where the second computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more second processors to implement the methods in the first and third embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (19)

1. A synchronous communication method is applied to a first terminal and comprises the following steps:
starting an application interface according to a starting touch instruction, and displaying a virtual character group corresponding to a first communication object and a second communication object in a historical record on the application interface;
receiving a communication instruction, and triggering a voice communication request according to the communication instruction so as to add the first communication object into any one communication session in the history record; the voice communication request is used for requesting a server to allow the first terminal to join the first communication session indicated by the communication instruction;
when the voice communication request is allowed, completing the establishment of a first communication session of the first communication object and the second communication object in the application interface;
triggering and starting a voice function in the first communication session of the application interface, sending first voice data triggered by the first communication object to a second terminal when the voice function is started, and synchronously displaying the current voice state and voice broadcast identification of the established communication session in the application interface where the first communication object and the second communication object are located; the voice state and the voice broadcast identification are used for displaying the communication session of the voice call and the communication object of the voice call at the second terminal, so that the second terminal can select the voice communication.
2. The method of claim 1, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object, the method further comprises:
and displaying the first virtual character and the virtual character group corresponding to the first communication object on the application interface.
3. The method of claim 2, wherein after the application interface completes establishment of the first communication session between the first communication object and the second communication object and before the application interface displays the first avatar and the avatar group corresponding to the first communication object, the method further comprises:
acquiring a face image of the first communication object in real time, and displaying the face image in a first display area of the application interface;
identifying face feature information of the face image;
mapping the face feature information to the face features of the first virtual character;
and sending the face feature information to the server.
4. The method of claim 2, wherein after the application interface displays the first avatar and the avatar group corresponding to the first communication object, the method further comprises:
receiving a communication function touch instruction, responding to the communication function touch instruction, and calling a function selection interface in a second display area of the application interface;
receiving a selection instruction on the function selection interface, responding to the selection instruction, and correspondingly displaying the function object selected by the selection instruction and the first virtual character on the voice communication interface;
and sending the functional object to the server.
5. The method according to claim 4, wherein the displaying, in response to the selection instruction, the functional object selected by the selection instruction on the voice communication interface in correspondence with the first virtual character comprises:
and when the selection instruction is a limb selection instruction, responding to the selection instruction, synchronously mapping the limb object selected by the selection instruction to the first virtual character, and then displaying the first virtual character.
6. The method according to claim 4, wherein the displaying, in response to the selection instruction, the functional object selected by the selection instruction on the voice communication interface in correspondence with the first virtual character comprises:
and when the selection instruction is an expression selection instruction, responding to the selection instruction, calling a function display area corresponding to the first virtual character on the voice communication interface, and displaying the selected expression object.
7. The method of claim 1, wherein triggering a voice communication request according to the communication instruction comprises:
responding to the communication instruction, and sending the voice communication request message to the server;
receiving a voice interface establishment completion message fed back by the server in response to the communication instruction;
according to the voice interface establishment completion message, sending a message for establishing a real-time data channel to the server;
and receiving a permission communication message fed back by the server in response to the real-time data channel message, wherein the permission communication message is used for representing that the voice communication request is permitted.
8. A synchronous communication method, applied in a server, comprising:
receiving a voice communication request message for a first communication session sent by a first terminal, wherein the voice communication request message carries an identifier of the first communication session, and the first communication session is any one communication session in a history record added by a first communication object;
when the application interface corresponding to the identifier of the first communication session is not found, establishing an application interface corresponding to the identifier of the first communication session according to the voice communication request message, and generating a voice interface establishment completion message;
sending the voice interface establishment completion message to the first terminal;
receiving a message for establishing a real-time data channel sent by the first terminal, and establishing a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
when the real-time data channel is established, sending a communication permission message to the first terminal; the real-time data channel is used for the first communication object to carry out real-time communication with the second communication object in the first conversation session.
9. The method of claim 8, wherein after sending the communication-allowed message to the first terminal, the method further comprises:
receiving first voice data sent by the first terminal, and forwarding the first voice data to a second terminal, wherein the second terminal is a terminal device corresponding to an online second communication object in the first communication session, and the second communication object is a communication object in a history record;
receiving the face feature information sent by the first terminal, and forwarding the face feature information to the second terminal;
and receiving the functional object sent by the first terminal, and forwarding the functional object to the second terminal.
10. A first terminal, comprising:
the display unit is used for starting an application interface according to a starting touch instruction, displaying a corresponding first communication object on the application interface and a virtual character group formed by corresponding second communication objects in a historical record;
a first receiving unit, configured to receive a communication instruction, and trigger a voice communication request according to the communication instruction, so as to add the first communication object to any one of the communication sessions in the history record; the voice communication request is used for requesting a server to allow the first terminal to join the first communication session indicated by the communication instruction;
the communication unit is used for completing the establishment of a first communication session of the first communication object and the second communication object on the application interface when the voice communication request is allowed;
a starting unit, configured to trigger starting of a voice function in the first communication session of the application interface,
a first sending unit, configured to send first voice data triggered by the first communication object to a second terminal when the voice function is turned on,
the display unit is further configured to synchronously display a current voice state and a voice broadcast identifier of the established communication session in the application interface where the first communication object and the second communication object are located; the voice state and the voice broadcast identification are used for displaying the communication session of the voice call and the communication object of the voice call at the second terminal, so that the second terminal can select the voice communication.
11. The terminal of claim 10,
the display unit is further configured to display, on the application interface, a first virtual character and the virtual character group corresponding to the first communication object after the application interface completes establishment of the first communication session between the first communication object and the second communication object.
12. The terminal of claim 11, wherein the first terminal further comprises: the device comprises an acquisition unit, an identification unit and a mapping unit;
the acquisition unit is used for acquiring the face image of the first communication object in real time after the establishment of the first communication session between the first communication object and the second communication object is completed by the application interface and before the first virtual character and the virtual character group corresponding to the first communication object are displayed by the application interface,
the display unit is further used for displaying the face image in a first display area of the application interface;
the identification unit is used for identifying the face feature information of the face image;
the mapping unit is used for mapping the face feature information to the face features of the first virtual character;
the first sending unit is further configured to send the face feature information to the server.
13. The terminal of claim 11,
the first receiving unit is used for receiving a communication function touch instruction after the first virtual character and the virtual character group corresponding to the first communication object are displayed on the application interface,
the display unit is further used for responding to the communication function touch instruction and calling a function selection interface in a second display area of the application interface;
the first receiving unit is further used for receiving a selection instruction on the function selection interface,
the display unit is further used for responding to the selection instruction, and displaying the functional object selected by the selection instruction and the first virtual character on the voice communication interface correspondingly;
the first sending unit is further configured to send the functional object to the server.
14. The terminal of claim 13,
the display unit is specifically configured to respond to the selection instruction when the communication function touch instruction is a limb touch instruction, map a limb object selected by the selection instruction to the first virtual character synchronously, and display the first virtual character.
15. The terminal of claim 10,
the first sending unit is specifically configured to send the voice communication request message to the server in response to the communication instruction;
the first receiving unit is specifically configured to receive a voice interface establishment completion message that is fed back by the server in response to the communication instruction;
the first sending unit is further specifically configured to send a message for establishing a real-time data channel to the server according to the message for establishing the voice interface;
the first receiving unit is further specifically configured to receive a permission communication message that the server responds to the real-time data channel message feedback, where the permission communication message is used to characterize that the voice communication request is permitted.
16. A server, comprising:
a second receiving unit, configured to receive a voice communication request message for a first communication session, where the voice communication request message carries an identifier of the first communication session, and the first communication session is any one of communication sessions in a history record joined by a first communication object;
an establishing unit, configured to establish, according to the voice communication request message, an application interface corresponding to the identifier of the first communication session when no application interface corresponding to the identifier is found;
a generating unit, configured to generate a voice interface establishment completion message;
a second sending unit, configured to send the voice interface establishment completion message to the first terminal;
the second receiving unit is further configured to receive a message for establishing a real-time data channel sent by the first terminal, and establish a real-time data channel with the first terminal according to the message for establishing the real-time data channel;
the second sending unit is further configured to send a communication permission message to the first terminal when the real-time data channel is established; the real-time data channel is used for the first communication object to communicate in real time with a second communication object in the first communication session.
17. The server according to claim 16,
the second receiving unit is further configured to receive first voice data sent by the first terminal after the communication permission message is sent to the first terminal;
the second sending unit is further configured to forward the first voice data to a second terminal, where the second terminal is a terminal device corresponding to a second communication object that is online in the first communication session, and the second communication object is a communication object in the history record;
the second receiving unit is further configured to receive facial feature information sent by the first terminal after the communication permission message is sent to the first terminal;
the second sending unit is further configured to forward the facial feature information to the second terminal;
the second receiving unit is further configured to receive the function object sent by the first terminal after the communication permission message is sent to the first terminal,
the second sending unit is further configured to forward the function object to the second terminal.
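The relay behaviour recited in claim 17 — after the communication permission message, the server forwards voice data, facial feature information, and functional objects from the first terminal to each online second terminal in the same session — can be sketched as below. The `RelayServer` and `SecondTerminal` classes and all field names are illustrative assumptions; the claim does not specify the data structures.

```python
# Illustrative sketch of the claim-17 relay: one forwarding path handles
# voice data, facial feature information, and functional objects alike.

class RelayServer:
    def __init__(self):
        self.sessions = {}  # session id -> list of online second terminals

    def register(self, session_id, terminal):
        # Record a second terminal as online in the given session.
        self.sessions.setdefault(session_id, []).append(terminal)

    def forward(self, session_id, payload):
        # The payload may be voice data, facial feature information, or a
        # functional object; the relay logic is identical for all three.
        for terminal in self.sessions.get(session_id, []):
            terminal.inbox.append(payload)

class SecondTerminal:
    def __init__(self):
        self.inbox = []

server = RelayServer()
t2 = SecondTerminal()
server.register("session-1", t2)
server.forward("session-1", {"kind": "voice", "data": b"\x00\x01"})
server.forward("session-1", {"kind": "facial_features", "points": [1, 2]})
print(len(t2.inbox))  # 2
```

Keeping a single forwarding path for all three payload kinds mirrors the claim structure, where the same sending unit forwards voice data, facial feature information, and functional objects.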
18. A first computer-readable storage medium, wherein one or more programs are stored in the first computer-readable storage medium, the one or more programs being executable by one or more first processors to perform the synchronous communication method of any one of claims 1-7.
19. A second computer-readable storage medium, wherein one or more programs are stored in the second computer-readable storage medium, the one or more programs being executable by one or more second processors to perform the synchronous communication method of any one of claims 8-9.
CN201710744130.XA 2017-08-25 2017-08-25 Synchronous communication method, terminal and server Active CN109428859B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710744130.XA CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server
CN202210044304.2A CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710744130.XA CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210044304.2A Division CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN109428859A CN109428859A (en) 2019-03-05
CN109428859B true CN109428859B (en) 2022-01-11

Family

ID=65500295

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710744130.XA Active CN109428859B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and server
CN202210044304.2A Active CN114244816B (en) 2017-08-25 2017-08-25 Synchronous communication method, terminal and readable storage medium

Country Status (1)

Country Link
CN (2) CN109428859B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10616151B1 (en) 2018-10-17 2020-04-07 Asana, Inc. Systems and methods for generating and presenting graphical user interfaces
CN110384933B (en) * 2019-08-26 2023-08-11 网易(杭州)网络有限公司 Deployment control method and device for virtual objects in game
CN111179317A (en) * 2020-01-04 2020-05-19 阔地教育科技有限公司 Interactive teaching system and method
CN113765756A (en) * 2020-06-02 2021-12-07 云米互联科技(广东)有限公司 Communication method of home terminal, terminal and storage medium
CN111986297A (en) * 2020-08-10 2020-11-24 山东金东数字创意股份有限公司 Virtual character facial expression real-time driving system and method based on voice control
US11769115B1 (en) * 2020-11-23 2023-09-26 Asana, Inc. Systems and methods to provide measures of user workload when generating units of work based on chat sessions between users of a collaboration environment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277855B1 (en) * 2000-06-30 2007-10-02 At&T Corp. Personalized text-to-speech services
CN103368816A (en) * 2012-03-29 2013-10-23 深圳市腾讯计算机系统有限公司 Instant communication method based on virtual character and system
CN103856386A (en) * 2012-11-28 2014-06-11 腾讯科技(深圳)有限公司 Information interaction method, system, server and instant messaging client
CN104937545A (en) * 2012-10-26 2015-09-23 多音可可株式会社 Method for operating application providing group call service using mobile voice over internet protocol
CN105991418A (en) * 2015-02-16 2016-10-05 阿里巴巴集团控股有限公司 Communication method, device, server and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI439960B (en) * 2010-04-07 2014-06-01 Apple Inc Avatar editing environment
CN103391205B (en) * 2012-05-08 2017-06-06 阿里巴巴集团控股有限公司 The sending method of group communication information, client
CN105407408B (en) * 2014-09-11 2019-08-16 腾讯科技(深圳)有限公司 A kind of method and mobile terminal for realizing more people's audio-videos in mobile terminal
CN105577653B (en) * 2015-12-17 2019-05-21 小米科技有限责任公司 Establish the method and device of video calling

Also Published As

Publication number Publication date
CN109428859A (en) 2019-03-05
CN114244816A (en) 2022-03-25
CN114244816B (en) 2023-02-21

Similar Documents

Publication Publication Date Title
CN109428859B (en) Synchronous communication method, terminal and server
EP3713159B1 (en) Gallery of messages with a shared interest
US20160232402A1 (en) Methods and devices for querying and obtaining user identification
US11504636B2 (en) Games in chat
CN111835531B (en) Session processing method, device, computer equipment and storage medium
CN108513088B (en) Method and device for group video session
CN113014471A (en) Session processing method, device, terminal and storage medium
WO2021213057A1 (en) Help-seeking information transmitting method and apparatus, help-seeking information responding method and apparatus, terminal, and storage medium
WO2018094911A1 (en) Multimedia file sharing method and terminal device
TW202008753A (en) Method and apparatus for sending message, and electronic device
CN113350802A (en) Voice communication method, device, terminal and storage medium in game
CN111569436A (en) Processing method, device and equipment based on interaction in live broadcast fighting
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN109150690B (en) Interactive data processing method and device, computer equipment and storage medium
CN108880975B (en) Information display method, device and system
CN112423011B (en) Message reply method, device, equipment and storage medium
CN112449098B (en) Shooting method, device, terminal and storage medium
CN113190307A (en) Control adding method, device, equipment and storage medium
CN113518198B (en) Session interface display method, conference interface display method, device and electronic equipment
CN112820265B (en) Speech synthesis model training method and related device
CN110677723B (en) Information processing method, device and system
CN106656725B (en) Intelligent terminal, server and information updating system
CN114327197A (en) Message sending method, device, equipment and medium
US20230362333A1 (en) Data processing method and apparatus, device, and readable storage medium
CN115361588B (en) Object display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant