WO2021036561A1 - Method and device for transferring information during a video call - Google Patents

Method and device for transferring information during a video call

Info

Publication number
WO2021036561A1
WO2021036561A1 · PCT/CN2020/102254 · CN2020102254W
Authority
WO
WIPO (PCT)
Prior art keywords
touch
terminal
information
video call
user
Prior art date
Application number
PCT/CN2020/102254
Other languages
English (en)
French (fr)
Inventor
王寒莹
Original Assignee
上海盛付通电子支付服务有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海盛付通电子支付服务有限公司
Publication of WO2021036561A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 - Supplemental services communicating with other users, e.g. chatting
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/141 - Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 - Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 2007/145 - Handheld terminals

Definitions

  • This application relates to the field of communications, and in particular to a technology for transferring information during a video call.
  • Users can make video calls through mobile devices (for example, mobile phones and tablets).
  • During a video call, real-time voice information and image information can be transmitted, which facilitates real-time communication between users.
  • The basic functions of a video call include displaying the video images of both users, minimizing the video interface, switching cameras, and ending the video call.
  • One purpose of this application is to provide a method and equipment for transferring information during a video call.
  • A method for transferring information during a video call is provided, which is applied to a first terminal, and the method includes:
  • in response to detecting a message trigger operation by a first user during a video call with a second user, establishing a long connection between the first terminal and a second terminal used by the second user;
  • in response to the first user's touch operation on the video call interface of the first terminal, displaying the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and generating touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track;
  • sending the touch sequence information to the second terminal through the long connection.
  • A method for transferring information during a video call is provided, which is applied to a second terminal, and the method includes:
  • during the video call between a second user using the second terminal and a first user using a first terminal, receiving, based on the long connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal;
  • based on the touch sequence information, displaying the first touch track corresponding to the touch sequence information on the video call interface in real time.
  • A method for transferring information during a video call is provided, including:
  • in response to detecting a message trigger operation by the first user during the video call with the second user, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user;
  • in response to the first user's touch operation on the video call interface of the first terminal, the first terminal displays the first touch track corresponding to the touch operation in real time on the video call interface, generates touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track, and sends the touch sequence information to the second terminal through the long connection.
  • the second terminal receives the touch sequence information sent by the first terminal, and based on the touch sequence information, displays the first touch trajectory corresponding to the touch sequence information on the video call interface in real time.
  • A first terminal for transferring information during a video call is provided, including:
  • a module 1-1, configured to establish a long connection between the first terminal and a second terminal used by a second user in response to detecting a message trigger operation by the first user during a video call with the second user;
  • a module 1-2, configured to display, in response to the first user's touch operation on the video call interface of the first terminal, the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and to generate touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track;
  • a module 1-3, configured to send the touch sequence information to the second terminal through the long connection.
  • A second terminal for transferring information during a video call is provided, including:
  • a module 2-1, configured to receive, during the video call between the second user using the second terminal and the first user using the first terminal, the touch sequence information sent by the first terminal based on the long connection between the second terminal and the first terminal;
  • a module 2-2, configured to display the first touch track corresponding to the touch sequence information in real time on the video call interface, based on the touch sequence information.
  • A first terminal for transferring information during a video call is provided, where the device includes:
  • a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
  • establish a long connection between the first terminal and a second terminal used by the second user in response to detecting a message trigger operation by the first user during a video call with the second user;
  • display the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal in response to the first user's touch operation, and generate touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track;
  • send the touch sequence information to the second terminal through the long connection.
  • A second terminal for transferring information during a video call is provided, where the device includes:
  • a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
  • receive, based on the long connection between the second terminal and the first terminal during the video call, the touch sequence information sent by the first terminal;
  • display the first touch track corresponding to the touch sequence information on the video call interface in real time.
  • A computer-readable medium storing instructions is provided, which, when executed, cause the system to:
  • establish a long connection between the first terminal and a second terminal used by the second user in response to detecting a message trigger operation by the first user during a video call with the second user;
  • display the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and generate touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track;
  • send the touch sequence information to the second terminal through the long connection.
  • A computer-readable medium storing instructions is provided, which, when executed, cause the system to:
  • receive, based on the long connection between the second terminal and the first terminal during the video call, the touch sequence information sent by the first terminal;
  • display the first touch track corresponding to the touch sequence information on the video call interface in real time.
  • In the present application, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user and, according to the first user's touch operation on the video call interface of the first terminal, displays the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal, generates the touch sequence information corresponding to the touch operation in real time, and then sends the touch sequence information to the second terminal through the long connection.
  • Communicating through a long connection ensures the subsequent real-time transmission of the touch sequence information.
  • This application can transmit touch track information to the other user in real time without affecting the users' video call, and does not limit the form of the touch track information, which enhances the effectiveness and flexibility of information transmission and improves the user experience.
  • Figure 1a shows a schematic diagram of a scene according to the present application;
  • Figure 1b shows a schematic diagram of another scene according to the present application;
  • Figure 2 shows a flowchart of a system for transferring information during a video call according to an embodiment of the present application;
  • Figure 3 shows a flowchart of a method for transferring information during a video call according to another embodiment of the present application, which is applied to a first terminal;
  • Figure 4 shows a flowchart of a method for transferring information during a video call according to another embodiment of the present application, which is applied to a second terminal;
  • Figure 5 shows a flowchart of a method for transferring information during a video call according to another embodiment of the present application;
  • Figure 6 shows a schematic diagram of a first terminal device that transfers information during a video call according to an embodiment of the present application;
  • Figure 7 shows a schematic diagram of a second terminal device that transfers information during a video call according to another embodiment of the present application;
  • Figure 8 shows a schematic diagram of a system device for transferring information during a video call according to another embodiment of the present application;
  • Figure 9 shows an exemplary system that can be used to implement the various embodiments described in this application.
  • The terminal, the equipment of the service network, and the trusted party each include one or more processors (for example, a central processing unit (CPU)), input/output interfaces, network interfaces, and memory.
  • The memory may include non-permanent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment formed by the integration of user equipment and network equipment through a network.
  • The user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, through a touch panel), such as a smart phone or a tablet computer, and the mobile electronic product can adopt any operating system, such as the Android operating system or the iOS operating system.
  • The network device includes an electronic device that can automatically perform numerical calculation and information processing in accordance with preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), embedded devices, and the like.
  • The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and a wireless ad-hoc network (Ad Hoc network).
  • The device may also be a program running on the user equipment, the network equipment, or a device formed by integrating user equipment and network equipment, or network equipment and a touch terminal, through a network.
  • Figure 1a shows a typical scenario of the present application.
  • the first user holds the first terminal.
  • the first terminal and the second terminal held by the second user are currently maintaining a video call connection.
  • the first terminal establishes a long connection between the first terminal and a second terminal used by the second user.
  • The message trigger operation includes, but is not limited to, a trigger operation on a preset button in the interface, a predetermined gesture operation (for example, sliding up, down, left, or right), and a voice keyword trigger operation during the video call.
  • The long connection is essentially a TCP long connection; for example, the HTTP protocol is set to Connection: keep-alive, and the long connection is maintained to speed up network content delivery.
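The keep-alive idea above can be sketched in a few lines: one connection is opened once and then reused for every subsequent message, with no per-message handshake. This is an illustrative sketch only; a `socketpair` stands in for the real client/server TCP connection, and the length-prefixed framing is this sketch's own assumption, not part of the patent.

```python
import socket

def send_message(conn: socket.socket, payload: bytes) -> None:
    # Length-prefix each message so the receiver knows where it ends.
    conn.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_message(conn: socket.socket) -> bytes:
    # Read the 4-byte length header, then exactly that many payload bytes.
    size = int.from_bytes(conn.recv(4), "big")
    data = b""
    while len(data) < size:
        data += conn.recv(size - len(data))
    return data

# The socketpair stands in for the long connection between the two terminals;
# in practice this would be a TCP connection kept open for the whole call.
first_terminal, second_terminal = socket.socketpair()
send_message(first_terminal, b"touch:0,0")
send_message(first_terminal, b"touch:0,1")  # same connection, no new handshake
m1 = recv_message(second_terminal)
m2 = recv_message(second_terminal)
print(m1, m2)  # b'touch:0,0' b'touch:0,1'
```

Because the connection stays open, each touch update pays only the cost of the send itself, which is what makes real-time delivery of the touch sequence information feasible.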
  • On the basis of the established long connection, in response to the first user's touch operation on the video call interface of the first terminal, the first terminal records the touch track in real time (for example, a heart-shaped sliding track), displays the touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and generates the touch sequence information corresponding to the touch operation in real time; the first terminal then sends the generated touch sequence information to the second terminal through the long connection.
  • Figure 1b shows another typical scenario of the present application.
  • The second terminal receives the touch sequence information and, based on the touch sequence information, displays the touch track corresponding to the touch sequence information (for example, the corresponding heart-shaped touch track) in real time on the video call interface.
  • the first terminal and the second terminal include, but are not limited to, computing devices with touch screens such as mobile phones and tablets.
  • Fig. 2 shows a method for transferring information during a video call according to an embodiment of the present application, where the method includes:
  • In response to detecting a message trigger operation by the first user during the video call with the second user, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user;
  • in response to the first user's touch operation on the video call interface of the first terminal, the first terminal displays the first touch track corresponding to the touch operation in real time on the video call interface, generates touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track, and sends the touch sequence information to the second terminal through the long connection.
  • the second terminal receives the touch sequence information sent by the first terminal, and based on the touch sequence information, displays the first touch trajectory corresponding to the touch sequence information on the video call interface in real time.
  • Fig. 3 shows a method for transferring information during a video call according to an embodiment of the present application, which is applied to a first terminal, and the method includes step S101, step S102, and step S103.
  • In step S101, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user in response to detecting a message trigger operation by the first user during the video call with the second user.
  • In step S102, in response to the first user's touch operation on the video call interface of the first terminal, the first terminal displays the touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and generates the touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the touch track; in step S103, the first terminal sends the touch sequence information to the second terminal through the long connection.
  • For example, the first terminal establishes a long connection with the second terminal used by the second user in response to detecting a message trigger operation by the first user during the video call with the second user.
  • The message trigger operation includes, but is not limited to, a trigger operation on a preset button in the video call interface, a predetermined gesture operation (for example, sliding up, down, left, or right), and a voice keyword trigger operation during the video call.
  • For example, the terminal sets the HTTP protocol to Connection: keep-alive, where the long connection is essentially a TCP long connection, and the purpose of maintaining the long connection is to speed up the transmission of network content.
  • In some embodiments, establishing the long connection between the first terminal and the second terminal used by the second user includes: sending, to a server corresponding to the first terminal, instruction information for establishing the long connection between the first terminal and the second terminal.
  • In some embodiments, the sending of the instruction information is not restricted to the first long connection between the first terminal and the server.
  • For example, the server actively establishes the connections with the first terminal and the second terminal, which improves the efficiency of establishing the long connection.
  • In some embodiments, the long connection between the first terminal and the second terminal is constructed through an intermediate server: a first long connection and a second long connection are established and then bound to form the long connection between the first terminal and the second terminal.
  • Constructing the long connection between the first terminal and the second terminal through the intermediate server ensures that the subsequent touch sequence information can be transmitted in real time while the video call between the first terminal and the second terminal proceeds smoothly.
  • the method further includes step S104 (not shown).
  • In step S104, the first terminal establishes a first long connection between the first terminal and the server.
  • The sending, to the server corresponding to the first terminal, of instruction information for establishing a long connection between the first terminal and the second terminal used by the second user includes: sending, to the server through the first long connection, an establishment-and-binding instruction for a second long connection, wherein the server establishes the second long connection between the second terminal and the server according to the establishment-and-binding instruction, and binds the first long connection to the second long connection to establish the long connection between the first terminal and the second terminal.
  • For example, the server receives the establishment-and-binding instruction for the second long connection sent by the first terminal; having established the first long connection with the first terminal and the second long connection with the second terminal, the intermediate server provides the basis for the subsequent establishment of the long connection between the first terminal and the second terminal.
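As an illustration of the binding step, the toy relay below shows the idea under stated assumptions: the server holds one connection per terminal, a `bind` call associates the two, and data arriving on the first long connection is forwarded over the second. The names `RelayServer` and `call-1` are invented for this sketch, socketpairs stand in for real TCP connections, and only one forwarding direction is shown.

```python
import socket
import threading

class RelayServer:
    """Toy intermediate server: binds two per-terminal connections and
    forwards whatever arrives on the first side to the second side."""

    def __init__(self):
        self.bindings = {}  # call_id -> (first_conn, second_conn)

    def bind(self, call_id, first_conn, second_conn):
        # Binding the first and second long connections forms the
        # long connection between the two terminals.
        self.bindings[call_id] = (first_conn, second_conn)
        t = threading.Thread(target=self._forward, args=(call_id,), daemon=True)
        t.start()
        return t

    def _forward(self, call_id):
        first_conn, second_conn = self.bindings[call_id]
        while True:
            data = first_conn.recv(4096)
            if not data:  # connection closed
                break
            second_conn.sendall(data)

# first_long: first terminal <-> server; second_long: server <-> second terminal
first_term, first_long = socket.socketpair()
second_long, second_term = socket.socketpair()

server = RelayServer()
server.bind("call-1", first_long, second_long)

first_term.sendall(b"touch-sequence")
received = second_term.recv(4096)
print(received)  # b'touch-sequence'
```

Routing through the server this way means neither terminal needs to reach the other directly; the server only shuttles the already-bound streams.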
  • In step S102, in response to the first user's touch operation on the video call interface of the first terminal, the first terminal displays the touch track corresponding to the touch operation in real time on the video call interface of the first terminal, and generates the touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the touch track.
  • The touch operation includes, but is not limited to, the user's tap operations in the video call interface and movement operations up, down, left, and right.
  • For example, after responding to the message trigger operation of the first user during the video call with the second user, the first terminal records in real time the touch sequence information corresponding to the first user's touch operation on the video call interface of the first terminal, and the corresponding touch track is then displayed in real time on the video call interface of the first terminal, where the track drawn from the touch sequence information substantially overlaps the touch track.
  • In some embodiments, the touch sequence information includes at least one of the following: path information in a predetermined character string; touch attribute information in the predetermined character string.
  • For example, the touch sequence information includes predetermined character string information, and the predetermined character string includes, but is not limited to, a json string and an array.
  • For example, for the path information, the first terminal presets an initial point position, and then, using the initial point position as the starting point coordinates, obtains the track coordinates of the current touch track as the path information.
  • For example, the touch attribute information includes, but is not limited to, the width, color, and line thickness of the touch track.
  • the touch sequence information is the basis for the subsequent display of the touch track by the second terminal.
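As a concrete illustration, touch sequence information of the kind described (a predetermined character string carrying path information and touch attribute information) might be serialized as JSON. The field names follow the example string later in the description (xyz, color, weight, path); the `x:y` token encoding of the path is this sketch's own assumption, since the patent does not fix a normative encoding.

```python
import json

def make_touch_sequence(points, color="#ffffff", weight="1px"):
    """Build touch sequence information as a JSON string.

    `points` is the list of track coordinates obtained from the touch
    operation, starting from the preset initial point position.
    """
    return json.dumps({
        "xyz": "0,0,0",                                   # preset initial point position
        "color": color,                                    # touch attribute: track color
        "weight": weight,                                  # touch attribute: line thickness
        "path": ",".join(f"{x}:{y}" for x, y in points),   # track coordinates
    })

seq = make_touch_sequence([(0, 0), (0, 1), (1, 2)])
decoded = json.loads(seq)
print(decoded["path"])  # 0:0,0:1,1:2
```

Keeping the payload as a compact string makes it cheap to ship over the long connection on every heartbeat tick.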
  • the method further includes step S105 (not shown).
  • In step S105, the first terminal obtains the voice information of the first user during the video call. In step S102, in response to detecting the touch operation of the first user on the video call interface of the first terminal, the first terminal generates a second touch track corresponding to the touch operation; corrects the second touch track according to the voice information to generate the first touch track corresponding to the touch operation; displays the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal; and generates, based on the first touch track, the touch sequence information corresponding to the touch operation in real time, where the touch sequence information matches the first touch track.
  • For example, when the first terminal detects a touch operation of the first user on the video call interface of the first terminal, it generates the second touch track corresponding to the touch operation. Combining this with the voice information obtained by the first terminal (for example, "circle", "draw a circle", "I'll write a round character for you to see"), the first terminal intelligently corrects the second touch track accordingly, generates the first touch track corresponding to the touch operation (for example, a complete circle generated after correcting the second touch track), and displays the first touch track corresponding to the touch operation in real time on the video call interface of the first terminal.
  • The first terminal then generates the touch sequence information corresponding to the touch operation according to the first touch track (for example, the complete circle generated after correcting the second touch track).
  • the touch trajectory can be presented in a more complete and vivid manner, and the user experience can be improved.
  • In some embodiments, the correcting of the second touch track according to the voice information to generate the first touch track corresponding to the touch operation includes: extracting keyword information from the voice information; and correcting the second touch track according to the keyword information to generate the first touch track corresponding to the touch operation.
  • In some embodiments, the keyword information includes at least any one of the following: predetermined graphic keywords (for example, circle, square, curve); predetermined action keywords (for example, draw, sketch, etc.); predetermined behavior keywords (for example, draw a circle, draw a square).
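One way such keyword-driven correction could work is sketched below: when the extracted keywords indicate a circle, the rough second touch track is replaced by a clean circle fitted to its points (centroid plus mean radius). The fitting strategy and the keyword matching are illustrative assumptions, not the patent's prescribed algorithm.

```python
import math

def correct_track(points, keywords):
    """Correct a rough second touch track into a clean first touch track.

    Minimal sketch: if the voice keywords mention a circle, fit a circle
    (centroid + mean radius) to the rough points and regenerate it cleanly.
    Any other keyword leaves the track unchanged.
    """
    if not {"circle", "draw a circle"} & set(keywords):
        return points
    cx = sum(x for x, _ in points) / len(points)   # centroid x
    cy = sum(y for _, y in points) / len(points)   # centroid y
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    # Regenerate a complete, even circle at the fitted center and radius.
    return [
        (cx + r * math.cos(2 * math.pi * i / 64),
         cy + r * math.sin(2 * math.pi * i / 64))
        for i in range(64)
    ]

# A wobbly, hand-drawn "circle" of radius roughly 10 around the origin
rough = [(10.4 * math.cos(a), 9.6 * math.sin(a))
         for a in [2 * math.pi * i / 16 for i in range(16)]]
clean = correct_track(rough, keywords=["draw a circle"])
radii = {round(math.hypot(x, y), 6) for x, y in clean}
print(len(radii))  # 1 -> every corrected point sits at the same radius
```

In a real system the keyword list would come from speech recognition of the call audio, and a more robust shape fit (for example, least squares) would replace the centroid heuristic.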
  • In some embodiments, the correcting of the second touch track according to the keyword information to generate the first touch track corresponding to the touch operation includes: detecting predetermined noun information in the keyword information, and correcting the second touch track according to the predetermined noun information to generate the first touch track corresponding to the touch operation, wherein the predetermined noun information matches the first touch track.
  • For example, the first terminal obtains the keyword information in the voice information, detects whether there is predetermined noun information (for example, "ellipse") in the keyword information, and then corrects the second touch track according to the predetermined noun information to generate the first touch track corresponding to the touch operation (for example, the second touch track currently drawn roughly with the stylus is adjusted to an "elliptical" first touch track).
  • the touch trajectory can be presented in a more complete and vivid manner, and the user experience can be improved.
  • In some embodiments, the correcting of the second touch track according to the predetermined noun information to generate the first touch track corresponding to the touch operation includes: determining corresponding track information according to the predetermined noun information, and correcting the second touch track through the track information to generate the first touch track corresponding to the touch operation.
  • The predetermined noun information includes shape information (for example, "ellipse"), object information (for example, "box"), and the like.
  • For example, the first terminal determines the corresponding track information (for example, an elliptical track) according to the predetermined noun information (for example, "ellipse"), and corrects the second touch track with reference to the shape of the elliptical track to generate the first touch track corresponding to the touch operation.
  • the touch trajectory can be presented in a more complete and vivid manner, and the user experience can be improved.
  • In step S103, the first terminal sends the touch sequence information to the second terminal through the long connection.
  • In some embodiments, the first terminal obtains the newly generated touch sequence information using a heartbeat detection rule, and sends the newly generated touch sequence information to the second terminal through the long connection.
  • The heartbeat detection rule includes that the first terminal obtains newly generated touch sequence information every predetermined time-period threshold. By obtaining the touch sequence information in real time through the heartbeat detection rule, the first terminal can send the newly generated touch sequence information to the second terminal in real time, so that the second terminal can present the touch track in real time.
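The heartbeat rule can be sketched as a periodic tick that ships only the points generated since the previous tick. The class and method names here are invented for illustration; a real implementation would drive `tick()` from a timer set to the predetermined time-period threshold.

```python
class HeartbeatSender:
    """Sketch of the heartbeat detection rule: at every tick, collect the
    touch points generated since the previous tick and send only those
    to the second terminal over the long connection."""

    def __init__(self, send):
        self.send = send      # callable that pushes data over the long connection
        self.points = []      # all points recorded so far
        self.last_sent = 0    # index of the first point not yet sent

    def record(self, point):
        self.points.append(point)

    def tick(self):
        new = self.points[self.last_sent:]
        if new:               # only transmit when something new exists
            self.send(new)
            self.last_sent = len(self.points)
        return new

sent = []
sender = HeartbeatSender(send=sent.append)
sender.record((0, 0))
sender.record((0, 1))
sender.tick()                 # sends [(0, 0), (0, 1)]
sender.record((1, 1))
sender.tick()                 # sends [(1, 1)]
print(sent)  # [[(0, 0), (0, 1)], [(1, 1)]]
```

Batching per tick keeps traffic bounded while still letting the second terminal redraw the track at the heartbeat rate.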
  • the first user holds the first terminal, and the current first terminal and the second terminal held by the second user are maintaining a video call connection.
  • In response to detecting a message trigger operation by the first user during the video call with the second user, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user; in response to the first user's touch operation on the video call interface of the first terminal, the first terminal records the touch track in real time, displays the touch track corresponding to the touch operation (for example, the corresponding "line touch track") in real time on the video call interface of the first terminal, and generates the touch sequence information corresponding to the touch operation in real time.
  • In a specific embodiment, the touch sequence information is expressed in the form of a json string or an array (for example, {xyz: '0,0,0', color: '#ffffff', weight: '1px', path: '000,001,002,120'}), and then the first terminal sends the generated touch sequence information to the second terminal through the long connection.
  • the second terminal receives the touch sequence information, and based on the touch sequence information, displays the touch trajectory corresponding to the touch sequence information (for example, the corresponding "line touch trajectory") on the video call interface in real time.
  • the method further includes step S106 (not shown).
  • In step S106, if the first user's touch operation on the video call interface of the first terminal is not detected within a predetermined time threshold, the first terminal hides the first touch track on the video call interface of the first terminal. For example, if the first user's touch operation on the video call interface of the first terminal is not detected within a predetermined time threshold (for example, 2 s), that is, the user has not touched the video call interface for 2 s, the first terminal confirms that the touch operation has ended, and makes the first touch track gradually fade until it is hidden. In this way, the user is given an intuitive interactive experience.
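A minimal sketch of this hide-on-idle behavior, assuming a linear fade: the 2 s idle threshold comes from the example above, while the 0.5 s fade duration is an invented parameter for illustration.

```python
def track_opacity(now, last_touch, hide_after=2.0, fade=0.5):
    """Opacity of the first touch track at time `now` (seconds).

    Fully visible while touches keep arriving; once no touch has been
    detected for `hide_after` seconds, the track fades out linearly
    over `fade` seconds until it is hidden.
    """
    idle = now - last_touch
    if idle < hide_after:
        return 1.0                               # still being drawn or recently touched
    if idle >= hide_after + fade:
        return 0.0                               # fully hidden
    return 1.0 - (idle - hide_after) / fade      # gradually disappearing

print(track_opacity(now=1.0, last_touch=0.0))    # 1.0
print(track_opacity(now=2.25, last_touch=0.0))   # 0.5
print(track_opacity(now=3.0, last_touch=0.0))    # 0.0
```

The renderer would evaluate this on every frame and multiply it into the track's color, so the disappearance is gradual rather than an abrupt removal.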
  • FIG. 4 shows a method for transferring information during a video call according to an embodiment of the present application, which is applied to a second terminal, and the method includes step S201 and step S202.
  • step S201 the second terminal is based on the long connection between the second terminal and the first terminal during the video call between the second user using the second terminal and the first user using the first terminal , Receiving the touch sequence information sent by the first terminal; in step S202, the second terminal displays the touch trajectory corresponding to the touch sequence information on the video call interface in real time based on the touch sequence information.
  • in step S201, during the video call between the second user using the second terminal and the first user using the first terminal, the second terminal receives, based on the long connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal.
  • the long connection is essentially a TCP long connection; it is kept open to speed up the transmission of network content.
  • the first terminal sets the HTTP header `Connection: keep-alive` to establish a long connection with the second terminal, and then sends the touch sequence information to the second terminal through the long connection.
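The keep-alive long connection can be sketched with Python's standard `http.client`, which reuses one TCP socket across requests. The host, the `/touch` endpoint, and the helper names are hypothetical; the text only specifies the `Connection: keep-alive` header.

```python
import http.client

def make_long_connection(host, port=80):
    """Open one HTTP connection to be reused for all touch-sequence messages.

    http.client.HTTPConnection keeps the underlying TCP socket open between
    requests; the explicit `Connection: keep-alive` header mirrors the
    text's description (HTTP/1.1 keeps connections alive by default).
    """
    conn = http.client.HTTPConnection(host, port)
    headers = {"Connection": "keep-alive", "Content-Type": "application/json"}
    return conn, headers

def send_touch_sequence(conn, headers, body):
    # Each message reuses the same TCP connection instead of reconnecting,
    # which is what makes real-time trajectory transmission feasible.
    conn.request("POST", "/touch", body=body, headers=headers)
    return conn.getresponse()
```

In practice a messaging app might use a WebSocket or raw TCP channel instead; the point is that one persistent connection carries the whole stream of touch sequence messages.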
  • the touch sequence information is the information that the first terminal generates, according to the touch trajectory, in response to the first user's touch operation on the video call interface of the first terminal.
  • step S202 the second terminal displays the touch track corresponding to the touch sequence information in real time on the video call interface based on the touch sequence information.
  • the touch sequence information includes path information in a predetermined character string and touch attribute information in the predetermined character string.
  • the second terminal draws and presents the touch trajectory according to the path and attribute information included in the touch sequence information, which improves the effectiveness and flexibility of information transmission and enhances the user experience.
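The receiving side described above can be sketched as a parser that splits the touch sequence string into drawable primitives. The interpretation of each three-digit path code as a column/row grid cell is an assumption for illustration; the text only specifies the string form of the sequence.

```python
import json

def parse_touch_sequence(seq_json):
    """Split a received touch-sequence string into drawable primitives.

    Assumed encoding of each path code: first digit = column,
    second digit = row, third digit = sub-cell refinement (ignored here).
    """
    info = json.loads(seq_json)
    codes = info["path"].split(",")
    points = [(int(c[0]), int(c[1])) for c in codes]
    return {"color": info["color"], "weight": info["weight"], "points": points}

# The second terminal would feed these points to its drawing layer.
parsed = parse_touch_sequence(
    '{"xyz": "0,0,0", "color": "#fffff", "weight": "1px", '
    '"path": "000,001,002,120"}'
)
```

The drawing layer then connects consecutive points with strokes of the given `color` and `weight`, overlaying them on the video call interface.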
  • the first user holds the first terminal, and the first terminal is currently maintaining a video call connection with the second terminal held by the second user; in response to the first user's trigger operation on the preset button during the video call with the second user, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user; in response to the first user's touch operation on the video call interface of the first terminal, the first terminal records the touch trajectory in real time, displays the touch trajectory corresponding to the touch operation (for example, the corresponding "line touch trajectory") in real time on the video call interface of the first terminal, and generates the touch sequence information corresponding to the touch operation in real time.
  • the touch sequence information is expressed in the form of a JSON string or an array (for example, {xyz: '0,0,0', color: '#fffff', weight: '1px', path: '000,001,002,120'}), and then the first terminal sends the generated touch sequence information to the second terminal through the long connection.
  • the second terminal receives the touch sequence information, and based on the touch sequence information, displays the touch trajectory corresponding to the touch sequence information (for example, the corresponding "line touch trajectory") on the video call interface in real time.
  • the method further includes step S203 (not shown).
  • in step S203, the second terminal obtains, through the long connection, the touch sequence information newly generated by the first terminal; obtains the touch state information from the first terminal in real time through the long connection; and, if the touch state information indicates the message enhancement mode, presents the corresponding newly generated first touch track according to the newly generated touch sequence information, the newly generated touch sequence information matching the newly generated first touch track.
  • the first terminal obtains the newly generated touch sequence information by using a heartbeat detection rule; and sends the newly generated touch sequence information to the second terminal through the long connection.
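The heartbeat-based delivery can be sketched as a polling loop: on each beat, any newly generated touch sequence information is fetched and pushed over the long connection. `get_new_sequences` and `send` are injected stand-ins (assumptions) for the real touch recorder and connection.

```python
def run_heartbeat(ticks, get_new_sequences, send):
    """Heartbeat-style polling: on each beat, fetch any newly generated
    touch sequence information and push it over the long connection.

    `ticks` bounds the loop for this sketch; a real terminal would run
    until the video call or the long connection ends.
    """
    sent = []
    for _ in range(ticks):
        for seq in get_new_sequences():
            send(seq)          # transmit over the long connection
            sent.append(seq)
    return sent
```

The same heartbeat doubles as a liveness check for the long connection: a missed beat signals that the connection needs to be re-established.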
  • the second terminal obtains touch state information from the first terminal, where the touch state information includes a message enhancement mode (for example, allowing the user to perform touch operations on the video call interface during a video call to display the touch track) or an exited touch state mode.
  • if the touch state information indicates the message enhancement mode, the corresponding newly generated first touch track is presented according to the newly generated touch sequence information; that is, the second terminal confirms that it can present the corresponding newly generated first touch track according to the newly generated touch sequence information.
  • the touch trajectory information of the first terminal is transmitted in real time and the touch trajectory is presented on the second terminal in real time, ensuring synchronization and consistency between the terminals.
  • FIG. 5 shows a flow chart of a method for transmitting information during a video call according to another embodiment of the present application.
  • the first user holds the first terminal, and the first terminal is currently maintaining a video call connection with the second terminal held by the second user.
  • the first terminal establishes the long connection between the first terminal and the second terminal used by the second user and maintains a heartbeat query, where the long connection between the first terminal and the second terminal is established by an intermediate server.
  • on the basis of the established long connection, in response to the first user's touch operation (for example, a drawing operation) on the video call interface of the first terminal, the first terminal records the drawing track in real time (for example, a "circular drawing"), displays the circular trajectory corresponding to the drawing operation in real time on the video call interface of the first terminal, and generates in real time the touch sequence information (drawing sequence information) corresponding to the drawing operation, where the drawing sequence information is stored in the form of a JSON string; the first terminal then sends the generated drawing sequence information to the intermediate server through the long connection.
  • the second terminal queries the server for drawing status information in real time. If the drawing status information changes, the second terminal obtains the generated drawing sequence information, and performs drawing based on the drawing sequence information (for example, "circular drawing").
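One polling step of the second terminal in this FIG. 5 flow can be sketched as follows; all four callables/values are stand-ins assumed for illustration (the text does not name a concrete API for the intermediate server).

```python
def poll_drawing_status(get_status, get_sequence, draw, last_status):
    """One polling step for the second terminal.

    If the drawing status on the intermediate server changed since the
    last poll, fetch the generated drawing sequence information and draw
    it (for example, the "circular drawing"); otherwise do nothing.
    Returns the status to compare against on the next poll.
    """
    status = get_status()
    if status != last_status:
        draw(get_sequence())
    return status
```

Repeating this step on a short interval approximates the real-time behaviour described above; a push channel over the long connection would avoid the polling delay entirely.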
  • FIG. 6 shows a first terminal for transferring information during a video call according to an embodiment of the present application.
  • the first terminal includes a one-to-one module 101, a one-to-two module 102, and a one-to-three module 103.
  • the one-one module 101 is configured to, in response to detecting a message trigger operation by the first user during the video call with the second user, establish a long connection between the first terminal and the second terminal used by the second user; the one-two module 102 is configured to, in response to the first user's touch operation on the video call interface of the first terminal, display the touch trajectory corresponding to the touch operation in real time on the video call interface of the first terminal and generate the touch sequence information corresponding to the touch operation in real time, the touch sequence information matching the touch trajectory; the one-three module 103 is configured to send the touch sequence information to the second terminal through the long connection.
  • the one-one module 101 is configured to, in response to detecting a message trigger operation by the first user during the video call with the second user, establish the long connection between the first terminal and the second terminal used by the second user; the message trigger operation includes, but is not limited to, a trigger operation on a preset button in the video call interface, a predetermined gesture operation (for example, swiping up, down, left, or right), and a voice keyword trigger operation during the video call.
  • the terminal sets the HTTP header `Connection: keep-alive`, where the long connection is essentially a TCP long connection, and the purpose of maintaining the long connection is to speed up the transmission of network content.
  • the establishing of a long connection between the first terminal and the second terminal used by the second user includes: sending, to the server corresponding to the first terminal, instruction information for establishing the long connection between the first terminal and the second terminal, where the first long connection is a long connection between the server and the first terminal, and the second long connection is a long connection between the server and the second terminal.
  • the related operation of establishing a long connection between the first terminal and the second terminal used by the second user is the same as or similar to the foregoing embodiment, so it will not be repeated here and is included here by reference.
  • the first terminal further includes a one-four module 104 (not shown), where the one-four module 104 is used to establish a first long connection between the first terminal and the server;
  • the sending, to the server corresponding to the first terminal, of the instruction information for establishing the long connection between the first terminal and the second terminal used by the second user includes: sending, to the server through the first long connection, an establishment and binding instruction for the second long connection, where the server establishes the second long connection between the second terminal and the server according to the establishment and binding instruction, and binds the first long connection to the second long connection to establish the long connection between the first terminal and the second terminal.
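The server-side binding described above can be sketched as a simple relay table. Per the text, the server establishes the second long connection on the first terminal's instruction and binds it to the first, so messages from one terminal are forwarded to the other; representing connections by plain ids and the `ConnectionBinder` name are assumptions of this sketch.

```python
class ConnectionBinder:
    """Server-side sketch: bind the first and second long connections so
    touch-sequence messages can be relayed between the two terminals."""

    def __init__(self):
        self.peer = {}   # connection id -> bound peer connection id

    def bind(self, first_conn, second_conn):
        # Bind the two long connections in both directions.
        self.peer[first_conn] = second_conn
        self.peer[second_conn] = first_conn

    def forward(self, from_conn, message, deliver):
        # Relay a touch-sequence message to the bound peer connection;
        # `deliver` is the server's actual send primitive (assumed).
        deliver(self.peer[from_conn], message)
```

Once bound, the pair of server-mediated connections behaves, from the terminals' point of view, like one end-to-end long connection.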
  • the specific implementation of the above one-four module 104 is the same as or similar to the foregoing embodiment of step S104, so it will not be repeated here and is included here by reference.
  • the one-two module 102 is configured to, in response to the first user's touch operation on the video call interface of the first terminal, display the touch trajectory corresponding to the touch operation in real time on the video call interface of the first terminal and generate the touch sequence information corresponding to the touch operation in real time, the touch sequence information matching the touch trajectory.
  • the touch operation includes, but is not limited to, the user's tap operations in the video call interface and movement operations in the up, down, left, and right directions.
  • the first terminal, in response to a message trigger operation by the first user during the video call with the second user, records in real time the touch sequence information corresponding to the touch operation according to the first user's touch operation on the video call interface of the first terminal, and then displays the touch trajectory in real time on the video call interface of the first terminal, where the trajectory drawn from the touch sequence information substantially overlaps the touch trajectory.
  • the touch sequence information includes at least one of the following: path information in a predetermined character string, and touch attribute information in the predetermined character string.
  • the first terminal further includes a one-five module 105 (not shown), where the one-five module 105 is configured to acquire the voice information of the first user during the video call; the one-two module 102 is configured to, in response to detecting the touch operation of the first user on the video call interface of the first terminal, generate a second touch trajectory corresponding to the touch operation; correct the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation; display the first touch trajectory corresponding to the touch operation in real time on the video call interface of the first terminal; and generate, based on the first touch trajectory, the touch sequence information corresponding to the touch operation in real time, the touch sequence information matching the touch trajectory.
  • the correcting of the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation includes: extracting keyword information from the voice information; and correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation.
  • the related operation of correcting the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation is the same as or similar to the embodiment shown in FIG. 3, so it will not be repeated here and is included here by reference.
  • the correcting of the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation includes: detecting predetermined noun information in the keyword information; and correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation, where the predetermined noun information matches the first touch trajectory.
  • the related operation of correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation is the same as or similar to the embodiment shown in FIG. 3, so it will not be repeated here and is included here by reference.
  • the correcting of the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation includes: determining corresponding trajectory information according to the predetermined noun information; and correcting the second touch trajectory through the trajectory information to generate the first touch trajectory corresponding to the touch operation.
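The noun-driven correction can be sketched as a lookup from predetermined nouns to template trajectories: if a predetermined noun appears among the extracted voice keywords, the raw (second) trajectory is snapped to that noun's template; otherwise it is kept unchanged. The keyword list and template values are assumptions; the text only states that predetermined noun information determines corresponding trajectory information.

```python
# Map predetermined noun keywords to template trajectories (both the
# nouns and the template identifiers here are illustrative assumptions).
TEMPLATES = {
    "circle": "circle_template",
    "heart": "heart_template",
}

def correct_trajectory(second_trajectory, keywords):
    """Return the first touch trajectory.

    If a predetermined noun is among the extracted voice keywords, replace
    the raw (second) trajectory with that noun's template trajectory;
    otherwise the raw trajectory is already the first touch trajectory.
    """
    for word in keywords:
        if word in TEMPLATES:
            return TEMPLATES[word]
    return second_trajectory
```

A production system would fit the template to the raw stroke (scale, position) rather than replace it outright, but the keyword-to-template mapping is the core of the mechanism.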
  • the related operation of correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation is the same as or similar to the embodiment shown in FIG. 3, so it will not be repeated here and is included here by reference.
  • the one-three module 103 is configured to send the touch sequence information to the second terminal through the long connection.
  • in this way, the communication can be kept uninterrupted, which ensures real-time transmission of the touch sequence information at any time.
  • the one-three module 103 is configured to obtain the newly generated touch sequence information by using the heartbeat detection rule, and send the newly generated touch sequence information to the second terminal through the long connection.
  • the related operation of using the heartbeat detection rule to obtain the newly generated touch sequence information is the same as or similar to the embodiment shown in FIG. 3, so it will not be repeated here, and it is included here by reference.
  • the specific implementations of the one-one module 101, the one-two module 102, and the one-three module 103 are the same as or similar to the embodiments of steps S101, S102, and S103 in FIG. 3, so they will not be repeated here and are included here by reference.
  • the first terminal further includes a one-six module 106 (not shown), where the one-six module 106 is configured to hide the first touch track on the video call interface of the first terminal if the first user's touch operation on the video call interface of the first terminal is not detected within a predetermined time threshold.
  • the specific implementation of the one-six module 106 is the same as or similar to the embodiment of the aforementioned step S106, so it will not be repeated here and is included here by reference.
  • FIG. 7 shows a second terminal for transferring information during a video call according to an embodiment of the present application.
  • the second terminal includes a two-one module 201 and a two-two module 202.
  • the two-one module 201 is configured to, during the video call process between the second user using the second terminal and the first user using the first terminal, receive, based on the long connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal;
  • the two-two module 202 is configured to display the touch track corresponding to the touch sequence information on the video call interface in real time based on the touch sequence information.
  • the two-one module 201 is used to receive, during the video call between the second user using the second terminal and the first user using the first terminal, the touch sequence information sent by the first terminal, based on the long connection between the second terminal and the first terminal.
  • the long connection is essentially a TCP long connection; it is kept open to speed up the transmission of network content.
  • the first terminal sets the HTTP header `Connection: keep-alive` to establish a long connection with the second terminal, and then sends the touch sequence information to the second terminal through the long connection.
  • the touch sequence information is the information that the first terminal generates, according to the touch trajectory, in response to the first user's touch operation on the video call interface of the first terminal.
  • the two-two module 202 is configured to display the touch track corresponding to the touch sequence information on the video call interface in real time based on the touch sequence information.
  • the touch sequence information includes path information in a predetermined character string and touch attribute information in the predetermined character string.
  • the second terminal draws and presents the touch trajectory according to the path and attribute information included in the touch sequence information, which improves the effectiveness and flexibility of information transmission and enhances the user experience.
  • the specific implementation examples of the two-one module 201 and the two-two module 202 are the same as or similar to the embodiment of steps S201 and S202 in FIG. 4, so they are not repeated here, and are included here by reference.
  • the second terminal further includes a two-three module 203 (not shown), where the two-three module 203 is configured to obtain, through the long connection, the touch sequence information newly generated by the first terminal; obtain the touch status information from the first terminal in real time through the long connection; and, if the touch status information indicates a message enhancement mode, present the corresponding newly generated first touch track according to the newly generated touch sequence information, the newly generated touch sequence information matching the newly generated first touch track.
  • the specific implementation of the second and third module 203 is the same as or similar to the foregoing embodiment of step S203, so it will not be repeated here, and it is included here by reference.
  • FIG. 8 shows a system device for transferring information during a video call according to an embodiment of the present application, where the system includes:
  • the first terminal In response to detecting a message trigger operation by the first user during the video call with the second user, the first terminal establishes a long connection between the first terminal and the second terminal used by the second user;
  • in response to the first user's touch operation on the video call interface of the first terminal, the first touch track corresponding to the touch operation is displayed in real time on the video call interface of the first terminal; the first terminal generates touch sequence information corresponding to the touch operation in real time, the touch sequence information matching the first touch trajectory, and sends the touch sequence information to the second terminal.
  • the second terminal receives the touch sequence information sent by the first terminal, and based on the touch sequence information, displays the first touch trajectory corresponding to the touch sequence information on the video call interface in real time.
  • this application also provides a computer-readable storage medium that stores computer code; when the computer code is executed, the method described in any one of the preceding items is executed.
  • This application also provides a computer program product.
  • when the computer program product is executed by a computer device, the method described in any one of the preceding items is executed.
  • This application also provides a computer device, which includes:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • where, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any one of the preceding items.
  • FIG. 9 shows an exemplary system that can be used to implement the various embodiments described in this application.
  • the system 300 can be used as any device in each of the described embodiments.
  • the system 300 may include one or more computer-readable media having instructions (for example, system memory or NVM/storage device 320) and one or more processors (for example, processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • the system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
  • the system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315.
  • the memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • the system memory 315 may be used to load and store data and/or instructions for the system 300, for example.
  • the system memory 315 may include any suitable volatile memory, for example, a suitable DRAM.
  • the system memory 315 may include a double data rate type quad synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide an interface to the NVM/storage device 320 and the communication interface(s) 325.
  • NVM/storage device 320 may be used to store data and/or instructions.
  • the NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • the NVM/storage device 320 may include storage resources that are physically part of the device on which the system 300 is installed, or it may be accessed by the device without necessarily being a part of the device.
  • the NVM/storage device 320 may be accessed via the communication interface(s) 325 through the network.
  • the communication interface(s) 325 may provide an interface for the system 300 to communicate through one or more networks and/or with any other suitable devices.
  • the system 300 can wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (for example, the memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated with the logic of one or more controllers of the system control module 310 on the same die. For one embodiment, at least one of the processor(s) 305 may be integrated with the logic of one or more controllers of the system control module 310 on the same die to form a system on chip (SoC).
  • the system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.).
  • the system 300 may have more or fewer components and/or different architectures.
  • the system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASICs), and speakers.
  • this application can be implemented in software and/or a combination of software and hardware.
  • it can be implemented using an application specific integrated circuit (ASIC), a general purpose computer or any other similar hardware device.
  • the software program of the present application may be executed by a processor to realize the steps or functions described above.
  • the software program (including related data structure) of the present application can be stored in a computer-readable recording medium, for example, RAM memory, magnetic or optical drive or floppy disk and similar devices.
  • some steps or functions of the present application may be implemented by hardware, for example, as a circuit that cooperates with a processor to execute each step or function.
  • a part of this application can be applied as a computer program product, such as computer program instructions, which, when executed by a computer, can invoke or provide the method and/or technical solution according to this application through the operation of the computer.
  • the form in which the computer program instructions exist in a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, etc.
  • the manner in which the computer program instructions are executed by the computer includes, but is not limited to: the computer directly executing the instructions, or the computer compiling the instructions and then executing the corresponding compiled program, or the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding post-installation program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media includes media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another system.
  • Communication media can include conductive transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (unguided transmission) media that can propagate energy waves, such as sound, electromagnetic, RF, microwave, and infrared.
  • Computer-readable instructions, data structures, program modules, or other data may be embodied as, for example, a modulated data signal in a wireless medium, such as a carrier wave or a similar mechanism (for example, as embodied as part of spread spectrum technology).
  • the term "modulated data signal" refers to a signal whose one or more characteristics have been altered or set in such a way as to encode information in the signal. The modulation can be an analog, digital, or hybrid modulation technique.
  • a computer-readable storage medium may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.
  • computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memory (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); or other currently known or future-developed media capable of storing computer-readable information/data for use by a computer system.
  • an embodiment according to the present application includes a device that includes a memory for storing computer program instructions and a processor for executing the program instructions, where, when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.

Abstract

The purpose of this application is to provide a method for transferring information during a video call, the method including: in response to detecting a message trigger operation by a first user during a video call with a second user, establishing a long connection between the first terminal and a second terminal used by the second user; in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, and generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory; and sending the touch sequence information to the second terminal through the long connection. This application improves the effectiveness and flexibility of information transmission and enhances the user experience.

Description

A method and device for transferring information during a video call
This application is based on and claims priority to CN application No. 201910798097.8, filed on 2019.08.27; the disclosure of that CN application is hereby incorporated into this application in its entirety.
Technical Field
This application relates to the field of communications, and in particular to a technique for transferring information during a video call.
Background
With the development of the mobile Internet, people can make video calls through mobile devices (for example, mobile phones and tablets). During a video call, real-time transmission of voice and image information is guaranteed, which facilitates real-time communication between users. At present, the basic functions of a video call include displaying the video images of both users, minimizing the video interface, switching cameras, and ending the video call.
Summary
An object of this application is to provide a method and a device for transferring information during a video call.
According to one aspect of this application, a method for transferring information during a video call is provided, applied to a first terminal, the method including:
in response to detecting a message trigger operation by a first user during a video call with a second user, establishing a long connection between the first terminal and a second terminal used by the second user;
in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, and generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory;
sending the touch sequence information to the second terminal through the long connection.
According to another aspect of this application, a method for transferring information during a video call is provided, applied to a second terminal, the method including:
during a video call between a second user using the second terminal and a first user using a first terminal, receiving, based on a long connection between the second terminal and the first terminal, touch sequence information sent by the first terminal;
displaying in real time, on the video call interface and based on the touch sequence information, a first touch trajectory corresponding to the touch sequence information.
According to one aspect of this application, a method for transferring information during a video call is provided, the method including:
in response to detecting a message trigger operation by a first user during a video call with a second user, the first terminal establishing a long connection between the first terminal and a second terminal used by the second user;
in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation; the first terminal generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory, and sending the touch sequence information to the second terminal through the long connection;
the second terminal receiving the touch sequence information sent by the first terminal, and displaying in real time, on the video call interface and based on the touch sequence information, the first touch trajectory corresponding to the touch sequence information.
According to one aspect of this application, a first terminal for transferring information during a video call is provided, the device including:
a one-one module, configured to, in response to detecting a message trigger operation by a first user during a video call with a second user, establish a long connection between the first terminal and a second terminal used by the second user;
a one-two module, configured to, in response to a touch operation by the first user on the video call interface of the first terminal, display in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, and generate in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory;
a one-three module, configured to send the touch sequence information to the second terminal through the long connection.
According to another aspect of this application, a second terminal for transferring information during a video call is provided, the device including:
a two-one module, configured to, during a video call between a second user using the second terminal and a first user using a first terminal, receive, based on a long connection between the second terminal and the first terminal, touch sequence information sent by the first terminal;
a two-two module, configured to display in real time, on the video call interface and based on the touch sequence information, a first touch trajectory corresponding to the touch sequence information.
According to one aspect of this application, a first terminal for transferring information during a video call is provided, where the device includes:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
in response to detecting a message trigger operation by a first user during a video call with a second user, establish a long connection between the first terminal and a second terminal used by the second user;
in response to a touch operation by the first user on the video call interface of the first terminal, display in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, and generate in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory;
send the touch sequence information to the second terminal through the long connection.
According to another aspect of this application, a second terminal for transferring information during a video call is provided, where the device includes:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
during a video call between a second user using the second terminal and a first user using a first terminal, receive, based on a long connection between the second terminal and the first terminal, touch sequence information sent by the first terminal;
display in real time, on the video call interface and based on the touch sequence information, a first touch trajectory corresponding to the touch sequence information.
根据本申请的一个方面,提供了存储指令的计算机可读介质,所述指令在被执行时使得系统进行:
响应于检测到第一用户在与第二用户的视频通话过程中的消息触发操作,建立所述第一终端与所述第二用户使用的第二终端间的长连接;
响应于所述第一用户在所述第一终端的视频通话界面上的触控操作,在所述第一终端的视频通话界面上实时显示所述触控操作对应的第一触控轨迹,并实时生成所述触控操作对应的触控序列信息,所述触控序列信息与所述第一触控轨迹相匹配;
通过所述长连接将所述触控序列信息发送至所述第二终端。
根据本申请的另一个方面,提供了存储指令的计算机可读介质,所述指令在被执行时使得系统进行:
在使用所述第二终端的第二用户与使用第一终端的第一用户的视频通过过程中,基 于所述第二终端与所述第一终端间的长连接,接收所述第一终端发送的触控序列信息;
基于所述触控序列信息,在视频通话界面上实时显示所述触控序列信息对应的第一触控轨迹。
与现有技术相比,本申请在第一终端建立所述第一终端与所述第二用户使用的第二终端间的长连接的基础上,根据第一用户在所述第一终端的视频通话界面上的触控操作,在所述第一终端的视频通话界面上实时显示所述触控操作对应的第一触控轨迹,并实时生成所述触控操作对应的触控序列信息,随后通过所述长连接将所述触控序列信息发送至所述第二终端。其中,通过长连接进行通信可以保证后续实时传输触控序列信息,本申请可以在不影响用户进行视频通话的情况下,实时传递触控轨迹信息至对方用户,同时不限制触控轨迹信息的形态,提升了信息传递的有效性和灵活性,提升了用户的体验。
Brief Description of the Drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1a shows a schematic diagram of one scenario according to the present application;
FIG. 1b shows a schematic diagram of another scenario according to the present application;
FIG. 2 shows a system flowchart of a method for transmitting information during a video call according to one embodiment of the present application;
FIG. 3 shows a flowchart of a method for transmitting information during a video call according to another embodiment of the present application, applied to a first terminal;
FIG. 4 shows a flowchart of a method for transmitting information during a video call according to yet another embodiment of the present application, applied to a second terminal;
FIG. 5 shows a flowchart of a method for transmitting information during a video call according to a further embodiment of the present application;
FIG. 6 shows a schematic diagram of a first terminal for transmitting information during a video call according to one embodiment of the present application;
FIG. 7 shows a schematic diagram of a second terminal for transmitting information during a video call according to another embodiment of the present application;
FIG. 8 shows a schematic diagram of a system device for transmitting information during a video call according to yet another embodiment of the present application;
FIG. 9 shows an exemplary system that can be used to implement the various embodiments described in the present application.
The same or similar reference signs in the drawings represent the same or similar components.
Detailed Description of the Embodiments
The present application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the device of the service network, and the trusted party each include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and a memory.
The memory may include a non-persistent memory, a random access memory (RAM) and/or a non-volatile memory among computer-readable media, such as a read-only memory (ROM) or a flash memory. The memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, user equipment, a network device, or a device formed by integrating user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-machine interaction with a user (for example, through a touchpad), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical computation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing, namely a virtual supercomputer composed of a group of loosely coupled computer sets. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating the user equipment with the network device, or the network device with a touch terminal, through a network.
Of course, those skilled in the art should understand that the above devices are merely examples; other existing or future devices, if applicable to the present application, should also fall within the scope of protection of the present application and are hereby incorporated by reference.
In the description of the present application, "multiple" means two or more, unless expressly and specifically limited otherwise.
FIG. 1a shows a typical scenario of the present application. A first user holds a first terminal, and the first terminal currently maintains a video call connection with a second terminal held by a second user. In response to a message triggering operation by the first user during the video call with the second user, the first terminal establishes a persistent connection between the first terminal and the second terminal used by the second user, where the message triggering operation includes, but is not limited to, a triggering operation on a preset button in the video call interface, a predetermined gesture operation (for example, sliding up, down, left, or right), and a voice keyword triggering operation during the video call. The persistent connection is essentially a long-lived TCP connection, with the HTTP header set to Connection: keep-alive; the connection is kept alive to speed up the delivery of network content. With the persistent connection established, in response to a touch operation by the first user on the video call interface of the first terminal, the first terminal records the touch trajectory in real time (for example, a "heart-shaped sliding trajectory"), displays the touch trajectory corresponding to the touch operation in real time on the video call interface of the first terminal, and generates in real time the touch sequence information corresponding to the touch operation; the first terminal then sends the generated touch sequence information to the second terminal via the persistent connection. FIG. 1b shows another typical scenario of the present application: the second terminal receives the touch sequence information and, based on it, displays in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information (for example, the corresponding "heart-shaped touch trajectory"). The first terminal and the second terminal include, but are not limited to, computing devices with a touch screen, such as mobile phones and tablets.
FIG. 2 shows a method for transmitting information during a video call according to one embodiment of the present application, the method comprising:
in response to detecting a message triggering operation by a first user during a video call with a second user, the first terminal establishing a persistent connection between the first terminal and a second terminal used by the second user;
in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, the first terminal generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory, and sending the touch sequence information to the second terminal via the persistent connection;
the second terminal receiving the touch sequence information sent by the first terminal and, based on the touch sequence information, displaying in real time, on the video call interface, the first touch trajectory corresponding to the touch sequence information.
FIG. 3 shows a method for transmitting information during a video call according to one embodiment of the present application, applied to a first terminal; the method includes step S101, step S102, and step S103. In step S101, in response to detecting a message triggering operation by a first user during a video call with a second user, the first terminal establishes a persistent connection between the first terminal and a second terminal used by the second user; in step S102, in response to a touch operation by the first user on the video call interface of the first terminal, the first terminal displays in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation, and generates in real time the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory; in step S103, the first terminal sends the touch sequence information to the second terminal via the persistent connection.
Specifically, in step S101, in response to detecting a message triggering operation by the first user during a video call with the second user, the first terminal establishes a persistent connection between the first terminal and the second terminal used by the second user. For example, the message triggering operation includes, but is not limited to, a triggering operation on a preset button in the video call interface, a predetermined gesture operation (for example, touching up, down, left, or right), and a voice keyword triggering operation during the video call; the first terminal sets the HTTP header to Connection: keep-alive, where the persistent connection is essentially a long-lived TCP connection kept alive to speed up the delivery of network content. In some embodiments, establishing the persistent connection between the first terminal and the second terminal used by the second user includes: sending, to a server corresponding to the first terminal, instruction information for establishing a persistent connection between the first terminal and the second terminal used by the second user, so that the server, according to the instruction information, binds a first persistent connection and a second persistent connection to establish the persistent connection between the first terminal and the second terminal, where the first persistent connection is the persistent connection between the server and the first terminal, and the second persistent connection is the persistent connection between the server and the second terminal. The instruction information is not restricted by the first persistent connection between the first terminal and the server; meanwhile, the server actively establishes the connections with the first terminal and the second terminal, which improves the efficiency of establishing the persistent connection. For example, the persistent connection between the first terminal and the second terminal is constructed through an intermediate server that separately builds the first persistent connection and the second persistent connection, after which the first persistent connection and the second persistent connection are bound together to form the persistent connection between the first terminal and the second terminal; constructing the terminal-to-terminal connection through an intermediate server ensures that, while the video call between the first terminal and the second terminal proceeds smoothly, subsequent touch sequence information can also be transmitted in real time. In some embodiments, the method further includes step S104 (not shown). In step S104, the first persistent connection between the first terminal and the server is established, where sending, to the server corresponding to the first terminal, the instruction information for establishing the persistent connection between the first terminal and the second terminal used by the second user includes: sending, to the server via the first persistent connection, an establish-and-bind instruction concerning the second persistent connection, where the server, according to the establish-and-bind instruction, establishes the second persistent connection between the second terminal and the server, and binds the first persistent connection with the second persistent connection to establish the persistent connection between the first terminal and the second terminal. For example, on the premise that the first persistent connection has been established between the first terminal and the server, the server receives the establish-and-bind instruction concerning the second persistent connection sent by the first terminal; with the intermediate server holding the first persistent connection to the first terminal and the second persistent connection to the second terminal, a basis is provided for subsequently establishing the persistent connection between the first terminal and the second terminal.
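The binding step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not taken from the patent): real long connections would be kept-alive TCP sockets, whereas here each terminal's connection is modeled as an in-memory object, and the server's `bind` operation links the first and second connections so that payloads written on one side arrive at the other.

```python
class Connection:
    """Stands in for one terminal's persistent (long) connection."""
    def __init__(self, terminal_id):
        self.terminal_id = terminal_id
        self.inbox = []          # messages delivered to this terminal
        self.peer = None         # set when the server binds two connections

    def send(self, payload):
        # Forward through the bound peer connection, if any.
        if self.peer is not None:
            self.peer.inbox.append(payload)

class RelayServer:
    """Binds a first and a second long connection into one logical link."""
    def __init__(self):
        self.connections = {}

    def connect(self, terminal_id):
        conn = Connection(terminal_id)
        self.connections[terminal_id] = conn
        return conn

    def bind(self, first_id, second_id):
        # On the establish-and-bind instruction, the server creates the
        # second connection if needed and binds it to the first one.
        first = self.connections[first_id]
        second = self.connections.setdefault(second_id, Connection(second_id))
        first.peer, second.peer = second, first

server = RelayServer()
first = server.connect("terminal-1")     # first persistent connection
server.connect("terminal-2")             # second persistent connection
server.bind("terminal-1", "terminal-2")  # terminal-to-terminal link exists
first.send({"path": "000,001"})          # relayed to terminal-2's inbox
```

The identifiers (`terminal-1`, `RelayServer`, the payload shape) are illustrative assumptions; the point is only that the server, not either terminal, holds and pairs the two long connections.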
In step S102, in response to the touch operation by the first user on the video call interface of the first terminal, the first terminal displays in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation, and generates in real time the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory. The touch operation includes, but is not limited to, a tap operation by the user in the video call interface and movement operations in the up, down, left, and right directions. For example, in response to a message triggering operation by the first user during the video call with the second user, the first terminal records in real time, according to the touch operation by the first user on the video call interface of the first terminal, the touch sequence information corresponding to the touch operation, and then displays the touch trajectory in real time on the video call interface of the first terminal, where the trajectory drawn from the touch sequence information substantially coincides with the touch trajectory. In some embodiments, the touch sequence information includes at least one of the following:
1) path information in a predetermined string;
2) touch attribute information in a predetermined string.
For example, the touch sequence information includes predetermined string information, and the predetermined string includes, but is not limited to, JSON and arrays. For the path information, the first terminal presets an initial point position, and then obtains the trajectory coordinates of the current touch trajectory with the initial point position as the starting coordinate; these serve as the path information. The touch attribute information includes, but is not limited to, the width, color, and line thickness of the touch trajectory. The touch sequence information serves as the basis on which the second terminal subsequently presents the touch trajectory.
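Packing a trajectory into such a predetermined string can be sketched as follows. This is a hedged illustration, not the patent's actual encoder: the field names (`xyz`, `color`, `weight`, `path`) follow the JSON-string example given later in this description, and the three-digit relative-coordinate encoding is an assumption chosen to reproduce that example.

```python
import json

def build_touch_sequence(points, origin=(0, 0, 0), color="#ffffff", weight="1px"):
    """Pack sampled touch points into a predetermined JSON string.

    `origin` is the preset initial point; each point is expressed relative
    to it. `color` and `weight` are the touch attribute information.
    """
    ox, oy, oz = origin
    # Assumed encoding: one digit per relative coordinate, e.g. (1,2,0) -> "120".
    rel = ["%d%d%d" % (x - ox, y - oy, z - oz) for x, y, z in points]
    return json.dumps({
        "xyz": "%d,%d,%d" % origin,   # initial point position
        "color": color,               # touch attribute: trajectory color
        "weight": weight,             # touch attribute: line weight
        "path": ",".join(rel),        # path information of the trajectory
    })

seq = build_touch_sequence([(0, 0, 0), (0, 0, 1), (0, 0, 2), (1, 2, 0)])
```

Because the string is self-describing, the receiving terminal needs no out-of-band state to redraw the trajectory; any other serialization (an array, for instance) would work the same way.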
In some embodiments, the method further includes step S105 (not shown). In step S105, the first terminal obtains the voice information of the first user during the video call; in step S102, in response to detecting the touch operation by the first user on the video call interface of the first terminal, the first terminal generates a second touch trajectory corresponding to the touch operation; corrects the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation; displays in real time, on the video call interface of the first terminal, the first touch trajectory corresponding to the touch operation; and generates in real time, based on the first touch trajectory, the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory. For example, during the video call between the first user and the second user, if the first terminal detects a touch operation by the first user on the video call interface of the first terminal, it generates the second touch trajectory corresponding to the touch operation; combining this with the voice information obtained by the first terminal (for example, "circle", "draw a circle", "let me write the character for circle for you"), the first terminal intelligently corrects the circle-like second touch trajectory to generate the first touch trajectory corresponding to the touch operation (for example, a complete circle generated after correcting the second touch trajectory), and displays the first touch trajectory in real time on the video call interface of the first terminal. The first terminal then generates, from this first touch trajectory (for example, the complete circle generated after correction), the touch sequence information corresponding to the touch operation. When the touch trajectory is corrected with the aid of the user's voice information, the trajectory can be presented more completely and vividly, improving the user experience. In some embodiments, correcting the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation includes: extracting keyword information from the voice information; and correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation. In some embodiments, the keyword information includes at least any one of the following: predetermined shape keywords (for example, circle, square, curve); predetermined action keywords (for example, draw, paint); predetermined behavior keywords (for example, draw a circle, draw a square). For example, the first terminal intelligently corrects the trajectory according to the keyword information in the voice information to generate the touch trajectory corresponding to the touch operation. In some embodiments, correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation includes: detecting predetermined noun information in the keyword information; and correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation, where the predetermined noun information matches the first touch trajectory. For example, the first terminal obtains the keyword information from the voice information, then detects whether the keyword information contains predetermined noun information (for example, "ellipse"), and then corrects the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation (for example, adjusting the currently blurred second touch trajectory drawn by the stylus into an "elliptical" first touch trajectory). In some embodiments, correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation includes: determining corresponding trajectory information according to the predetermined noun information; and correcting the second touch trajectory through the trajectory information to generate the first touch trajectory corresponding to the touch operation. The predetermined noun information includes shape information (for example, "ellipse"), object information (for example, "box"), and the like. The first terminal determines the corresponding trajectory information (for example, an elliptical trajectory) according to the predetermined noun information (for example, "ellipse"), and corrects the second touch trajectory with reference to the shape of the elliptical trajectory to generate the first touch trajectory corresponding to the touch operation. When the touch trajectory is corrected with the aid of the user's voice information, the trajectory can be presented more completely and vividly, improving the user experience.
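One possible concrete form of the keyword-driven correction is sketched below. Everything here is a hypothetical assumption layered on the description: the keyword set, the word-splitting, and the circle-fitting rule (center = centroid, radius = mean distance) are all illustrative choices, not the patent's algorithm.

```python
import math

SHAPE_KEYWORDS = {"circle"}  # illustrative predetermined noun information

def correct_trajectory(points, speech_text):
    """Replace a rough second trajectory with an idealized first trajectory
    when the user's speech contains a matching shape keyword."""
    words = set(speech_text.lower().replace(",", " ").split())
    if not (words & SHAPE_KEYWORDS):
        return points  # no matching keyword: keep the second trajectory as-is
    # Fit a circle to the rough points: centroid as center, mean radius.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    # Regenerate a clean, closed circular trajectory (the first trajectory).
    return [(cx + r * math.cos(2 * math.pi * i / 32),
             cy + r * math.sin(2 * math.pi * i / 32)) for i in range(33)]

rough = [(1.1, 0.0), (0.0, 0.9), (-1.0, 0.1), (0.0, -1.0)]
clean = correct_trajectory(rough, "let me draw a circle for you")
```

A production system would presumably run speech recognition first and match against richer shape, action, and behavior keyword lists; the sketch only shows the shape of the correction step.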
In step S103, the first terminal sends the touch sequence information to the second terminal via the persistent connection. With the persistent connection established between the first terminal and the second terminal used by the second user, communication can be kept uninterrupted, ensuring real-time transmission of the touch sequence information at any moment. In some embodiments, in step S103, the first terminal obtains the newly generated touch sequence information using a heartbeat detection rule, and sends the newly generated touch sequence information to the second terminal via the persistent connection. The heartbeat detection rule includes the first terminal obtaining the newly generated touch sequence information once every predetermined time-period threshold; on the premise of obtaining the touch sequence information in real time through the heartbeat detection rule, the first terminal can send the newly generated touch sequence information to the second terminal in real time, so that the second terminal can present the touch trajectory in real time.
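The heartbeat rule above amounts to a periodic "collect-and-ship" loop. The sketch below is an assumed minimal model: the timer and the long connection are simulated (the buffer stands in for "newly generated since the last beat", the `sent` list for the connection), so only the batching logic is shown.

```python
class HeartbeatSender:
    """Every interval, ship touch sequence info generated since the last beat."""

    def __init__(self, interval_s=0.05):
        self.interval_s = interval_s   # predetermined time-period threshold
        self.buffer = []               # items generated since the last beat
        self.sent = []                 # stands in for the long connection

    def on_touch_sequence(self, seq):
        # Called by the UI each time new touch sequence info is generated.
        self.buffer.append(seq)

    def beat(self):
        # Runs once per interval: send only the newly generated items.
        if self.buffer:
            self.sent.extend(self.buffer)
            self.buffer.clear()

sender = HeartbeatSender()
sender.on_touch_sequence('{"path":"000,001"}')
sender.beat()                          # first interval: one new item sent
sender.beat()                          # nothing new: nothing sent
```

In a real client the `beat` method would be driven by a timer thread or event loop at `interval_s`, and `sent` would be a write on the kept-alive socket.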
For example, a first user holds a first terminal, and the first terminal currently maintains a video call connection with a second terminal held by a second user. In response to a tap operation by the first user on a preset button during the video call with the second user, the first terminal establishes a persistent connection between the first terminal and the second terminal used by the second user. In response to the touch operation by the first user on the video call interface of the first terminal, the first terminal records the touch trajectory in real time, displays in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation (for example, the corresponding "line touch trajectory"), and generates in real time the touch sequence information corresponding to the touch operation; for example, the touch sequence information is expressed in the form of a JSON string or an array (for example, {xyz:'0,0,0', color:'#ffffff', weight:'1px', path:'000,001,002,120'}). The first terminal then sends the generated touch sequence information to the second terminal via the persistent connection. The second terminal receives the touch sequence information and, based on it, displays in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information (for example, the corresponding "line touch trajectory").
In some embodiments, the method further includes step S106 (not shown). In step S106, if no touch operation by the first user on the video call interface of the first terminal is detected within a predetermined time threshold, the first terminal hides the first touch trajectory on the video call interface of the first terminal. For example, if no touch operation by the first user on the video call interface of the first terminal is detected within the predetermined time threshold (for example, 2 s) (for example, the user has not touched the video call interface for 2 s), the first terminal confirms that the current touch operation has ended and makes the first touch trajectory gradually fade until it is hidden. This gives the user an intuitive visual experience.
FIG. 4 shows a method for transmitting information during a video call according to one embodiment of the present application, applied to a second terminal; the method includes step S201 and step S202. In step S201, during a video call between the second user using the second terminal and the first user using the first terminal, the second terminal receives, based on the persistent connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal; in step S202, based on the touch sequence information, the second terminal displays in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information.
Specifically, in step S201, during the video call between the second user using the second terminal and the first user using the first terminal, the second terminal receives, based on the persistent connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal. The persistent connection is essentially a long-lived TCP connection kept alive to speed up the delivery of network content; the first terminal sets the HTTP header to Connection: keep-alive to establish the persistent connection with the second terminal, and then sends the touch sequence information to the second terminal via the persistent connection. For example, the touch sequence information is the touch sequence information corresponding to the touch operation, generated by the first terminal from the touch trajectory in response to the touch operation by the first user on the video call interface of the first terminal.
In step S202, based on the touch sequence information, the second terminal displays in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information. The touch sequence information includes path information in a predetermined string and touch attribute information in a predetermined string. The second terminal draws and presents the touch trajectory according to the path and attribute information included in the touch sequence information. This improves the effectiveness and flexibility of information transmission and enhances the user experience.
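On the receiving side, step S202 reduces to parsing the predetermined string and handing the decoded points and attributes to the drawing layer. The sketch below assumes the JSON-string format from the example in this description, including the illustrative one-digit-per-coordinate path encoding; both are assumptions, not a specified wire format.

```python
import json

def render_touch_sequence(seq_json):
    """Decode received touch sequence info into drawable points + attributes."""
    info = json.loads(seq_json)
    # Assumed path encoding: each comma-separated entry is one point, one
    # digit per coordinate (so "120" decodes to the point (1, 2, 0)).
    points = [tuple(int(c) for c in entry) for entry in info["path"].split(",")]
    return {
        "color": info["color"],    # touch attribute: trajectory color
        "weight": info["weight"],  # touch attribute: line weight
        "points": points,          # path information, ready to draw
    }

drawn = render_touch_sequence(
    '{"xyz":"0,0,0","color":"#ffffff","weight":"1px","path":"000,001,002,120"}'
)
```

A real client would feed `drawn["points"]` to a canvas overlay on the video view, stroked with the decoded color and weight, so the second user sees the same trajectory the first user drew.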
For example, a first user holds a first terminal, and the first terminal currently maintains a video call connection with a second terminal held by a second user. In response to a tap operation by the first user on a preset button during the video call with the second user, the first terminal establishes a persistent connection between the first terminal and the second terminal used by the second user. In response to the touch operation by the first user on the video call interface of the first terminal, the first terminal records the touch trajectory in real time, displays in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation (for example, the corresponding "line touch trajectory"), and generates in real time the touch sequence information corresponding to the touch operation; for example, the touch sequence information is expressed in the form of a JSON string or an array (for example, {xyz:'0,0,0', color:'#ffffff', weight:'1px', path:'000,001,002,120'}). The first terminal then sends the generated touch sequence information to the second terminal via the persistent connection. The second terminal receives the touch sequence information and, based on it, displays in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information (for example, the corresponding "line touch trajectory").
In some embodiments, the method further includes step S203 (not shown). In step S203, the second terminal obtains, via the persistent connection, the touch sequence information newly generated by the first terminal; obtains, via the persistent connection, touch state information from the first terminal in real time; and, if the touch state information indicates the message enhancement mode, presents, according to the newly generated touch sequence information, the corresponding newly generated first touch trajectory, the newly generated touch sequence information matching the newly generated first touch trajectory. For example, the first terminal obtains the newly generated touch sequence information using the heartbeat detection rule and sends it to the second terminal via the persistent connection. Meanwhile, the second terminal obtains the touch state information from the first terminal, where the touch state information includes the message enhancement mode (for example, allowing the user to perform touch operations on the video call interface during the video call to display touch trajectories) or an exit-touch-state mode. If the touch state information indicates the message enhancement mode, the second terminal confirms that it can present the corresponding newly generated first touch trajectory according to the newly generated touch sequence information. In this case, the touch trajectory information of the first terminal is transmitted in real time and presented in real time on the second terminal, ensuring synchronization and consistency between the terminals.
FIG. 5 shows a flowchart of a method for transmitting information during a video call according to another embodiment of the present application. A first user holds a first terminal, and the first terminal currently maintains a video call connection with a second terminal held by a second user. In response to a message triggering operation (for example, an editing operation) by the first user during the video call with the second user, the first terminal establishes a persistent connection between the first terminal and the second terminal used by the second user and maintains heartbeat queries, where the persistent connection between the first terminal and the second terminal is set up by an intermediate server. With the persistent connection established, in response to a touch operation (for example, a drawing operation) by the first user on the video call interface of the first terminal, the first terminal records the drawing trajectory in real time (for example, "drawing a circle"), displays in real time, on the video call interface of the first terminal, the circular trajectory corresponding to the drawing operation, and generates in real time the touch sequence information (drawing sequence information) corresponding to the drawing operation, where the drawing sequence information is saved in the form of a JSON string; the first terminal then sends the generated drawing sequence information to the intermediate server via the persistent connection. The second terminal queries the server in real time for the drawing state information; if the drawing state information changes, the second terminal obtains the generated drawing sequence information and draws according to it (for example, "drawing a circle").
FIG. 6 shows a first terminal for transmitting information during a video call according to one embodiment of the present application; the first terminal includes a one-one module 101, a one-two module 102, and a one-three module 103. The one-one module 101 is configured to, in response to detecting a message triggering operation by a first user during a video call with a second user, establish a persistent connection between the first terminal and a second terminal used by the second user; the one-two module 102 is configured to, in response to a touch operation by the first user on the video call interface of the first terminal, display in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation, and generate in real time the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory; the one-three module 103 is configured to send the touch sequence information to the second terminal via the persistent connection.
Specifically, the one-one module 101 is configured to, in response to detecting a message triggering operation by the first user during a video call with the second user, establish the persistent connection between the first terminal and the second terminal used by the second user. For example, the message triggering operation includes, but is not limited to, a triggering operation on a preset button in the video call interface, a predetermined gesture operation (for example, touching up, down, left, or right), and a voice keyword triggering operation during the video call; the first terminal sets the HTTP header to Connection: keep-alive, where the persistent connection is essentially a long-lived TCP connection kept alive to speed up the delivery of network content. In some embodiments, establishing the persistent connection between the first terminal and the second terminal used by the second user includes: sending, to a server corresponding to the first terminal, instruction information for establishing a persistent connection between the first terminal and the second terminal used by the second user, so that the server, according to the instruction information, binds a first persistent connection and a second persistent connection to establish the persistent connection between the first terminal and the second terminal, where the first persistent connection is the persistent connection between the server and the first terminal, and the second persistent connection is the persistent connection between the server and the second terminal. Here, the operations related to establishing the persistent connection between the first terminal and the second terminal used by the second user are the same as or similar to those of the embodiment shown in FIG. 3, and are therefore not repeated but incorporated herein by reference. In some embodiments, the first terminal further includes a one-four module 104 (not shown), configured to establish the first persistent connection between the first terminal and the server, where sending, to the server corresponding to the first terminal, the instruction information for establishing the persistent connection between the first terminal and the second terminal used by the second user includes: sending, to the server via the first persistent connection, an establish-and-bind instruction concerning the second persistent connection, where the server, according to the establish-and-bind instruction, establishes the second persistent connection between the second terminal and the server, and binds the first persistent connection with the second persistent connection to establish the persistent connection between the first terminal and the second terminal. The example of the specific implementation of the one-four module 104 is the same as or similar to the embodiment concerning step S104 in FIG. 3, and is therefore not repeated but incorporated herein by reference.
The one-two module 102 is configured to, in response to the touch operation by the first user on the video call interface of the first terminal, display in real time, on the video call interface of the first terminal, the touch trajectory corresponding to the touch operation, and generate in real time the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory. The touch operation includes, but is not limited to, a tap operation by the user in the video call interface and movement operations in the up, down, left, and right directions. For example, in response to a message triggering operation by the first user during the video call with the second user, the first terminal records in real time, according to the touch operation by the first user on the video call interface of the first terminal, the touch sequence information corresponding to the touch operation, and then displays the touch trajectory in real time on the video call interface of the first terminal, where the trajectory drawn from the touch sequence information substantially coincides with the touch trajectory. In some embodiments, the touch sequence information includes at least one of the following:
1) path information in a predetermined string;
2) touch attribute information in a predetermined string. Here, the operations related to the touch sequence information are the same as or similar to those of the embodiment shown in FIG. 3, and are therefore not repeated but incorporated herein by reference.
In some embodiments, the first terminal further includes a one-five module 105 (not shown), configured to obtain the voice information of the first user during the video call; the one-two module 102 is configured to, in response to detecting the touch operation by the first user on the video call interface of the first terminal, generate a second touch trajectory corresponding to the touch operation; correct the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation; display in real time, on the video call interface of the first terminal, the first touch trajectory corresponding to the touch operation; and generate in real time, based on the first touch trajectory, the touch sequence information corresponding to the touch operation, the touch sequence information matching the touch trajectory. The example of the specific implementation of the one-five module 105 is the same as or similar to the embodiment concerning step S105 in FIG. 3, and is therefore not repeated but incorporated herein by reference. In some embodiments, correcting the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation includes: extracting keyword information from the voice information; and correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation. In some embodiments, correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation includes: detecting predetermined noun information in the keyword information; and correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation, where the predetermined noun information matches the first touch trajectory. In some embodiments, correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation includes: determining corresponding trajectory information according to the predetermined noun information; and correcting the second touch trajectory through the trajectory information to generate the first touch trajectory corresponding to the touch operation. Here, the operations related to these corrections are the same as or similar to those of the embodiment shown in FIG. 3, and are therefore not repeated but incorporated herein by reference.
The one-three module 103 is configured to send the touch sequence information to the second terminal via the persistent connection. With the persistent connection established between the first terminal and the second terminal used by the second user, communication can be kept uninterrupted, ensuring real-time transmission of the touch sequence information at any moment. In some embodiments, the one-three module 103 is configured to obtain the newly generated touch sequence information using a heartbeat detection rule, and send the newly generated touch sequence information to the second terminal via the persistent connection. Here, the operations related to obtaining the newly generated touch sequence information using the heartbeat detection rule are the same as or similar to those of the embodiment shown in FIG. 3, and are therefore not repeated but incorporated herein by reference.
Here, the examples of the specific implementations of the one-one module 101, the one-two module 102, and the one-three module 103 are the same as or similar to the embodiments concerning steps S101, S102, and S103 in FIG. 3, and are therefore not repeated but incorporated herein by reference.
In some embodiments, the first terminal further includes a one-six module 106 (not shown), configured to, if no touch operation by the first user on the video call interface of the first terminal is detected within a predetermined time threshold, hide the first touch trajectory on the video call interface of the first terminal. The specific implementation of the one-six module 106 is the same as or similar to the foregoing embodiment of step S106, and is therefore not repeated but incorporated herein by reference.
FIG. 7 shows a second terminal for transmitting information during a video call according to one embodiment of the present application; the second terminal includes a two-one module 201 and a two-two module 202. The two-one module 201 is configured to, during a video call between a second user using the second terminal and a first user using a first terminal, receive, based on the persistent connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal; the two-two module 202 is configured to, based on the touch sequence information, display in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information.
Specifically, the two-one module 201 is configured to, during the video call between the second user using the second terminal and the first user using the first terminal, receive, based on the persistent connection between the second terminal and the first terminal, the touch sequence information sent by the first terminal. The persistent connection is essentially a long-lived TCP connection kept alive to speed up the delivery of network content; the first terminal sets the HTTP header to Connection: keep-alive to establish the persistent connection with the second terminal, and then sends the touch sequence information to the second terminal via the persistent connection. For example, the touch sequence information is the touch sequence information corresponding to the touch operation, generated by the first terminal from the touch trajectory in response to the touch operation by the first user on the video call interface of the first terminal.
The two-two module 202 is configured to, based on the touch sequence information, display in real time, on the video call interface, the touch trajectory corresponding to the touch sequence information. The touch sequence information includes path information in a predetermined string and touch attribute information in a predetermined string. The second terminal draws and presents the touch trajectory according to the path and attribute information included in the touch sequence information. This improves the effectiveness and flexibility of information transmission and enhances the user experience.
Here, the examples of the specific implementations of the two-one module 201 and the two-two module 202 are the same as or similar to the embodiments concerning steps S201 and S202 in FIG. 4, and are therefore not repeated but incorporated herein by reference.
In some embodiments, the second terminal further includes a two-three module 203 (not shown), configured to obtain, via the persistent connection, the touch sequence information newly generated by the first terminal; obtain, via the persistent connection, touch state information from the first terminal in real time; and, if the touch state information indicates the message enhancement mode, present, according to the newly generated touch sequence information, the corresponding newly generated first touch trajectory, the newly generated touch sequence information matching the newly generated first touch trajectory. The specific implementation of the two-three module 203 is the same as or similar to the foregoing embodiment of step S203, and is therefore not repeated but incorporated herein by reference.
FIG. 8 shows a system device for transmitting information during a video call according to one embodiment of the present application, wherein, in the system:
in response to detecting a message triggering operation by a first user during a video call with a second user, the first terminal establishes a persistent connection between the first terminal and a second terminal used by the second user;
in response to a touch operation by the first user on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation is displayed in real time on the video call interface of the first terminal, the first terminal generates in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory, and sends the touch sequence information to the second terminal via the persistent connection;
the second terminal receives the touch sequence information sent by the first terminal and, based on the touch sequence information, displays in real time, on the video call interface, the first touch trajectory corresponding to the touch sequence information.
In addition to the methods and devices described in the above embodiments, the present application further provides a computer-readable storage medium storing computer code that, when executed, causes the method according to any of the preceding items to be performed.
The present application further provides a computer program product that, when executed by a computer device, causes the method according to any of the preceding items to be performed.
The present application further provides a computer device, the computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any of the preceding items.
FIG. 9 shows an exemplary system that can be used to implement the various embodiments described in the present application.
As shown in FIG. 9, in some embodiments, the system 300 can serve as any one of the devices in the described embodiments. In some embodiments, the system 300 may include one or more computer-readable media (for example, the system memory or the NVM/storage device 320) having instructions, and one or more processors (for example, the processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules and thereby perform the actions described in the present application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. The memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, the system memory 315 may include any suitable volatile memory, for example, a suitable DRAM. In some embodiments, the system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to the NVM/storage device 320 and the communication interface(s) 325.
For example, the NVM/storage device 320 may be used to store data and/or instructions. The NVM/storage device 320 may include any suitable non-volatile memory (for example, flash memory) and/or may include any suitable non-volatile storage device(s) (for example, one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).
The NVM/storage device 320 may include storage resources that are physically part of the device on which the system 300 is installed, or it may be accessible by the device without being part of the device. For example, the NVM/storage device 320 may be accessed over a network via the communication interface(s) 325.
The communication interface(s) 325 may provide an interface for the system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may communicate wirelessly with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers (for example, the memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system-on-chip (SoC).
In various embodiments, the system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, the system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments, the system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touchscreen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware; for example, it may be implemented using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk, or similar devices. In addition, some steps or functions of the present application may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform each step or function.
In addition, a part of the present application may be applied as a computer program product, for example computer program instructions which, when executed by a computer, can, through the operation of the computer, invoke or provide the method and/or technical solution according to the present application. Those skilled in the art should understand that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, and the like; correspondingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executing the instructions, or the computer compiling the instructions and then executing the corresponding compiled program, or the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding installed program. Here, the computer-readable medium may be any available computer-readable storage medium or communication medium accessible by a computer.
Communication media include media whereby communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another. Communication media may include guided transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media. Computer-readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium (such as a carrier wave or a similar mechanism embodied as part of spread-spectrum technology). The term "modulated data signal" refers to a signal one or more of whose characteristics have been altered or set in such a way as to encode information in the signal. The modulation may be an analog, digital, or hybrid modulation technique.
By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, magnetic tapes, CDs, DVDs); and other media now known or developed in the future capable of storing computer-readable information/data for use by a computer system.
Here, an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, from whatever point of view, the embodiments should be regarded as exemplary and non-limiting; the scope of the present application is defined by the appended claims rather than by the above description, and it is therefore intended that all changes falling within the meaning and scope of equivalents of the claims be embraced in the present application. No reference sign in the claims should be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Multiple units or devices recited in a device claim may also be implemented by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.

Claims (15)

  1. A method for transmitting information during a video call, applied to a first terminal, wherein the method comprises:
    in response to detecting a message triggering operation by a first user during a video call with a second user, establishing a persistent connection between the first terminal and a second terminal used by the second user;
    in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, and generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory;
    sending the touch sequence information to the second terminal via the persistent connection.
  2. The method according to claim 1, wherein sending the touch sequence information to the second terminal via the persistent connection comprises:
    obtaining the newly generated touch sequence information using a heartbeat detection rule; and sending the newly generated touch sequence information to the second terminal via the persistent connection.
  3. The method according to claim 1, wherein establishing the persistent connection between the first terminal and the second terminal used by the second user comprises:
    sending, to a server corresponding to the first terminal, instruction information for establishing a persistent connection between the first terminal and the second terminal used by the second user, so that the server, according to the instruction information, binds a first persistent connection and a second persistent connection to establish the persistent connection between the first terminal and the second terminal, wherein the first persistent connection is the persistent connection between the server and the first terminal, and the second persistent connection is the persistent connection between the server and the second terminal.
  4. The method according to claim 3, wherein the method further comprises:
    establishing the first persistent connection between the first terminal and the server;
    wherein sending, to the server corresponding to the first terminal, the instruction information for establishing the persistent connection between the first terminal and the second terminal used by the second user comprises:
    sending, to the server via the first persistent connection, an establish-and-bind instruction concerning the second persistent connection, wherein the server, according to the establish-and-bind instruction, establishes the second persistent connection between the second terminal and the server, and binds the first persistent connection with the second persistent connection to establish the persistent connection between the first terminal and the second terminal.
  5. The method according to any one of claims 1 to 4, wherein the touch sequence information comprises at least one of the following:
    path information in a predetermined string;
    touch attribute information in a predetermined string.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    obtaining voice information of the first user during the video call;
    wherein, in response to the touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, the first touch trajectory corresponding to the touch operation, and generating in real time the touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory, comprises:
    in response to detecting the touch operation by the first user on the video call interface of the first terminal, generating a second touch trajectory corresponding to the touch operation;
    correcting the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation;
    displaying in real time, on the video call interface of the first terminal, the first touch trajectory corresponding to the touch operation, and generating in real time, based on the first touch trajectory, the touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory.
  7. The method according to claim 6, wherein correcting the second touch trajectory according to the voice information to generate the first touch trajectory corresponding to the touch operation comprises:
    extracting keyword information from the voice information;
    correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation.
  8. The method according to claim 7, wherein correcting the second touch trajectory according to the keyword information to generate the first touch trajectory corresponding to the touch operation comprises:
    detecting predetermined noun information in the keyword information;
    correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation, wherein the predetermined noun information matches the first touch trajectory.
  9. The method according to claim 8, wherein correcting the second touch trajectory according to the predetermined noun information to generate the first touch trajectory corresponding to the touch operation comprises:
    determining corresponding trajectory information according to the predetermined noun information;
    correcting the second touch trajectory through the trajectory information to generate the first touch trajectory corresponding to the touch operation.
  10. The method according to any one of claims 1 to 9, wherein the method further comprises:
    if no touch operation by the first user on the video call interface of the first terminal is detected within a predetermined time threshold, hiding the first touch trajectory on the video call interface of the first terminal.
  11. A method for transmitting information during a video call, applied to a second terminal, wherein the method comprises:
    during a video call between a second user using the second terminal and a first user using a first terminal, receiving, based on the persistent connection between the second terminal and the first terminal, touch sequence information sent by the first terminal;
    based on the touch sequence information, displaying in real time, on the video call interface, the first touch trajectory corresponding to the touch sequence information.
  12. The method according to claim 11, wherein the method further comprises:
    obtaining, via the persistent connection, touch sequence information newly generated by the first terminal;
    obtaining, via the persistent connection, touch state information from the first terminal in real time;
    if the touch state information indicates a message enhancement mode, presenting, according to the newly generated touch sequence information, the corresponding newly generated first touch trajectory, the newly generated touch sequence information matching the newly generated first touch trajectory.
  13. A method for transmitting information during a video call, wherein the method comprises:
    in response to detecting a message triggering operation by a first user during a video call with a second user, the first terminal establishing a persistent connection between the first terminal and a second terminal used by the second user;
    in response to a touch operation by the first user on the video call interface of the first terminal, displaying in real time, on the video call interface of the first terminal, a first touch trajectory corresponding to the touch operation, the first terminal generating in real time touch sequence information corresponding to the touch operation, the touch sequence information matching the first touch trajectory, and sending the touch sequence information to the second terminal via the persistent connection;
    the second terminal receiving the touch sequence information sent by the first terminal and, based on the touch sequence information, displaying in real time, on the video call interface, the first touch trajectory corresponding to the touch sequence information.
  14. A device for transmitting information during a video call, wherein the device comprises:
    a processor; and
    a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method according to any one of claims 1 to 12.
  15. A computer-readable medium storing instructions that, when executed, cause a system to perform the operations of the method according to any one of claims 1 to 12.
PCT/CN2020/102254 2019-08-27 2020-07-16 Method and device for transmitting information during a video call WO2021036561A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910798097.8 2019-08-27
CN201910798097.8A CN110536094A (zh) 2019-08-27 2019-12-03 上海盛付通电子支付服务有限公司 Method and device for transmitting information during a video call

Publications (1)

Publication Number Publication Date
WO2021036561A1 true WO2021036561A1 (zh) 2021-03-04

Family

ID=68664541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102254 WO2021036561A1 (zh) 2019-08-27 2020-07-16 一种在视频通话过程中传递信息的方法与设备

Country Status (2)

Country Link
CN (1) CN110536094A (zh)
WO (1) WO2021036561A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110536094A (zh) 2019-08-27 2019-12-03 上海盛付通电子支付服务有限公司 Method and device for transmitting information during a video call

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529934A (zh) * 2012-06-29 2014-01-22 三星电子株式会社 Method and apparatus for processing multiple inputs
US20160211005A1 (en) * 2008-09-12 2016-07-21 At&T Intellectual Property I, L.P. Providing sketch annotations with multimedia programs
CN106713968A (zh) * 2016-12-27 2017-05-24 北京奇虎科技有限公司 Live-streaming data display method and device
CN107484033A (zh) * 2017-09-15 2017-12-15 维沃移动通信有限公司 Video call method and mobile terminal
CN107835464A (zh) * 2017-09-28 2018-03-23 努比亚技术有限公司 Video call window picture processing method, terminal, and computer-readable storage medium
CN108156502A (zh) * 2018-01-05 2018-06-12 创盛视联数码科技(北京)有限公司 Method for improving synchronization of paintbrush strokes and text in live video streaming
CN108966031A (zh) * 2017-05-18 2018-12-07 腾讯科技(深圳)有限公司 Method and device for controlling playback content in a video session, and electronic device
CN110536094A (zh) * 2019-08-27 2019-12-03 上海盛付通电子支付服务有限公司 Method and device for transmitting information during a video call

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101959050A (zh) * 2010-09-01 2011-01-26 宇龙计算机通信科技(深圳)有限公司 Method, system and mobile terminal for transmitting information during a video call
CN103269346A (zh) * 2013-06-04 2013-08-28 温才燚 Remote interaction system for teaching
CN104754279B (zh) * 2013-12-30 2019-03-15 阿里巴巴集团控股有限公司 Method and system for implementing a video call


Also Published As

Publication number Publication date
CN110536094A (zh) 2019-12-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20855930

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20855930

Country of ref document: EP

Kind code of ref document: A1