CN113709577B - Video session method - Google Patents

Video session method

Info

Publication number
CN113709577B
CN113709577B
Authority
CN
China
Prior art keywords
application
video
data
mobile terminal
video session
Prior art date
Legal status
Active
Application number
CN202010435958.9A
Other languages
Chinese (zh)
Other versions
CN113709577A (en)
Inventor
李斌
奚驰
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010435958.9A
Publication of CN113709577A
Application granted
Publication of CN113709577B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)

Abstract

The application relates to a video session method. The method comprises: participating in a video session through a target application; when the screen display of the mobile terminal includes the application display of the target application, capturing the application display of the target application to obtain video data; and transmitting the video data to a terminal corresponding to a target member participating in the video session, so that the application display of the target application is shared while participating in the video session. With this method, the application display can be shared during a video session while the user's privacy is protected.

Description

Video session method
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video session method.
Background
With the development of communication technology, video session technology has emerged, that is, technology for transmitting voice and images between different terminals over the Internet. During a video session, a user usually captures video through a camera and transmits it to the other members participating in the session; capturing the screen of a mobile terminal for sharing is rarely involved.
In some situations, however, the screen of the mobile terminal does need to be captured for sharing, for example when a customer service agent guides a user through operating the phone. The traditional approach is to capture the entire screen of the mobile terminal directly, so the whole screen, including the notification bar, is recorded; this easily leaks the user's private information and poses a security risk.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a video session method, apparatus, mobile terminal, and storage medium capable of improving user information security.
A video session method, the method comprising:
participating in a video session through a target application;
when the screen display of the mobile terminal includes the application display of the target application, capturing the application display of the target application to obtain video data;
and transmitting the video data to a terminal corresponding to a target member participating in the video session, so that the application display of the target application is shared while participating in the video session.
A video session apparatus, the apparatus comprising:
a first video session module, configured to participate in the video session through the target application;
a capture module, configured to capture the application display of the target application to obtain video data when the screen display of the mobile terminal includes the application display of the target application;
and a transmission module, configured to transmit the video data to a terminal corresponding to a target member participating in the video session, so that the application display of the target application is shared while participating in the video session.
A mobile terminal comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
participating in a video session through a target application;
when the screen display of the mobile terminal includes the application display of the target application, capturing the application display of the target application to obtain video data;
and transmitting the video data to a terminal corresponding to a target member participating in the video session, so that the application display of the target application is shared while participating in the video session.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
participating in a video session through a target application;
when the screen display of the mobile terminal includes the application display of the target application, capturing the application display of the target application to obtain video data;
and transmitting the video data to a terminal corresponding to a target member participating in the video session, so that the application display of the target application is shared while participating in the video session.
According to the video session method, apparatus, mobile terminal, and storage medium, the mobile terminal conducts a video session, and if the current screen display of the mobile terminal includes the application display of the target application, the mobile terminal can capture the application display of the target application directly to obtain video data and transmit the video data to the terminals of the target members participating in the video session. In this way, during the video session the sharer can share the screen content of the target application with the target members without revealing any other information of the user, which protects the user's private information and allows screen content to be shared safely while participating in the video session.
A video session method, the method comprising:
participating in a video session through a target application;
receiving video data sent by a mobile terminal corresponding to a member participating in the video session, the video data being obtained by the mobile terminal capturing the application display of the target application when the screen display of the mobile terminal includes the application display of the target application;
and rendering and displaying the video frames in the video data frame by frame while participating in the video session.
A video session apparatus, the apparatus comprising:
a second video session module, configured to participate in the video session through the target application;
a receiving module, configured to receive video data sent by the mobile terminal corresponding to a member participating in the video session, the video data being obtained by the mobile terminal capturing the application display of the target application when the screen display of the mobile terminal includes the application display of the target application;
and a display module, configured to render and display the video frames in the video data frame by frame while participating in the video session.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
participating in a video session through a target application;
receiving video data sent by a mobile terminal corresponding to a member participating in the video session, the video data being obtained by the mobile terminal capturing the application display of the target application when the screen display of the mobile terminal includes the application display of the target application;
and rendering and displaying the video frames in the video data frame by frame while participating in the video session.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
participating in a video session through a target application;
receiving video data sent by a mobile terminal corresponding to a member participating in the video session, the video data being obtained by the mobile terminal capturing the application display of the target application when the screen display of the mobile terminal includes the application display of the target application;
and rendering and displaying the video frames in the video data frame by frame while participating in the video session.
According to the video session method, apparatus, computer device, and storage medium, a participant joins the video session through the target application, receives video data sent by the mobile terminal of another member participating in the session, and renders and displays the video frames in the received data frame by frame. The video data is obtained by capturing the application display of the target application only when the screen display of the mobile terminal includes that application display. In other words, the video data contains only screen content from the target application and no information from the notification bar or other applications, so no other information of the sharer is revealed; the user's private information is protected and the sharer can safely share screen content while participating in the video session.
Drawings
FIG. 1 is an application environment diagram of a video session method in one embodiment;
FIG. 2 is a flow diagram of a video session method in one embodiment;
FIG. 3(A) is the screen display of a mobile terminal corresponding to a sharer while participating in a video session in one embodiment;
FIG. 3(B) is the application display captured by a mobile terminal corresponding to a sharer during a video session in one embodiment;
FIG. 4 is an exemplary diagram of an application display with tag information added in one embodiment;
FIG. 5 is a flow chart of a video session method according to another embodiment;
FIG. 6 is a schematic diagram of an application of a video session method in one embodiment;
FIG. 7 is a block diagram of a video session apparatus in one embodiment;
FIG. 8 is a block diagram of a video session apparatus in another embodiment;
FIG. 9 is a block diagram of a video session apparatus in yet another embodiment;
FIG. 10 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The video session method provided by the application can be applied to the application environment shown in FIG. 1, where the first terminal 110 communicates with the server 120 over a network and the second terminal 130 communicates with the server 120 over a network. The first terminal 110 is a mobile terminal and may be, but is not limited to, a smartphone, tablet computer, or portable wearable device. The second terminal 130 may be a mobile terminal or a desktop terminal, such as, but not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. The server 120 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
The first terminal 110 participates in the video session through the target application, and when its current screen display includes the application display of the target application, it captures the application display of the target application to obtain video data. The first terminal 110 sends the video data to the server 120, the server 120 forwards it to the second terminal 130 corresponding to a target member participating in the video session, and the second terminal 130 renders and displays the video frames in the received data frame by frame. There may be one or more second terminals, which is not limited in the embodiments of the present application. In other application scenarios the first terminal may instead act as a receiving terminal, and when the second terminal is a mobile terminal it may act as a sharing terminal and send video data; this is likewise not limited in the embodiments of the present application.
In one embodiment, as shown in FIG. 2, a video session method is provided. The method is applied to a mobile terminal, which may be, for example, the first terminal 110 in FIG. 1, and comprises the following steps:
Step S202: participating in the video session through the target application.
The target application is an application program that provides video session functions, such as an instant messaging application, a conversation application, a live-streaming application, or a video application. A video session is a form of video interaction between at least two user identifiers, and may specifically be a video call or a video conference. Through the target application, members participating in the video session can display the video data captured by the terminals of other members and transmit locally captured video data to the terminals of those members, thereby realizing a video session between at least two users.
It should be noted that video sessions can be classified, by the number of participating user identifiers, into two-person and multi-person video sessions: a session joined by only two user identifiers is a two-person video session, while a session joined by more than two is a multi-person video session. A user identifier uniquely identifies a user member; it may be a character string composed of digits, letters, and/or symbols, such as a user account or a user's mobile phone number.
Specifically, the target application runs on the mobile terminal, and the mobile terminal can display a video session entry through the target application; when a trigger operation on the video session entry is detected, the mobile terminal can join the corresponding video session using the user identifier currently logged in to the target application. The trigger operation may be a touch operation, a cursor operation, a key operation, or a voice operation. The touch operation may be a touch click, touch press, or touch slide, and may be a single-point or multi-point touch operation; the cursor operation may be clicking or pressing with a cursor; and the key operation may be a virtual key operation, a physical key operation, or the like.
In one embodiment, the target application runs on the mobile terminal, and the first user can log in to it on the mobile terminal with a corresponding user account. The first user mentioned in the embodiments of the present application is the user using the mobile terminal, and the second user is the user using the second terminal. The first user may converse with the second user through the target application. When the mobile terminal receives a video session invitation instruction sent by the second terminal, it can display the corresponding video session entry; when the first user taps the entry, the mobile terminal can establish a video session link with the second terminal, over which the first and second users conduct the video session.
Taking the case where the second user is a customer service agent as an example, the customer service agent can invite the first user to join the video session. Specifically, the first user logs in to the target application on the mobile terminal, and the customer service agent logs in to the target application on a computer device. The agent can send a link address for joining the video session, that is, an entry into the session, to the first user's mobile terminal through the target application. When the first user taps the link address, the mobile terminal is triggered to establish a video session link with the computer device, realizing the video session between the first user and the customer service agent.
In one embodiment, the target application runs on the mobile terminal, and the first user can log in to it with a corresponding user account. The user account may belong to various groups, and the mobile terminal can receive a video session invitation instruction triggered by a member of one of those groups and thereby join that group's video session, which may in particular be an online video conference.
In one embodiment, after the first user logs in to the target application with the user account, a video session invitation instruction can be sent to the second terminals of other users, and a video session link is established when a connection response to the invitation is received. In other words, the mobile terminal may be either the initiator or a recipient of the video session, which is not limited in the embodiments of the present application.
In one embodiment, multiple terminals in the video session may share video in real time; alternatively, only one primary sharing device shares video while the other terminals participating in the session only receive and play the video data.
In one embodiment, the mobile terminal participates in the video session as the primary sharing device, that is, the first user corresponding to the mobile terminal is the presenting user, who may also be called the host. The other users participating in the video session are participants and can view the application display shared by the mobile terminal.
In another embodiment, the primary sharing device of the video session is not the mobile terminal; the mobile terminal participates as an ordinary participant and watches the video shared by the primary sharing device. When the mobile terminal detects a video sharing instruction, it can trigger sharing of the application display within the video session. The video sharing instruction is an instruction that triggers the mobile terminal to capture and share the application display; it may be triggered by the user through the mobile terminal, by the primary sharing device, or by the server, which is not limited in the embodiments of the present application. A specific implementation of this part is described in detail in the following embodiments.
In one embodiment, step S202, that is, entering the video session through the target application, specifically includes: obtaining a video session invitation instruction through the target application; displaying a response option through the target application in response to the invitation instruction; and entering the video session when a trigger operation occurs on the response option that triggers a connection response.
Specifically, the mobile terminal may receive a video session invitation instruction sent by the second terminal. The second terminal may be the terminal corresponding to a contact associated with the currently logged-in user account. It will be appreciated that when the second terminal initiates a video session invitation to the mobile terminal corresponding to a contact or group member, the mobile terminal receives the video session invitation instruction from the second terminal.
Further, on receiving the video session invitation instruction, the mobile terminal can display response options on the current page. The response options may include one that triggers a connection response and one that triggers a rejection response. The mobile terminal detects trigger operations on these options, and when the option that triggers a connection response is triggered, the mobile terminal sends the second terminal a connection response for establishing a video session link. The mobile terminal can establish the link based on the respective user identifiers of the two terminals, and the first user holding the mobile terminal and the second user holding the second terminal can then conduct the video session over it.
In the above embodiment, the response option is presented by the target application in response to the video session invitation instruction, and the video session can be entered quickly when the option that triggers a connection response is triggered. Video session invitations initiated by other users can thus be accepted conveniently.
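To make this flow concrete, here is a minimal Kotlin sketch of handling an invitation with accept/decline response options. The SessionInvite, SessionClient, and InviteUi types and their method names are illustrative assumptions, not APIs defined by the patent.

```kotlin
// Minimal sketch (assumption): handling a video session invitation in the target
// application. All type and method names here are illustrative placeholders.
data class SessionInvite(val sessionId: String, val fromUserId: String)

// Interfaces assumed for illustration only.
interface SessionClient {
    fun sendConnectResponse(sessionId: String)
    fun sendRejectResponse(sessionId: String)
    fun joinSession(sessionId: String)
}
interface InviteUi {
    fun showResponseOptions(onAccept: () -> Unit, onDecline: () -> Unit)
}

class InviteHandler(private val client: SessionClient, private val ui: InviteUi) {
    fun onInviteReceived(invite: SessionInvite) {
        // Show "accept" and "decline" response options for the invitation.
        ui.showResponseOptions(
            onAccept = {
                // Send a connection response and join the video session.
                client.sendConnectResponse(invite.sessionId)
                client.joinSession(invite.sessionId)
            },
            onDecline = { client.sendRejectResponse(invite.sessionId) }
        )
    }
}
```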
Step S204: when the screen display of the mobile terminal includes the application display of the target application, capturing the application display of the target application to obtain video data.
The screen display of the mobile terminal is the content currently presented on its display screen. The application display is the picture rendered from the page data provided by the target application, typically an application page shown while the target application runs in the foreground. Specifically, once the mobile terminal has joined the video session, it may detect whether the current screen display includes the application display of the target application, that is, whether the target application is running in the foreground; if so, it may call the system interface to capture the currently presented application display and assemble the frames captured in real time into video data.
It should be noted that the screen display including the application display of the target application may mean that a single application page of the target application is shown, or that different application pages of the target application are shown in succession, as long as the pages shown belong to the target application.
In one embodiment, while an application display belonging to the target application is shown on the screen during the video session, the mobile terminal may capture the screen content of the target application in real time at a certain frequency, that is, capture the currently displayed application display belonging to the target application, and then assemble the captured frames into video data in capture order.
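As an illustration of capturing only while the target application is in the foreground and assembling frames at a fixed frequency, here is a minimal Kotlin sketch; the captureAppDisplay and onFrameCaptured hooks are assumed placeholders rather than anything specified by the patent.

```kotlin
// Minimal sketch (assumption): capture the target application's display at a fixed
// frequency only while the app is in the foreground; frames are handed to a consumer
// in capture order. The hooks passed to the constructor are illustrative.
import android.graphics.Bitmap
import android.os.Handler
import android.os.Looper

class AppDisplayCapturer(
    private val frameRateHz: Int,
    private val isAppInForeground: () -> Boolean,
    private val captureAppDisplay: () -> Bitmap?,            // e.g. view-to-bitmap capture
    private val onFrameCaptured: (frame: Bitmap, timestampMs: Long) -> Unit
) {
    private val handler = Handler(Looper.getMainLooper())
    private var running = false

    private val tick = object : Runnable {
        override fun run() {
            if (!running) return
            if (isAppInForeground()) {
                // Capture only when the target application's display is on screen.
                captureAppDisplay()?.let { onFrameCaptured(it, System.currentTimeMillis()) }
            }
            handler.postDelayed(this, 1000L / frameRateHz)
        }
    }

    fun start() { running = true; handler.post(tick) }
    fun stop() { running = false; handler.removeCallbacks(tick) }
}
```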
In one embodiment, the screen display of the mobile terminal including the application display of the target application may mean that the application display occupies the entire display screen, or only part of it. The former covers, for example, the target application running in the foreground with its application display filling the whole screen, or filling everything except a fixed area at the top of the screen. The latter covers, for example, the application display of the target application being shown side by side with, or overlapping, the displays of other applications, such as when the target application and another application are shown in split screen.
In one embodiment, when it is detected that the screen display of the mobile terminal no longer includes the application display of the target application, the capture operation is stopped. Specifically, when the mobile terminal detects that the current screen display does not include the application display of the target application, that is, the target application has exited or has been moved to the background, the mobile terminal may stop capturing the application display.
In one embodiment, when the mobile terminal stops capturing the application display, it may trigger a prompt, such as showing a pop-up window with text like "screen capture cannot be performed at the moment" to prompt the user to bring the target application back to the foreground. While capture is stopped, the display of the second terminal may continue to show the last captured frame, or may show a preset image such as an all-black or all-white image, which is not limited in the embodiments of the present application.
Therefore, when the target application is not running in the foreground, the mobile terminal does not capture the current screen content, which protects the sharer's privacy well: no information other than the content of the target application is transmitted to other terminals.
In one embodiment, when the first terminal joins the video session, the mobile terminal may first invoke the front-facing camera and share the images it captures. When the mobile terminal then detects a screen capture instruction, it can check whether the current screen display includes the application display of the target application; if it does, the mobile terminal captures the application display of the target application and obtains video data from the captured frames.
In one embodiment, the mobile terminal may present a screen capture control through the target application, displayed for example as an icon, text, or button. When a trigger operation on the screen capture control is detected, the mobile terminal can check whether the current screen display includes the application display of the target application, and if so, capture the screen content corresponding to that application display.
In one embodiment, when the current screen display of the mobile terminal includes the application display of the target application, the mobile terminal may record the application display of the target application to obtain the corresponding video data, and stop recording when it detects that the screen display no longer includes the application display of the target application.
Referring to FIG. 3(A) and FIG. 3(B): FIG. 3(A) is the screen display of the mobile terminal corresponding to the sharer while participating in a video session in one embodiment, and FIG. 3(B) is the application display captured by that mobile terminal during the session. As shown in FIG. 3(A), in addition to the application display 301 of the target application, the display screen of the mobile terminal shows notification information 302 in the notification bar. When the mobile terminal captures screen content, it captures only the content of the target application; as shown in FIG. 3(B), it captures only the application display 301 of the target application. The application display captured in real time is then shared as video data with the terminals of the other target members participating in the video session, and when a target member's terminal receives the video data it displays each application display frame by frame, with the effect shown in FIG. 3(B).
Step S206: transmitting the video data to the terminal corresponding to the target member participating in the video session, so that the application display of the target application is shared while participating in the video session.
Specifically, the mobile terminal may assemble the captured frames of the application display into video data and then transmit it. During the video session, whenever the screen display of the mobile terminal includes the application display of the target application, the mobile terminal can capture the currently displayed application display in real time at a preset frequency and form the captured frames into video data, which it then shares with the target members participating in the video session.
In one embodiment, the mobile terminal may determine the member identifiers of the target members participating in the video session and send the video data together with those member identifiers to the server. After receiving them, the server forwards the video data to the terminals corresponding to the member identifiers. A target member may be any member participating in the video session other than the first user; a member identifier uniquely identifies each target member and may be, for example, a letter, a number, or a character string.
In one embodiment, after the mobile terminal has established a video session link with the second terminal, it may transmit the video data to the second terminal over that link.
In one embodiment, the mobile terminal may upload the video data directly to the server, which forwards it to each second terminal participating in the video session. The mobile terminal may also encode and compress the video data before transmission, reducing the network resources occupied during transmission and improving transmission efficiency.
According to the video session method above, the mobile terminal conducts a video session, and if during the session the current screen display of the mobile terminal includes the application display of the target application, the mobile terminal can capture the application display of the target application directly to obtain video data and transmit it to the terminals of the target members participating in the session. In this way, the sharer can share the screen content of the target application with the target members without revealing any other information of the user, which protects the user's private information and allows screen content to be shared safely while participating in the video session.
In one embodiment, step S204, that is, capturing the application display of the target application to obtain video data when the screen display of the mobile terminal includes the application display of the target application, includes: obtaining application parameters corresponding to the target application when the screen display of the mobile terminal includes the application display of the target application; and, according to the application parameters, capturing the application display of the target application by calling a screen capture interface through the target application, to obtain video data.
Specifically, when the screen display of the mobile terminal includes the application display of the target application, the mobile terminal may obtain the application parameters corresponding to the target application and, when calling the screen capture interface, capture only the application display of the target application according to those parameters. The mobile terminal then assembles the frames captured in real time into video data. An application parameter is a parameter associated with the target application, such as the application identifier of the target application, the page identifier of the application page currently displayed, the page data of that application page, or the display position of the application page on the screen.
In one embodiment, when the screen display of the mobile terminal includes the application display of the target application, that is, the target application is running in the foreground, the mobile terminal may call the screen capture interface, locate the running target application by the application identifier in the application parameters, and determine the display position of the application page on the display screen, for example the positions of its upper-left and lower-right corners. The mobile terminal may then convert the view of the application page (the page view belonging to the target application) into an image according to that display position; in other words, it captures the application display of the target application through the screen capture interface.
In one embodiment, the screen capture interface may be a view capture interface. When an application page of the target application is shown on the display screen, the mobile terminal can locate the display area of the page on the screen; the position coordinates of this area are the application parameters. The display area may be determined by the four corner points of the application page. The mobile terminal can call the view capture interface from the target application's own code and convert the view of the application page into an image according to its display area, obtaining the corresponding application display. When the application page is not shown on the screen, the mobile terminal cannot obtain the corresponding application parameters, the view capture interface cannot capture the page view, and capture of the application display stops.
In one embodiment, the screen capture interface may be an image capture interface, which can capture an image of the entire display screen or of only part of it. When an application page of the target application is shown on the display screen, the mobile terminal can locate the display area of the page on the screen; the position coordinates of this area are the application parameters. The mobile terminal can then call the image capture interface from the target application's own code and capture an image of the display area given by those coordinates, obtaining the corresponding application display. When the application page is not shown on the screen, the mobile terminal cannot obtain valid application parameters and the image capture interface cannot capture an image.
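As one possible realization of the view capture interface on Android, the following Kotlin sketch renders only the target application's page view into a bitmap, so the notification bar and other applications can never appear in the frame; the patent does not prescribe these exact calls.

```kotlin
// Minimal sketch (assumption): a "view capture interface" that draws only the target
// application's page view into a bitmap.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View

fun capturePageView(pageView: View): Bitmap? {
    // The display area is determined by the page view's own bounds (its corner points).
    if (pageView.width == 0 || pageView.height == 0) return null  // page not laid out / not shown
    val bitmap = Bitmap.createBitmap(pageView.width, pageView.height, Bitmap.Config.ARGB_8888)
    // Draw the page view hierarchy into the bitmap; views outside it are never touched.
    pageView.draw(Canvas(bitmap))
    return bitmap
}
```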
In one embodiment, to ensure the information security of the sharer during the video session, the mobile terminal invokes the system interface to capture screen content from the target application only when the application display of the target application is shown on the display screen of the mobile terminal, that is, only the currently displayed application display of the target application is captured. The screen capture interface may also differ between operating systems: on iOS, for example, the mobile terminal may call the drawViewHierarchyInRect API (Application Programming Interface) to capture the application display, while on Android it may call the MediaProjectionManager API.
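For the Android route mentioned above, the following Kotlin sketch shows one way to obtain screen-capture permission through MediaProjectionManager and mirror the display into a Surface (for example an encoder input surface). Consistent with the embodiments above, capture would be started only while the target application is in the foreground and stopped otherwise; the request-code handling and naming here are illustrative assumptions.

```kotlin
// Minimal Android sketch (assumption): MediaProjection-based capture into a Surface.
// Start/stop wiring around foreground state is left to the caller.
import android.app.Activity
import android.content.Context
import android.content.Intent
import android.hardware.display.DisplayManager
import android.hardware.display.VirtualDisplay
import android.media.projection.MediaProjection
import android.media.projection.MediaProjectionManager
import android.view.Surface

class ProjectionCapture(private val activity: Activity) {
    private val manager = activity.getSystemService(Context.MEDIA_PROJECTION_SERVICE)
            as MediaProjectionManager
    private var projection: MediaProjection? = null
    private var display: VirtualDisplay? = null

    fun requestPermission(requestCode: Int) {
        activity.startActivityForResult(manager.createScreenCaptureIntent(), requestCode)
    }

    // Call from onActivityResult once the user has granted capture permission.
    fun start(resultCode: Int, data: Intent, surface: Surface, width: Int, height: Int, dpi: Int) {
        projection = manager.getMediaProjection(resultCode, data)
        display = projection?.createVirtualDisplay(
            "app-share", width, height, dpi,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, surface, null, null
        )
    }

    // Stop capturing, e.g. when the target application leaves the foreground.
    fun stop() {
        display?.release(); display = null
        projection?.stop(); projection = null
    }
}
```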
In the above embodiment, by calling the screen capture interface with the application parameters of the target application, only the currently displayed application display of the target application is captured. This prevents other information shown on the mobile terminal from being captured, protects the user's privacy during the video session, and improves its security.
In one embodiment, step S204, that is, capturing the application display of the target application to obtain video data when the screen display of the mobile terminal includes the application display of the target application, includes: determining the application page corresponding to the currently displayed application display when the screen display of the mobile terminal includes the application display of the target application; and, when that application page is a target function page, capturing the application display of the target application by calling the screen capture interface through the target application according to the page parameters of the application page, to obtain video data.
A page parameter is a parameter related to the application page, such as the page identifier of the application page, the page category it belongs to, or the position coordinates of its display area. Specifically, when the screen display of the mobile terminal includes the application display of the target application, that is, an application page of the target application is shown on the display screen, the mobile terminal may determine the application page corresponding to the currently displayed application display. The mobile terminal can then determine whether that page is a sharable target function page; if it is, the page parameters of the application page are passed when the screen capture interface is called, so that the application display corresponding to that page is captured.
In one embodiment, the mobile terminal may maintain a set of sharable pages, which may be a set of one or more designated application pages, or the set of application pages of one or more categories corresponding to target functions. An application page corresponding to a target function is a page that implements that function: for example, the application pages of a micro-document sub-module may provide functions for editing and/or viewing micro-documents, the application pages of a session sub-module may provide session interaction functions, and the application pages of a schedule sub-module may provide schedule management functions.
In one embodiment, the page data of application pages corresponding to different target functions may include a page category, which distinguishes the application pages of different target functions. When the mobile terminal determines the application page currently shown in the foreground, it may determine the page category of that page. If the page category is one to which a preset sharable application page belongs, the mobile terminal can call the corresponding screen capture interface through the target application, using the display coordinates of the application page, to capture the application display corresponding to that page; otherwise, the mobile terminal does not capture the currently displayed application display.
In the above embodiment, the application display is captured only when the screen display of the mobile terminal includes an application display corresponding to a target function page. Thus only displays that are allowed to be shared can be shared during the video session, which prevents private information outside the target function pages from being shared by mistake and greatly improves the security of the video session.
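A minimal Kotlin sketch of gating capture on a whitelist of sharable page categories follows; the PageCategory values and the AppPage type are illustrative assumptions, not structures defined by the patent.

```kotlin
// Minimal sketch (assumption): capture only when the foreground page's category is
// on the sharable whitelist.
import android.graphics.Bitmap

enum class PageCategory { MICRO_DOCUMENT, SESSION, SCHEDULE, OTHER }

data class AppPage(val pageId: String, val category: PageCategory)

val sharableCategories = setOf(PageCategory.MICRO_DOCUMENT, PageCategory.SESSION, PageCategory.SCHEDULE)

fun captureIfSharable(currentPage: AppPage?, capture: (AppPage) -> Bitmap?): Bitmap? {
    // Capture only when a page of the target application is in the foreground
    // and its category is on the sharable whitelist.
    if (currentPage == null || currentPage.category !in sharableCategories) return null
    return capture(currentPage)
}
```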
In one embodiment, step S204, that is, capturing the application display of the target application to obtain video data when the screen display of the mobile terminal includes the application display of the target application, includes: capturing the application display of the target application when the screen display of the mobile terminal includes it; adding marking information to each captured application display to generate a corresponding video frame; and determining the video data corresponding to the target application from the video frames generated from the application displays.
Specifically, when the screen display of the mobile terminal includes the application display of the target application, the mobile terminal may capture the application display and add marking information to each captured frame to generate the corresponding video frame. The mobile terminal may then assemble the video frames carrying marking information into the video data of the target application in capture order. Marking information is information used to annotate the application display, and may be at least one of the user identifier currently logged in to the target application, the transmission frame rate of the application display, the encoding rate of the video data, the image resolution of the application display, current network information, and the like.
In one embodiment, when the video session is used in a scenario where customer service guides the user through a service operation, the mobile terminal may watermark the user identifier and some basic video parameters onto the application display so that remote customer service can locate problems more easily. The video parameters include at least one of the frame rate, the encoding rate, the image resolution of the application display, and the like.
It can be appreciated that in other application scenarios the marking information may be other user-defined information, for example text or an icon entered by the user, which is not limited in the embodiments of the present application.
Referring to FIG. 4, FIG. 4 is an example of an application display with marking information added in one embodiment. As shown in FIG. 4, after the mobile terminal captures an application display of the target application, marking information 401 may be added to it. Illustratively, the marking information 401 includes: user identifier (vid: 1688xxxxxxxx), frame rate (fps: 10), current bitrate (bitrate: 500), and image resolution (res: 1080x1920).
In the above embodiment, marking information can be added to the application display as it is captured, so the resulting video data contains the marking information. The receiving end then shows the marking information when rendering the video data, and more information can be conveyed through it during the video session.
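The following Kotlin sketch stamps marking information of the kind shown in FIG. 4 (user identifier, frame rate, bitrate, resolution) onto a captured frame before encoding; the text layout and paint settings are illustrative assumptions.

```kotlin
// Minimal sketch (assumption): draw marking information onto a captured frame.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint

data class MarkInfo(val userId: String, val fps: Int, val bitrateKbps: Int, val width: Int, val height: Int)

fun addMarkInfo(frame: Bitmap, info: MarkInfo): Bitmap {
    val marked = frame.copy(Bitmap.Config.ARGB_8888, /* mutable = */ true)
    val canvas = Canvas(marked)
    val paint = Paint().apply {
        color = Color.WHITE
        textSize = 28f
        setShadowLayer(2f, 1f, 1f, Color.BLACK)  // keep the text readable on any background
    }
    val text = "vid:${info.userId} fps:${info.fps} bitrate:${info.bitrateKbps} res:${info.width}x${info.height}"
    canvas.drawText(text, 16f, 40f, paint)       // draw near the top-left corner
    return marked
}
```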
In one embodiment, before transmitting the video data to the terminal corresponding to the target member participating in the video session, the method further comprises: encoding the video data to obtain the corresponding encoded data. Step S206, that is, transmitting the video data to the terminal corresponding to the target member participating in the video session, then specifically includes: transmitting the encoded data to the terminal corresponding to the target member participating in the video session.
In one embodiment, the mobile terminal may encode the captured video data to obtain encoded data and transmit the encoded data to the terminal corresponding to the target member participating in the video session. Encoding the video data converts the original video format into another video format, mainly through compression. The encoding standard used by the mobile terminal may be, for example, H.265 (High Efficiency Video Coding) or H.264, which is not limited in the embodiments of the present application.
In one embodiment, the mobile terminal may obtain configuration parameters issued by the server, including an encoding rate and a frame rate. The mobile terminal captures the application display of the target application at the obtained frame rate to produce the video data. When encoding the video frames, the encoder can compress the video data into a smaller binary bitstream according to the encoding standard, for example H.265, and the encoding parameters in the configuration, obtaining the encoded data. During encoding, the encoding resolution matches the resolution of the captured display frames.
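As an illustration, a hardware encoder on Android could be configured with the bitrate and frame rate from the server's configuration parameters roughly as follows; H.265 ("video/hevc") is used here, and the keyframe interval is an assumed value not taken from the patent.

```kotlin
// Minimal sketch (assumption): configure an encoder with the server-issued bitrate
// and frame rate, encoding at the resolution of the captured frames.
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

fun createEncoder(width: Int, height: Int, bitRateBps: Int, frameRate: Int): MediaCodec {
    val mime = MediaFormat.MIMETYPE_VIDEO_HEVC              // H.265
    val format = MediaFormat.createVideoFormat(mime, width, height).apply {
        setInteger(MediaFormat.KEY_BIT_RATE, bitRateBps)    // encoding rate from config
        setInteger(MediaFormat.KEY_FRAME_RATE, frameRate)   // frame rate from config
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2)     // keyframe every 2 s (illustrative)
        setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible  // YUV420 input frames
        )
    }
    return MediaCodec.createEncoderByType(mime).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}
```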
In one embodiment, after receiving the encoded data, the second terminal corresponding to the target member participating in the video session decodes the encoded data by adopting a corresponding decoding manner to obtain uncompressed video data, so as to render and display the application display frames in the video data frame by frame.
In one embodiment, when capturing the application display of the target application, the mobile terminal may capture it in YUV format. YUV is a color encoding scheme in which "Y" represents luminance (luma), that is, the gray value, and "U" and "V" represent chrominance (chroma), describing the color and saturation of each pixel. When the mobile terminal captures the screen content corresponding to the application display of the target application, it may convert the frames to the YUV420 format, and the decoder of the second terminal then produces the corresponding uncompressed image in YUV420 format when decoding. Transmitting the application display in YUV format occupies less bandwidth and improves transmission efficiency. It can be understood that the mobile terminal may also store the captured application display in other color encodings, such as RGB (red, green, blue), which is not limited in the embodiments of the present application.
In one embodiment, the second terminal may invoke an image rendering interface of the system after receiving the video data to render the uncompressed application display onto a screen of the second terminal for display.
In the above embodiment, the transmission of the video data after encoding can reduce the network resources occupied by the video data in the transmission process, and the transmission efficiency can be greatly improved under the condition of limited network bandwidth.
In one embodiment, encoding the video data to obtain the corresponding encoded data specifically includes: encoding the video data to obtain one or more encoded packets; obtaining the redundancy rate for the current moment and performing redundancy processing on the encoded packets according to that redundancy rate to obtain the corresponding redundancy packets; and composing the encoded packets and the corresponding redundancy packets, in order, into the encoded data.
Specifically, the mobile terminal may encode each video frame of the video data to obtain one or more encoded packets, and then perform redundancy processing on those packets according to the current redundancy rate to obtain the corresponding redundancy packets. The mobile terminal may transmit the encoded packets and redundancy packets to the second terminals participating in the video session in their original order.
It will be appreciated that packet loss often occurs during data transmission. To guard against packet loss on the network, the mobile terminal may apply FEC (forward error correction) to the code stream when encoding the video data. Specifically, redundancy may be computed over the data frames of whole packets, for example by RS (Reed-Solomon) coding, to obtain redundancy packets, which are then transmitted together with the encoded packets of the video frames. The higher the redundancy rate, the stronger the ability to recover from packet loss, but the more additional bitrate is consumed. The mobile terminal may also use other coding schemes for encoding and redundancy processing, which is not limited in the embodiments of the present application.
In the above embodiment, the video data is encoded into encoded packets, redundancy processing is then applied to produce redundancy packets, and the encoded packets and redundancy packets together are transmitted as the encoded data to the terminals of the members participating in the video session. This ensures that as much valid data as possible gets through during network transmission, so that those terminals can successfully decode the encoded data and recover the video data.
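The sketch below generates one XOR parity packet per group of encoded packets as a simplified stand-in for the RS-style FEC described above; real Reed-Solomon coding can recover multiple losses per group, whereas a single XOR parity recovers at most one, and deriving the group size from the redundancy rate is an assumption for illustration.

```kotlin
// Simplified sketch (assumption): one XOR parity packet per group of encoded packets.
fun buildRedundancy(encodedPackets: List<ByteArray>, redundancyRate: Double): List<ByteArray> {
    if (encodedPackets.isEmpty() || redundancyRate <= 0.0) return emptyList()
    // e.g. redundancyRate = 0.2 -> one parity packet for every 5 encoded packets
    val groupSize = maxOf(1, (1.0 / redundancyRate).toInt())
    return encodedPackets.chunked(groupSize).map { group ->
        val parity = ByteArray(group.maxOf { it.size })
        for (packet in group) {
            for (i in packet.indices) {
                parity[i] = (parity[i].toInt() xor packet[i].toInt()).toByte()
            }
        }
        parity
    }
}
```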
In one embodiment, the video session method further includes a step of adjusting redundancy rate, the step specifically including: acquiring network transmission information and packet loss rate when transmitting coded data; the redundancy rate is adjusted according to at least one of network transmission information and packet loss rate, and the processing is performed according to the adjusted redundancy rate when the redundancy processing is performed on the subsequent encoded packet.
Specifically, when the mobile terminal transmits the coded data, the current network transmission information and the packet loss rate can be obtained in real time. The network transmission information can be information such as network speed, broadband and the like. The packet loss rate is calculated by the server or the second terminal based on the first data of the data packet actually received and the second data of the data packet actually transmitted by the mobile terminal. For example, the packet loss rate may be calculated according to a ratio of the first data to the second data. Further, the mobile terminal may adjust a current redundancy rate according to at least one of network transmission information and a packet loss rate, and process the subsequent encoded packet according to the adjusted redundancy rate when performing redundancy processing.
In one embodiment, the number of redundancy packets that are specifically added by the mobile terminal during the addition of redundancy packets is dynamically adjustable. For example, when the encoding process is started, the mobile terminal may add fewer redundant packets, and when the network fluctuation is detected later or the packet loss rate is detected later to be higher, the redundancy rate is adjusted again to increase the number of redundant packets.
In one embodiment, when the network transmission information indicates that the current network transmission is unstable, the mobile terminal may increase the redundancy rate to add more redundancy packets, so as to guarantee that the server or the second terminal can decode the video data based on the received data packets. When the network transmission information indicates that the current network transmission is stable, that is, most of the transmitted data packets can be received by the server, the mobile terminal can reduce the redundancy rate, thereby reducing the number of redundancy packets, lightening the network transmission burden, and improving transmission efficiency.
In one embodiment, when the packet loss rate is greater than or equal to a preset threshold, indicating that packet loss is currently serious, the mobile terminal can increase the redundancy rate to add more redundancy packets, provided that the network bandwidth allows, so that the server or the second terminal can decode the video data based on the received data packets. When the packet loss rate is smaller than the threshold, that is, most of the transmitted data packets can be received by the server, the mobile terminal can reduce the redundancy rate, thereby reducing the number of redundancy packets, lightening the network transmission burden, and improving transmission efficiency.
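The threshold-based adjustment described in the last two paragraphs can be sketched as follows; the threshold, step size, and bounds are illustrative assumptions, not values from this application:

def adjust_redundancy_rate(current_rate: float,
                           packet_loss_rate: float,
                           bandwidth_headroom: bool,
                           loss_threshold: float = 0.05,
                           step: float = 0.05,
                           min_rate: float = 0.0,
                           max_rate: float = 0.5) -> float:
    # Raise the redundancy rate when loss is severe (and bandwidth allows),
    # lower it when the link is stable, keeping it within [min_rate, max_rate].
    if packet_loss_rate >= loss_threshold:
        if bandwidth_headroom:
            return min(max_rate, current_rate + step)
        return current_rate  # loss is high but there is no spare bandwidth
    return max(min_rate, current_rate - step)

# Example: 8% measured loss with spare bandwidth raises the redundancy rate.
new_rate = adjust_redundancy_rate(current_rate=0.1, packet_loss_rate=0.08, bandwidth_headroom=True)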
In the above embodiment, when transmitting the encoded data, the current redundancy rate is dynamically adjusted according to the current network transmission information and the packet loss rate, so as to increase or decrease the number of redundancy packets, and ensure that the server or the second terminal can quickly receive the effective data packets, so as to realize successful decoding of the video data.
In one embodiment, composing the encoded packets and the corresponding redundant packets in order into encoded data specifically includes: corresponding data packet numbers are sequentially allocated to the coded packets and the corresponding redundant packets; and forming coded data by the coded packets and the redundant packets according to corresponding data packet numbers.
Specifically, when the mobile terminal encodes each video frame to obtain a corresponding encoded packet, a corresponding data packet number may be allocated to the encoded packet. Similarly, a corresponding data packet number is added to each redundancy packet obtained by performing redundancy processing on the encoded packets. The mobile terminal may number the data packets from small to large according to the generation time sequence of each data packet. The encoded packets and redundancy packets then form the encoded data according to their data packet numbers, which makes it convenient to identify whether any data packet is lost during transmission.
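A minimal sketch of this numbering scheme is given below; the tuple layout (number, kind, payload) is an assumption made only for illustration:

from typing import Iterable, List, Tuple

def number_in_generation_order(packets_in_order: Iterable[Tuple[str, bytes]]) -> List[Tuple[int, str, bytes]]:
    # `packets_in_order` mixes ("encoded", payload) and ("redundant", payload)
    # entries in the order they were produced; each gets the next packet number.
    return [(number, kind, payload)
            for number, (kind, payload) in enumerate(packets_in_order, start=1)]

# Two encoded packets followed by their redundancy packet are numbered 1, 2, 3.
stream = number_in_generation_order([("encoded", b"f1"), ("encoded", b"f2"), ("redundant", b"p1")])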
In one embodiment, the video session method further includes a step of retransmitting a data packet, the step specifically including: receiving a retransmission instruction, the retransmission instruction carrying the data packet number corresponding to the missing data packet; and retransmitting, in response to the retransmission instruction, the data packet corresponding to the data packet number carried in the retransmission instruction; the data packet includes at least one of an encoded packet and a redundancy packet.
In one embodiment, the server receives a data packet sent by the mobile terminal, and determines, according to a data packet number corresponding to the received data packet, a data packet number corresponding to the lost data packet. Further, the server may send a retransmission command to the mobile terminal, and the mobile terminal will retransmit the lost data packet when receiving the retransmission command.
For example, the mobile terminal transmits data packet 1, data packet 2, and data packet 3 to the server, but the server receives only data packet 1 and data packet 3. Since data packet 2, which precedes data packet 3 in sequence, has not arrived, and the server still has not received it within a preset period of time, the server may determine that the lost data packet is data packet 2. The server may then initiate a retransmission instruction to the mobile terminal, and the mobile terminal retransmits data packet 2 to the server. If the retransmitted data packet is lost again, it is not retransmitted a second time; as long as the number of lost data packets stays within a controllable range, the second terminal can still decode successfully based on the redundancy mechanism.
It will be appreciated that the retransmission mechanism may also be followed when the server sends the data packet to the second terminal. When the second terminal finds that the data packet is not received, a retransmission instruction can be sent to the server, and the server retransmits the lost data packet based on the retransmission instruction. This allows as many data packets as possible to be transmitted to the second terminal so that the second terminal can decode the video data based on the data packets efficiently.
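The receiver-side part of this retransmission mechanism can be sketched as follows: the receiver tracks the packet numbers seen so far, treats a gap that persists past a grace period as a loss, and requests each lost number once. The timeout value and the single-request policy are illustrative assumptions consistent with the example above:

import time
from typing import Dict, Set

class RetransmissionTracker:
    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.received: Set[int] = set()
        self.highest = 0
        self.missing_since: Dict[int, float] = {}

    def on_packet(self, number: int) -> None:
        self.received.add(number)
        self.missing_since.pop(number, None)
        if number > self.highest:
            # Every number between the old and new highest is now a candidate gap.
            for n in range(self.highest + 1, number):
                if n not in self.received:
                    self.missing_since.setdefault(n, time.monotonic())
            self.highest = number

    def numbers_to_request(self) -> Set[int]:
        # Packet numbers whose grace period has expired.
        now = time.monotonic()
        due = {n for n, t in self.missing_since.items() if now - t >= self.timeout_s}
        for n in due:
            self.missing_since.pop(n)  # ask for each lost packet only once
        return due

# The example above: packets 1 and 3 arrive, packet 2 never does.
tracker = RetransmissionTracker(timeout_s=0.0)
tracker.on_packet(1)
tracker.on_packet(3)
assert tracker.numbers_to_request() == {2}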
In the above embodiment, the data packet lost in the transmission process can be retransmitted by the retransmission mechanism, so that as many data packets as possible can be transmitted to the receiving end, so that the receiving end can effectively decode the data packet to obtain video data.
In one embodiment, before step S204, that is, before the application display picture of the target application is captured when the screen display picture of the mobile terminal includes the application display picture of the target application, the video session method further includes a picture detection step, which specifically includes: receiving main video data transmitted by a main sharing device participating in the video session, and playing the main video data on the display screen of the mobile terminal; and, in response to a video sharing instruction, detecting whether the current screen display picture of the mobile terminal includes the application display picture of the target application.
Specifically, when the mobile terminal participates in the video session through the target application, the mobile terminal is not the primary sharing device of the video session currently, that is, the mobile terminal may receive and play the primary video data shared by the primary sharing device. In the process of video sharing by the main sharing device, the mobile terminal can acquire a video sharing instruction, and respond to the video sharing instruction to detect whether the current screen display picture of the mobile terminal comprises the application display picture of the target application, and if so, the application display picture can be acquired and shared to other participants of the video session.
In one embodiment, when the mobile terminal collects the application display, the mobile terminal may replace the primary sharing device to share video, that is, other terminals participating in the video session may only receive the application display shared by the mobile terminal. Or the mobile terminal and the main sharing device respectively share the videos, that is, other terminals participating in the video session can receive the videos shared by the main sharing device and the mobile terminal respectively.
In one embodiment, a video sharing control is displayed in a target application, and when a trigger operation acting on the video sharing control is detected, the mobile terminal can generate a corresponding video sharing instruction. The triggering operation can be specifically a key operation, a touch operation or a voice operation.
For example, when a presenter performs video sharing through a primary sharing device, a mobile terminal may play a video shared by the primary sharing device as a participant. When the user also wants to share the video in the process of participating in the video session, the user can click on the video sharing control, and at the moment, the mobile terminal exits the video playing interface, enters an application page of the target application and displays the application page. The mobile terminal can collect the application display of the application page and share the recorded application display with other terminals participating in the video session.
In one embodiment, a presenter may operate in a host sharing device to forward sharing rights to a user account corresponding to a mobile terminal. And the mobile terminal can detect whether the current screen display picture comprises an application display picture of the target application, and if so, the application display picture is acquired and shared.
In one embodiment, the video session in which the mobile terminal participates may be, in particular, a video session of a group. When the user account corresponding to the mobile terminal joins the video session, the server or the mobile terminal can determine the authority level corresponding to the user account, and when the user account has the operation authority of video sharing, the mobile terminal can trigger to generate a video sharing instruction and collect an application display picture for sharing.
For example, a presenter may share video data collected by a presenter sharing device via a video session. When the user account corresponding to the mobile terminal is the account of the group owner or the group manager of the group, the server can determine that the user account has the operation authority of video sharing.
In one embodiment, after receiving the video data shared by the main sharing device and the at least one mobile terminal, the other terminals participating in the video session may render and display the corresponding video data through different display windows, respectively. In this way, the participants of the video session can switch between different display windows via the terminal. In one embodiment, after receiving the video data shared by the main sharing device and at least one mobile terminal, other terminals participating in the video session render and display the main video data shared by the main sharing device on a display screen, and superimpose the video data shared by each mobile terminal on the main video data for superimposed rendering and display.
In one embodiment, after receiving the video data shared by the primary sharing device and the at least one mobile terminal, the other terminals participating in the video session may render and display the primary video data shared by the primary sharing device in the first layer. And displaying the video data shared by the mobile terminal in the second layer. The first layer and the second layer are different layers, and the first layer may be below the second layer or above the second layer, which is not limited in this embodiment of the present application.
In one embodiment, when the main video data shared by the main sharing device and the video data shared by the mobile terminals are rendered and displayed in a superimposed manner, the video data in the lower layer can be displayed in a semi-transparent or fully transparent manner according to a preset transparency during rendering.
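As one way to picture the superimposed rendering described above, the following sketch blends a mobile-shared frame over the main video frame with a preset transparency. The use of plain NumPy RGB arrays, the frame sizes, the transparency value, and the placement are all illustrative assumptions:

import numpy as np

def overlay(main_frame: np.ndarray, shared_frame: np.ndarray,
            alpha: float = 0.5, top: int = 0, left: int = 0) -> np.ndarray:
    # Blend `shared_frame` over `main_frame` at (top, left) with opacity `alpha`.
    out = main_frame.astype(np.float32).copy()
    h, w, _ = shared_frame.shape
    region = out[top:top + h, left:left + w]
    region[:] = alpha * shared_frame.astype(np.float32) + (1.0 - alpha) * region
    return out.astype(np.uint8)

main = np.zeros((720, 1280, 3), dtype=np.uint8)      # main shared video frame
small = np.full((180, 320, 3), 255, dtype=np.uint8)  # frame shared by the mobile terminal
composite = overlay(main, small, alpha=0.6, top=20, left=940)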
In the above embodiment, in the case that the mobile terminal and the main sharing device of the video session are different devices, the mobile terminal may respond to the video sharing instruction to detect whether the current screen display includes the application display of the target application, and when the current screen display includes the application display, the mobile terminal may share the application display. Therefore, even if the mobile terminal is not the main sharing device of the video session, the sharing of the application display picture can be realized, and the application scene of the video session is expanded.
In a specific application scenario, the video session method further includes a step of voice guidance operation, where the step specifically includes: acquiring conversation voice received in the process of participating in a video conversation; the session voice is triggered by the terminal to which the application display picture is shared; acquiring an operation instruction triggered on the basis of conversation voice; and executing corresponding operation in the target application according to the operation instruction.
Specifically, the mobile terminal may acquire conversation voice received during participation in the video session, the conversation voice being triggered by the terminal to which the application display picture is shared. According to the content and instructions of the conversation voice, the user can perform corresponding operations in the application page of the target application, such as entering a certain application page, clicking a certain function control, or sending information based on the target application. In this way, customer service personnel can guide the user in operating the target application.
Referring to fig. 5, in one embodiment, a video session method is provided, which is described by taking its application to the second terminal 130 in fig. 1 as an example, and includes the following steps:
step S502, a video session is participated in by a target application.
Specifically, the second terminal runs the target application and can display a video session entry through the target application. When a triggering operation acting on the video session entry is detected, the second terminal can initiate, through the account currently logged in to the target application, a video session invitation to the mobile terminal where the group member or contact is located. When a connection response triggered by the group member or contact is received, a video session link is established with the mobile terminal where the group member or contact is located, and the video session is participated in based on the video session link.
It will be appreciated that the second terminal may also receive a video session invitation instruction sent by the mobile terminal, trigger a connection response based on the video session invitation instruction, and feed the connection response back to the mobile terminal, so that the mobile terminal and the second terminal can establish a video session link for conducting the video session.
Step S504, receiving video data sent by a mobile terminal corresponding to a member participating in the video session; the video data is obtained by the mobile terminal capturing an application display picture of the target application when the screen display picture of the mobile terminal includes the application display picture of the target application.
Specifically, when the current screen display picture comprises an application display picture of the target application, the mobile terminal acquires the application display picture of the target application and obtains video data according to the acquired application display picture. The mobile terminal sends the video data to the server, which forwards the video data to the second terminal. The second terminal may receive video data collected by the mobile terminal. For the content related to the video data obtained by the mobile terminal capturing the display, reference may be made to the specific content related to step S204 in the foregoing embodiment.
In one embodiment, step S504, that is, the step of receiving video data sent by the mobile terminal corresponding to a member participating in the video session, specifically includes: receiving more than one data packet sent by the mobile terminal corresponding to the member participating in the video session, the data packets including encoded packets obtained by encoding the captured video data and redundancy packets obtained by performing redundancy processing on the encoded packets; caching the received data packets; and sequentially reassembling the data packets cached within a preset time period to obtain encoded data, and decoding the encoded data to obtain the video data.
Specifically, when the mobile terminal encodes the video data and transmits the encoded video data, the second terminal receives the data packets transmitted by the mobile terminal. The data packets include encoded packets obtained by encoding the captured video data and redundancy packets obtained by performing redundancy processing on the encoded packets. For how the mobile terminal encodes the video data to obtain the encoded packets and performs redundancy processing on the encoded packets to obtain the corresponding redundancy packets, reference may be made to the related description in the foregoing embodiments.
It can be understood that, after network transmission, the data packets received by the second terminal may be lost or arrive out of order. Therefore, a buffer area can be set in the second terminal, and the second terminal can cache the received data packets in the buffer area. The second terminal can then reassemble the data packets cached within a preset time period according to their data packet numbers, restore them to the correct order, reassemble them into frames, and send the frames to a video decoder for decoding to obtain the video data.
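A minimal sketch of such a receive buffer is shown below: packets are cached as they arrive and periodically drained in packet-number order for the decoder. The drain policy and data layout are illustrative assumptions:

from typing import Dict, List, Tuple

class ReceiveBuffer:
    def __init__(self):
        self.buffer: Dict[int, bytes] = {}

    def cache(self, number: int, payload: bytes) -> None:
        self.buffer[number] = payload  # duplicates simply overwrite

    def drain_in_order(self) -> List[Tuple[int, bytes]]:
        # Return everything cached so far, restored to packet-number order.
        ordered = sorted(self.buffer.items())
        self.buffer.clear()
        return ordered

# Out-of-order arrival 3, 1, 2 comes back out as 1, 2, 3 for the decoder.
rb = ReceiveBuffer()
for num, data in [(3, b"c"), (1, b"a"), (2, b"b")]:
    rb.cache(num, data)
assert [n for n, _ in rb.drain_in_order()] == [1, 2, 3]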
In the above embodiment, the received data packets are buffered, so that the received data packets can be recombined in sequence and restored to the correct sequence, thereby realizing the decoding of the data packets.
Step S506, in the process of participating in the video session, video frames in the video data are rendered and displayed frame by frame.
Specifically, the second terminal may render and display the video frames in the received video data frame by frame during the process of participating in the video session.
In one embodiment, when capturing the application display picture of the target application, the mobile terminal may capture the application display picture in YUV format. When decoding, the second terminal can obtain the corresponding uncompressed image in YUV420 format from the decoder. Further, the second terminal may call an image rendering interface of the system to render the uncompressed application display picture to the screen of the second terminal for display.
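For reference, one uncompressed YUV420 (I420) frame can be converted to RGB before rendering roughly as follows; the BT.601 coefficients are standard, while the frame dimensions and the choice of NumPy are illustrative assumptions:

import numpy as np

def i420_to_rgb(frame: bytes, width: int, height: int) -> np.ndarray:
    y_size, c_size = width * height, (width // 2) * (height // 2)
    y = np.frombuffer(frame, np.uint8, y_size).reshape(height, width).astype(np.float32)
    u = np.frombuffer(frame, np.uint8, c_size, y_size).reshape(height // 2, width // 2)
    v = np.frombuffer(frame, np.uint8, c_size, y_size + c_size).reshape(height // 2, width // 2)
    # Upsample chroma to full resolution and center it around zero.
    u = u.repeat(2, 0).repeat(2, 1).astype(np.float32) - 128.0
    v = v.repeat(2, 0).repeat(2, 1).astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# An I420 frame occupies width * height * 3 // 2 bytes.
rgb = i420_to_rgb(bytes(1280 * 720 * 3 // 2), 1280, 720)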
It will be appreciated that the second terminal may receive and play only video data shared by one mobile terminal, or may receive and play video data shared by a plurality of devices, including at least one mobile terminal. That is, when the mobile terminal is a primary sharing device, the second terminal may play video data shared by the mobile terminal, and when the mobile terminal and the primary sharing device are different devices, the second terminal may play video data shared by the mobile terminal and the primary sharing device, respectively. For how the second terminal displays these video data, reference may be made to the relevant content in the foregoing embodiments.
According to the video session method, the target application participates in the video session, video data sent by the mobile terminal where other members participating in the video session are located are received, and video frames in the received video data are rendered and displayed frame by frame. The video data are obtained by acquiring an application display picture of a target application when the mobile terminal comprises the application display picture of the target application in a screen display picture. That is, the video data does not include information of a notification bar or other application programs, and only includes screen data in the target application, so that other information of the sharer cannot be revealed, privacy information of the user is protected, and the sharer can safely share the screen data in the process of participating in the video session.
In a specific embodiment, referring to fig. 6, fig. 6 is a schematic diagram of an application of the video session method in one embodiment. As shown in fig. 6, when detecting that the current screen display picture includes an application display picture of the target application, the transmitting end, that is, the mobile terminal, captures the screen data in the target application to obtain the corresponding application display picture, and further adds a watermark to the application display picture. The series of watermarked application display pictures constitutes the video data. The mobile terminal may perform video encoding on the video data and add redundancy packets, and then transmit the encoded packets and the redundancy packets to the receiving end, that is, the second terminal, through the network. The second terminal caches the received data packets in a receive buffer area, reassembles the data packets cached within a preset time period in order, and then sends them to the video decoder for video decoding to obtain the video data. The second terminal may render and display the video on its display screen by invoking a system interface. This realizes a technical solution for sharing a mobile phone screen in a multi-user online conference, protects user privacy, carries additional watermark information, and can meet richer video conference requirements.
The present application also provides an application scenario to which the above video session method is applied. Specifically, the video session method is applied in this application scenario as follows:
A user can converse with customer service personnel through the target application running on the mobile terminal, and during the conversation, the customer service personnel can send a video session invitation to the user through the second terminal using a customer service account. The user may click on the video session invitation to conduct a video session with the customer service personnel. While the user participates in the video session through the mobile terminal, the mobile terminal can detect whether the screen display picture includes an application display picture of the target application. When the application display picture is included, it indicates that the target application is currently running in the foreground, and the mobile terminal can call the screen capture interface, according to the application parameters corresponding to the target application, to capture the screen data in the target application and obtain the corresponding application display picture. The mobile terminal can capture application display pictures at a certain capture frequency and add mark information to the captured application display pictures. The series of application display pictures to which the mark information is added constitutes the video data. The mobile terminal can encode the video data through the video engine to obtain more than one encoded packet, and perform redundancy processing on the encoded packets to obtain corresponding redundancy packets. During the addition of redundancy packets, the number of redundancy packets can be changed dynamically according to the current network transmission information and packet loss rate. The mobile terminal can send the encoded packets and redundancy packets to the server in order, and the server then forwards them to the second terminal where the customer service personnel are located. If the server does not receive a previously sent data packet, it may request retransmission from the mobile terminal; correspondingly, if the second terminal does not receive a previously sent data packet, it may request retransmission from the server. Further, the second terminal caches the received data packets, reassembles the data packets cached within a preset time period in order, and then sends them to the video decoder for decoding to obtain the application display image. The second terminal may render and display the uncompressed application display image on its display screen by invoking a system interface. In this way, the customer service personnel can watch the application interface of the user's target application, and remote guidance of the user in operating and using the target application can be realized.
This application scenario may also be one in which a sharer demonstrates specific operations of the target application to other members participating in the video session. In this application scenario, the receiver of the video data is whichever party the sharer shares with, and is not limited to customer service personnel. The scenario is likewise suitable for video conferences, meeting users' richer video conference requirements.
The present application also provides an application scenario in which the video session may be a video conference or a live video broadcast. In this scenario, the first user corresponding to the mobile terminal is a participant of the video session rather than the presenter. That is, the mobile terminal may receive the video data shared by the main sharing device to watch the conference or live broadcast presented by the presenter. During participation, the mobile terminal can detect whether a video sharing instruction exists. The video sharing instruction may be triggered by the main sharing device transferring the video sharing authority to the mobile terminal, triggered by the server when it determines that the user account logged in on the mobile terminal has the video sharing authority, triggered by the first user operating the mobile terminal, and so on. When the mobile terminal detects the video sharing instruction, it can determine whether the currently displayed screen display picture includes an application display picture of the target application. If so, the application display picture of the target application can be captured to obtain video data, which is then shared with the terminals corresponding to the target members participating in the video session. Thus, other users can view the application display picture shared by the first user. In this process, the other session members may play the video shared by the main sharing device and the video shared by the mobile terminal simultaneously on their terminals, or select one of them to play, which is not limited in the embodiments of the present application.
It should be understood that, although the steps in the flowcharts of fig. 2 and fig. 5 are shown in order as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 5 may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, a video session apparatus 700 is provided, which may employ software modules or hardware modules, or a combination of both, as part of a computer device, the apparatus specifically comprising: a first video session module 701, an acquisition module 702, and a transmission module 703, wherein:
a first video session module 701 for participating in a video session through a target application.
The acquisition module 702 is configured to acquire an application display screen of a target application to obtain video data when the application display screen of the target application is included in the screen display screen of the mobile terminal.
And the transmission module 703 is configured to transmit the video data to a terminal corresponding to a target member participating in the video session, so as to realize sharing of an application display screen of the target application in the process of participating in the video session.
In one embodiment, the acquisition module 702 is further configured to stop the operation of acquiring the application display screen when it is detected that the application display screen of the target application is not included in the screen display screen of the mobile terminal.
In one embodiment, the first video session module 701 is further configured to obtain a video session invitation instruction through the target application; responding to the video session invitation instruction, and displaying a response option through the target application; the video session is entered when a triggering operation of a response option for triggering a connection response occurs.
In one embodiment, the acquisition module 702 is further configured to acquire an application parameter corresponding to the target application when the screen display of the mobile terminal includes an application display of the target application; and according to the application parameters, acquiring an application display picture of the target application by calling a screen acquisition interface through the target application to obtain video data.
In one embodiment, the acquisition module 702 is further configured to determine, when an application display screen of the mobile terminal includes an application display screen of the target application, an application page corresponding to the currently displayed application display screen; when the application page is a target function page, according to page parameters of the application page, the screen acquisition interface is called by the target application to acquire an application display picture of the target application, so that video data are obtained.
In one embodiment, the acquisition module 702 is further configured to acquire an application display of the target application when the application display of the target application is included in the screen display of the mobile terminal; adding marking information in the acquired application display picture to generate a corresponding video frame; and determining video data corresponding to the target application according to the video frames generated by the display pictures of the applications.
In one embodiment, the video session apparatus 700 further includes a video encoding module 704, configured to encode the video data to obtain corresponding encoded data. The transmission module 703 is further configured to transmit the encoded data to a terminal corresponding to a target member participating in the video session.
In one embodiment, the video encoding module 704 is further configured to encode the video data to obtain more than one encoded packet; obtaining redundancy rate corresponding to the current moment, and performing redundancy processing on more than one coding packet according to the obtained redundancy rate to obtain a corresponding redundancy packet; and forming the coded data by the coded packets and the corresponding redundant packets in sequence.
In one embodiment, the video encoding module 704 is further configured to obtain network transmission information and packet loss rate when transmitting encoded data; the redundancy rate is adjusted according to at least one of network transmission information and packet loss rate, and the processing is performed according to the adjusted redundancy rate when the redundancy processing is performed on the subsequent encoded packet.
In one embodiment, the video encoding module 704 is further configured to sequentially allocate corresponding packet numbers for the encoded packets and the corresponding redundant packets; and forming coded data by the coded packets and the redundant packets according to corresponding data packet numbers.
In one embodiment, the video session apparatus 700 further includes a retransmission module 705, configured to receive a retransmission instruction, the retransmission instruction carrying the data packet number corresponding to the missing data packet, and retransmit, in response to the retransmission instruction, the data packet corresponding to the data packet number carried in the retransmission instruction; the data packet includes at least one of an encoded packet and a redundancy packet.
In one embodiment, the first video session module 701 is further configured to receive main video data transmitted by a main sharing device participating in a video session, and play the main video data in a display screen of the mobile terminal; and responding to the video sharing instruction, and detecting whether an application display picture of the target application is included in the current screen display picture of the mobile terminal.
As shown in fig. 8, in one embodiment, the video session apparatus 700 further includes an acquisition module 706 and an operation execution module 707, wherein: an obtaining module 706, configured to obtain a conversation voice received during participation in a video conversation; the session voice is triggered by the terminal to which the application display picture is shared; and acquiring an operation instruction triggered based on the conversation voice. And the operation execution module 707 is configured to execute a corresponding operation in the target application according to the operation instruction.
According to the video session device, the mobile terminal is used for carrying out video session, and in the process of the video session, if the current screen display picture of the mobile terminal comprises the application display picture of the target application, the mobile terminal can directly acquire the application display picture of the target application to obtain video data, so that the video data is transmitted to the terminal where the target member participating in the video session is located. In this way, in the process of the video session, the sharer can share the screen data in the target application with the target members participating in the video session without revealing other information of the user, thereby protecting the privacy information of the user and realizing the safe sharing of the screen data in the process of participating in the video session.
In one embodiment, as shown in fig. 9, a video session apparatus 900 is provided, which may employ software modules or hardware modules, or a combination of both, as part of a computer device, and specifically includes: a second video session module 901, a receiving module 902, and a display module 903, wherein:
a second video session module 901 for participating in a video session through a target application.
A receiving module 902, configured to receive video data sent by a mobile terminal corresponding to a member participating in the video session; the video data is obtained by the mobile terminal capturing an application display picture of the target application when the screen display picture of the mobile terminal includes the application display picture of the target application.
The display module 903 is configured to render and display video frames in the video data frame by frame during the process of participating in the video session.
In one embodiment, the receiving module 902 is further configured to receive more than one data packet sent by the mobile terminal corresponding to the member participating in the video session, the data packets including encoded packets obtained by encoding the captured video data and redundancy packets obtained by performing redundancy processing on the encoded packets; cache the received data packets; and sequentially reassemble the data packets cached within a preset time period to obtain encoded data, and decode the encoded data to obtain the video data.
The video session device participates in the video session through the target application, receives video data sent by the mobile terminal where other members participating in the video session are located, and renders and displays video frames in the received video data frame by frame. The video data are obtained by acquiring an application display picture of a target application when the mobile terminal comprises the application display picture of the target application in a screen display picture. That is, the video data does not include information of a notification bar or other application programs, and only includes screen data in the target application, so that other information of the sharer cannot be revealed, privacy information of the user is protected, and the sharer can safely share the screen data in the process of participating in the video session.
For specific limitations of the video session apparatus, reference may be made to the above limitations of the video session method, and no further description is given here. The various modules in the video session apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a mobile terminal, and an internal structure diagram thereof may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a video session method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the procedures of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (30)

1. A video session method, applied to a mobile terminal, the method comprising:
participating in a video session through a target application;
when the screen display picture of the mobile terminal comprises the application display picture of the target application, determining an application page corresponding to the currently displayed application display picture, and determining the page category of the application page; the page categories are used for distinguishing application pages corresponding to different target functions;
When the page category is a preset page category to which the sharable application page belongs, page parameters of the application page are transmitted when a screen acquisition interface is called so as to acquire an application display picture corresponding to the application page, and video data is formed based on the application display picture acquired in real time;
and transmitting the video data to a terminal corresponding to a target member participating in the video session so as to realize sharing of an application display picture of the target application in the process of participating in the video session.
2. The method according to claim 1, wherein the method further comprises:
and stopping the operation of collecting the application display picture when detecting that the application display picture of the target application is not included in the screen display picture of the mobile terminal.
3. The method of claim 1, wherein the participating in the video session by the target application comprises:
acquiring a video session invitation instruction through a target application;
responsive to the video session invitation instruction, presenting, by the target application, a response option;
the video session is entered when a triggering operation of a response option for triggering a connection response occurs.
4. The method according to claim 1, wherein the method further comprises:
when an application display picture of the target application is included in a screen display picture of the mobile terminal, acquiring application parameters corresponding to the target application;
acquiring an application display picture of the target application through the target application calling screen acquisition interface according to the application parameters to obtain video data;
and transmitting the video data acquired according to the application parameters to a terminal corresponding to the target member participating in the video session.
5. The method of claim 1, wherein constructing video data based on the real-time acquired application display comprises:
adding marking information in the acquired application display picture to generate a corresponding video frame;
and determining video data corresponding to the target application according to the video frames generated by the display pictures of the applications.
6. The method of claim 1, wherein prior to transmitting the video data to the terminal corresponding to the target member participating in the video session, the method further comprises:
coding the video data to obtain corresponding coded data;
The transmitting the video data to a terminal corresponding to a target member participating in the video session includes:
and transmitting the encoded data to a terminal corresponding to a target member participating in the video session.
7. The method of claim 6, wherein the encoding the video data to obtain corresponding encoded data comprises:
coding the video data to obtain more than one coding packet;
obtaining redundancy rate corresponding to the current moment, and performing redundancy processing on the more than one coding packet according to the obtained redundancy rate to obtain a corresponding redundancy packet;
and forming the coded data by the coded packets and the corresponding redundant packets in sequence.
8. The method of claim 7, wherein the method further comprises:
acquiring network transmission information and packet loss rate when the coded data are transmitted;
and adjusting the redundancy rate according to at least one of the network transmission information and the packet loss rate, and processing according to the adjusted redundancy rate when performing redundancy processing on the subsequent coded packet.
9. The method of claim 7, wherein said sequentially constructing the encoded packets and the corresponding redundant packets into encoded data comprises:
Sequentially distributing corresponding data packet numbers for the coded packets and the corresponding redundant packets;
and forming coded data by the coded packets and the redundant packets according to corresponding data packet numbers.
10. The method according to claim 9, wherein the method further comprises:
receiving a retransmission instruction; the retransmission instruction carries a data packet number corresponding to the missing data packet;
retransmitting a data packet corresponding to the data packet number carried in the retransmission instruction in response to the retransmission instruction; the data packet includes at least one of the encoded packet and the redundant packet.
11. The method according to claim 1, wherein when the application display of the target application is included in the screen display of the mobile terminal, before capturing the application display of the target application to obtain video data, the method further comprises:
receiving main video data transmitted by a main sharing device participating in the video session, and playing the main video data in a display screen of the mobile terminal;
and responding to the video sharing instruction, and detecting whether an application display picture of the target application is included in the current screen display picture of the mobile terminal.
12. The method according to any one of claims 1 to 11, further comprising:
acquiring conversation voice received in the process of participating in the video conversation; the session voice is triggered by the terminal to which the application display picture is shared;
acquiring an operation instruction triggered based on the conversation voice;
and executing corresponding operation in the target application according to the operation instruction.
13. A video session method, the method comprising:
participating in a video session through a target application;
receiving video data sent by a mobile terminal corresponding to a member participating in the video session; when the video data comprises an application display picture of the target application in a screen display picture through the mobile terminal, when the page type of an application page corresponding to the displayed application display picture is a preset page type of a sharable application page, page parameters of the application page are transmitted when a screen acquisition interface is called, so that the application display picture corresponding to the application page is acquired; the page categories are used for distinguishing application pages corresponding to different target functions;
And in the process of participating in the video session, rendering and displaying video frames in the video data frame by frame.
14. The method according to claim 13, wherein receiving video data transmitted by a mobile terminal corresponding to a member participating in the video session comprises:
receiving more than one data packet sent by a mobile terminal corresponding to a member participating in the video session; the data packet comprises a coding packet obtained by coding the acquired video data and a redundant packet obtained by carrying out redundant processing on the coding packet;
caching the received data packet;
and sequentially recombining the data packets cached in the preset time period to obtain encoded data, and decoding the encoded data to obtain video data.
15. A video session apparatus for use with a mobile terminal, the apparatus comprising:
the first video session module is used for participating in the video session through the target application;
the acquisition module is used for determining an application page corresponding to the currently displayed application display picture and determining the page type of the application page when the screen display picture of the mobile terminal comprises the application display picture of the target application; the page categories are used for distinguishing application pages corresponding to different target functions; when the page category is a preset page category to which the sharable application page belongs, page parameters of the application page are transmitted when a screen acquisition interface is called so as to acquire an application display picture corresponding to the application page, and video data is formed based on the application display picture acquired in real time;
And the transmission module is used for transmitting the video data to a terminal corresponding to a target member participating in the video session so as to realize sharing of an application display picture of the target application in the process of participating in the video session.
16. The apparatus of claim 15, wherein the acquisition module is further configured to stop an operation of acquiring an application display when it is detected that the application display of the target application is not included in the screen display of the mobile terminal.
17. The apparatus of claim 15, wherein the first video session module is further configured to obtain a video session invitation instruction via a target application; responsive to the video session invitation instruction, presenting, by the target application, a response option; the video session is entered when a triggering operation of a response option for triggering a connection response occurs.
18. The apparatus of claim 15, wherein the acquisition module is further configured to acquire an application parameter corresponding to the target application when an application display screen of the target application is included in a screen display screen of the mobile terminal; acquiring an application display picture of the target application through the target application calling screen acquisition interface according to the application parameters to obtain video data;
The transmission module is further configured to transmit the video data collected according to the application parameters to a terminal corresponding to a target member participating in the video session.
19. The apparatus of claim 15, wherein the acquisition module is further configured to add marker information to the acquired application display to generate a corresponding video frame; and determining video data corresponding to the target application according to the video frames generated by the display pictures of the applications.
20. The apparatus of claim 15, further comprising a video encoding module configured to encode the video data to obtain corresponding encoded data;
the transmission module is further configured to transmit the encoded data to a terminal corresponding to a target member participating in the video session.
21. The apparatus of claim 20, wherein the video encoding module is further configured to encode the video data to obtain more than one encoded packet; obtaining redundancy rate corresponding to the current moment, and performing redundancy processing on the more than one coding packet according to the obtained redundancy rate to obtain a corresponding redundancy packet; and forming the coded data by the coded packets and the corresponding redundant packets in sequence.
22. The apparatus of claim 21, wherein the video encoding module is further configured to obtain network transmission information and a packet loss rate when transmitting the encoded data; and adjusting the redundancy rate according to at least one of the network transmission information and the packet loss rate, and processing according to the adjusted redundancy rate when performing redundancy processing on the subsequent coded packet.
23. The apparatus of claim 21, wherein the video encoding module is further configured to sequentially assign corresponding packet numbers to the encoded packets and the corresponding redundant packets; and forming coded data by the coded packets and the redundant packets according to corresponding data packet numbers.
24. The apparatus of claim 23, wherein the apparatus further comprises:
the retransmission module is used for receiving the retransmission instruction; the retransmission instruction carries a data packet number corresponding to the missing data packet; retransmitting a data packet corresponding to the data packet number carried in the retransmission instruction in response to the retransmission instruction; the data includes at least one of the encoded packet and the redundant packet.
25. The apparatus of claim 15, wherein the first video session module is further configured to receive main video data transmitted by a main sharing device participating in the video session and play the main video data in a display screen of the mobile terminal; and responding to the video sharing instruction, and detecting whether an application display picture of the target application is included in the current screen display picture of the mobile terminal.
26. The apparatus according to any one of claims 15 to 25, further comprising:
the acquisition module is used for acquiring conversation voice received in the process of participating in the video conversation; the session voice is triggered by the terminal to which the application display picture is shared; acquiring an operation instruction triggered based on the conversation voice;
and the operation execution module is used for executing corresponding operation in the target application according to the operation instruction.
27. A video session apparatus, the apparatus comprising:
the second video session module is used for participating in the video session through the target application;
the receiving module is used for receiving video data sent by the mobile terminal corresponding to the member participating in the video session; when the video data comprises an application display picture of the target application in a screen display picture through the mobile terminal, when the page type of an application page corresponding to the displayed application display picture is a preset page type of a sharable application page, page parameters of the application page are transmitted when a screen acquisition interface is called, so that the application display picture corresponding to the application page is acquired; the page categories are used for distinguishing application pages corresponding to different target functions;
And the display module is used for rendering and displaying the video frames in the video data frame by frame in the process of participating in the video session.
28. The apparatus of claim 27, wherein the receiving module is further configured to receive more than one data packet sent by the mobile terminal corresponding to a member participating in the video session; the data packet comprises a coding packet obtained by coding the acquired video data and a redundant packet obtained by carrying out redundant processing on the coding packet; caching the received data packet; and sequentially recombining the data packets cached in the preset time period to obtain encoded data, and decoding the encoded data to obtain video data.
29. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 14.
30. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 14.
CN202010435958.9A 2020-05-21 2020-05-21 Video session method Active CN113709577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010435958.9A CN113709577B (en) 2020-05-21 2020-05-21 Video session method

Publications (2)

Publication Number Publication Date
CN113709577A CN113709577A (en) 2021-11-26
CN113709577B true CN113709577B (en) 2023-05-23

Family

ID=78645815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010435958.9A Active CN113709577B (en) 2020-05-21 2020-05-21 Video session method

Country Status (1)

Country Link
CN (1) CN113709577B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114185630B (en) * 2021-11-29 2024-04-23 招联消费金融股份有限公司 Screen recording method, device, computer equipment and storage medium
CN114760291B (en) * 2022-06-14 2022-09-13 深圳乐播科技有限公司 File processing method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101484886A (en) * 2005-12-29 2009-07-15 网讯公司 Methods and apparatuses for dynamically sharing a portion of a display for application based screen sampling
CN103475850A (en) * 2013-08-14 2013-12-25 深圳市华视瑞通信息技术有限公司 Window shield identification method for sharing application program
CN103595715A (en) * 2013-11-08 2014-02-19 腾讯科技(成都)有限公司 Information sharing method and device for desktop live broadcasting
US9549152B1 (en) * 2014-06-09 2017-01-17 Google Inc. Application content delivery to multiple computing environments using existing video conferencing solutions
CN108874258A (en) * 2017-05-11 2018-11-23 腾讯科技(深圳)有限公司 Share the method and device of record screen video
CN108933965A (en) * 2017-05-26 2018-12-04 腾讯科技(深圳)有限公司 screen content sharing method, device and storage medium
CN109862301A (en) * 2019-02-25 2019-06-07 北京云中融信网络科技有限公司 Screen video sharing method, device and electronic equipment

Also Published As

Publication number Publication date
CN113709577A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
US10419618B2 (en) Information processing apparatus having whiteboard and video conferencing functions
US9153207B2 (en) Utilizing scrolling detection for screen content encoding
US9729825B2 (en) Method for generating an immersive video of a plurality of persons
US8860776B2 (en) Conference terminal, conference server, conference system and data processing method
WO2020238441A1 (en) Multi-terminal screen projection method, computer device and storage medium
RU2637469C2 (en) Method, device and system of implementing video conferencing calls based on unified communication
US20200145356A1 (en) Apparatus and method for managing sharing of content?
CN113709577B (en) Video session method
CN103597468A (en) Systems and methods for improved interactive content sharing in video communication systems
CN111880695B (en) Screen sharing method, device, equipment and storage medium
US20140028778A1 (en) Systems and methods for ad-hoc integration of tablets and phones in video communication systems
WO2014101428A1 (en) Image control method and terminal, video conference apparatus
US10164784B2 (en) Communication terminal, communication system, and data transmission method
US8917309B1 (en) Key frame distribution in video conferencing
US8255461B1 (en) Efficient transmission of changing images using image caching
US20150346838A1 (en) Performing multiple functions by a mobile device during a video conference
EP3151481B1 (en) Communication terminal, communication system, and output method
CN113573004A (en) Video conference processing method and device, computer equipment and storage medium
CN112153412B (en) Control method and device for switching video images, computer equipment and storage medium
TW201528821A (en) System and method of controlling video conference based on IP
US20080181303A1 (en) System and method for video compression
CN107846399B (en) Method for distributing and receiving multimedia content and system for processing multimedia content
KR102445944B1 (en) Method for involving user in video conference using qr code and method for participating in video conference using qr code
US20240089410A1 (en) Method of allowing user to participate in video conference using qr code and method of participating, by user, in video conference using qr code
KR20120050258A (en) Video conference system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant