CN110708492A - Video conference content interaction method and system


Info

Publication number
CN110708492A
Authority
CN
China
Prior art keywords
video stream
data
auxiliary video
content
mark
Prior art date
2019-09-12
Legal status
Pending
Application number
CN201910864403.3A
Other languages
Chinese (zh)
Inventor
刘宇剑
薛建清
Current Assignee
Fujian Star Network Intelligent Software Co Ltd
Original Assignee
Fujian Star Network Intelligent Software Co Ltd
Priority date
2019-09-12
Filing date
2019-09-12
Publication date
2020-01-17
Application filed by Fujian Star Network Intelligent Software Co Ltd
Priority to CN201910864403.3A
Publication of CN110708492A


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/15: Conference systems

Abstract

The invention provides a video conference content interaction method in the technical field of video conferences, which comprises the following steps: step S10, each participant terminal constructs a data interaction channel through a dual-stream protocol and transmits a main video stream and a first auxiliary video stream through the data interaction channel; step S20, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data, encapsulates the marking data and sends it to the content-sharing participant terminal; step S30, the content-sharing participant terminal receives and decapsulates the marking data, adjusts it, superimposes the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and shares the second auxiliary video stream with each participant terminal. The invention also provides a video conference content interaction system. The advantages of the invention are that the system resource consumption of video conference content interaction is reduced, system compatibility is improved, and the marks fit the selected range more closely.

Description

Video conference content interaction method and system
Technical Field
The invention relates to the technical field of video conferences, in particular to a video conference content interaction method and system.
Background
A video conference is a multimedia communication technology that enables people in different places to communicate in a real-time, visual and interactive way over a transmission medium. Static and dynamic images, voice, text, pictures and other information are distributed over existing communication media to each user's terminal device, so that users scattered across different locations can exchange information through graphics, sound and other modes. This improves every party's understanding of the content, as if the meeting were held in a single venue.
With the development of video technology, user demands have become richer: a video conference system is expected not only to transmit video but also to support marking the video, improving the practicality of video conference content interaction. However, conventional video conference content interaction methods have the following disadvantages:
1. Transmitting the marked content requires additional negotiation and channel establishment, consuming excessive system resources. 2. The video and the marks are transmitted separately and superimposed for display only after the terminal device receives them, so every terminal must be able to parse the marks, which limits the applicable scenarios. 3. The marks are not adjusted or optimized, so when the terminal superimposes them on the video they may not fit the selected range well.
Therefore, providing a video conference content interaction method and system that reduce system resource consumption, improve compatibility and make the marks fit the selected range more closely has become an urgent problem to be solved.
Disclosure of Invention
One of the technical problems to be solved by the present invention is to provide a video conference content interaction method that reduces the system resource consumption of video conference content interaction, improves compatibility, and makes the marks fit the selected range more closely.
The invention solves this technical problem as follows: a video conference content interaction method comprises the following steps:
step S10, each participant terminal constructs a data interaction channel through a dual-stream protocol and transmits a main video stream and a first auxiliary video stream through the data interaction channel;
step S20, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data, encapsulates the marking data and sends it to the content-sharing participant terminal;
step S30, the content-sharing participant terminal receives and decapsulates the marking data, adjusts it, superimposes the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and shares the second auxiliary video stream with each participant terminal.
Further, step S10 is specifically:
each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel.
Further, step S20 specifically comprises:
step S21, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data and encapsulates the marking data based on an encapsulation protocol;
step S22, judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and proceeding to step S30; if not, proceeding to step S23;
step S23, saving the encapsulated marking data locally and returning to step S22 after a set time period.
Further, step S30 specifically comprises:
step S31, after the content-sharing participant terminal receives the encapsulated marking data, decapsulating it to obtain the marking data;
step S32, the content-sharing participant terminal obtains the mark range from the marking data, compares the mark range with the content of the auxiliary video stream, adaptively adjusts the position and size of the mark, and generates adjusted marking data;
step S33, superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
step S34, sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
Further, the marking data is specifically the color and coordinates of each pixel generated by the marking.
The second technical problem to be solved by the present invention is to provide a video conference content interaction system that reduces the system resource consumption of video conference content interaction, improves compatibility, and makes the marks fit the selected range more closely.
The invention solves the second technical problem as follows: a video conference content interaction system comprises the following modules:
a video transmission module, used for each participant terminal to construct a data interaction channel through a dual-stream protocol and to transmit a main video stream and a first auxiliary video stream through the data interaction channel;
a video marking module, used for each participant terminal to mark the first auxiliary video stream during the conference to generate marking data, and to encapsulate the marking data and send it to the content-sharing participant terminal;
a video synthesis module, used for the content-sharing participant terminal to receive and decapsulate the marking data, adjust it, superimpose the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and share the second auxiliary video stream with each participant terminal.
Further, the video transmission module is specifically configured so that:
each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel.
Further, the video marking module specifically comprises:
a marking data generation unit, used for each participant terminal to mark the first auxiliary video stream during the conference to generate marking data and to encapsulate the marking data based on an encapsulation protocol;
a marking data sending unit, used for judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and entering the video synthesis module; if not, entering the marking data retransmission unit;
a marking data retransmission unit, used for saving the encapsulated marking data locally and entering the marking data sending unit after a set time period.
Further, the video synthesis module specifically comprises:
a marking data decapsulation unit, used for decapsulating the encapsulated marking data to obtain the marking data after the content-sharing participant terminal receives it;
a marking data adjustment unit, used for the content-sharing participant terminal to obtain the mark range from the marking data, compare the mark range with the content of the auxiliary video stream, adaptively adjust the position and size of the mark, and generate adjusted marking data;
a marking data synthesis unit, used for superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
a video sharing unit, used for sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
Further, the marking data is specifically the color and coordinates of each pixel generated by the marking.
The invention has the following advantages:
1. The data interaction channel is constructed through the dual-stream protocol. Because the auxiliary video stream channel in the data interaction channel is idle once the auxiliary video stream has been transmitted, the marking data is transmitted over this idle channel, avoiding additional negotiation and channel establishment. This greatly reduces the system resources consumed by video conference content interaction, reduces signaling and port occupation, and avoids the problem of connections failing to be established due to port restrictions in some environments.
2. The marking data is sent to the content-sharing participant terminal, which superimposes it on the first auxiliary video stream to synthesize a second auxiliary video stream and shares that stream with every participant terminal. Each participant terminal only needs to decode the second auxiliary video stream to view it and needs no capability to parse marking data, which greatly improves system compatibility and allows adaptation to more scenarios.
3. By adjusting the marking data before synthesizing the second auxiliary video stream, the marks fit the selected range more closely, greatly improving the user experience.
Drawings
The invention will be further described with reference to the following embodiments and the accompanying drawings.
Fig. 1 is a flowchart of a video conference content interaction method according to the present invention.
Fig. 2 is a schematic diagram of the content interaction of the participating terminals of the present invention.
Detailed Description
Referring to figs. 1 and 2, a preferred embodiment of the video conference content interaction method of the present invention comprises the following steps:
step S10, each participant terminal constructs a data interaction channel through a dual-stream protocol and transmits a main video stream and a first auxiliary video stream through the data interaction channel. Because the auxiliary video stream channel in the data interaction channel is idle once the auxiliary video stream has been transmitted, the marking data is transmitted over this idle channel, avoiding additional negotiation and channel establishment; this greatly reduces the system resources consumed by video conference content interaction, reduces signaling and port occupation, and avoids the problem of connections failing to be established due to port restrictions in some environments.
step S20, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data, encapsulates the marking data and sends it to the content-sharing participant terminal;
step S30, the content-sharing participant terminal receives and decapsulates the marking data, adjusts it, superimposes the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and shares the second auxiliary video stream with each participant terminal. Each participant terminal only needs to decode the second auxiliary video stream to view it and needs no capability to parse marking data, which greatly improves system compatibility and allows adaptation to more scenarios. By adjusting the marking data before synthesizing the second auxiliary video stream, the marks fit the selected range more closely, greatly improving the user experience.
Step S10 is specifically as follows:
Each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel. The dual-stream protocol is the H.239 protocol: after the media connection is established in a call, two media streams are transmitted between the two H.239 terminals, sharing the call bandwidth. The auxiliary video stream channel is bidirectional, supporting both receiving and sending data, and it enters an idle state once transmission of the auxiliary video stream is completed.
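As an illustration of this channel arrangement, the following minimal Python sketch models a dual-stream terminal with one main and one auxiliary channel. It is a simplified stand-in, not H.239 itself: the StreamChannel class, its idle flag and the transmit stub are assumptions made for the example.

```python
import threading


def transmit(channel_name: str, payload: bytes) -> None:
    """Stub for the real media transport (illustrative only)."""
    print(f"{channel_name}: sent {len(payload)} bytes")


class StreamChannel:
    """Simplified bidirectional media channel in the dual-stream arrangement."""

    def __init__(self, name: str):
        self.name = name
        self._idle = True
        self._lock = threading.Lock()

    def is_idle(self) -> bool:
        with self._lock:
            return self._idle

    def send(self, payload: bytes) -> None:
        with self._lock:
            self._idle = False
        try:
            transmit(self.name, payload)
        finally:
            # The channel returns to idle once transmission completes,
            # which is what frees it to carry marking data.
            with self._lock:
                self._idle = True


# One main channel for live camera video, one auxiliary channel that carries
# the shared content and, while idle, the marking data.
main_channel = StreamChannel("main-video")
aux_channel = StreamChannel("aux-video")
```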
Step S20 specifically comprises:
step S21, a time period is set; during the conference, each participant terminal marks the first auxiliary video stream to generate marking data and encapsulates the marking data based on an encapsulation protocol, the encapsulation protocol being PPP/HDLC, LAPS or GFP;
step S22, judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and proceeding to step S30; if not, proceeding to step S23;
step S23, saving the encapsulated marking data locally and returning to step S22 after the set time period, where the time period can be adjusted according to actual conditions.
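Steps S21 to S23 amount to frame-then-send-when-idle logic. A minimal sketch follows, assuming a channel object with is_idle() and send() like the one sketched above; the flag-byte framing is a toy stand-in for the PPP/HDLC, LAPS or GFP encapsulation named in the text, not an implementation of any of them.

```python
import time

RETRY_INTERVAL_S = 1.0  # the "set time period"; 1 s in the second embodiment


def encapsulate(marking_data: bytes) -> bytes:
    """Toy flag-byte framing standing in for PPP/HDLC, LAPS or GFP."""
    return b"\x7e" + marking_data + b"\x7e"


def decapsulate(frame: bytes) -> bytes:
    """Inverse framing, used by the receiver in step S31."""
    return frame[1:-1]


def send_marking_data(aux_channel, marking_data: bytes) -> None:
    frame = encapsulate(marking_data)     # step S21
    while not aux_channel.is_idle():      # step S22: is the channel busy?
        # Step S23: keep the frame locally and retry after the set period.
        time.sleep(RETRY_INTERVAL_S)
    aux_channel.send(frame)               # step S22: channel idle, send
```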
Step S30 specifically comprises:
step S31, after the content-sharing participant terminal receives the encapsulated marking data, decapsulating it to obtain the marking data;
step S32, the content-sharing participant terminal obtains the mark range from the marking data, compares the mark range with the content of the auxiliary video stream, adaptively adjusts the position and size of the mark, and generates adjusted marking data; for example, if a small part of the selected content is identified as lying outside the mark range, the mark range is expanded so that the frame encloses the content that was partly outside;
step S33, superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
step S34, sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
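The receiving side of steps S31 to S34 can be sketched on a toy frame model (a 2D grid of pixel values). The content detector below deliberately just boxes all non-background pixels; the names content_box, BACKGROUND and adjust_and_composite are illustrative assumptions, not the patent's algorithm.

```python
from dataclasses import dataclass
from typing import List

BACKGROUND = 0  # toy pixel value for empty slide background


@dataclass
class Rect:
    left: int
    top: int
    right: int
    bottom: int

    def union(self, other: "Rect") -> "Rect":
        return Rect(min(self.left, other.left), min(self.top, other.top),
                    max(self.right, other.right), max(self.bottom, other.bottom))


def content_box(frame: List[List[int]], fallback: Rect) -> Rect:
    """Crude stand-in for content recognition: bound all non-background pixels."""
    coords = [(x, y) for y, row in enumerate(frame)
              for x, px in enumerate(row) if px != BACKGROUND]
    if not coords:
        return fallback
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return Rect(min(xs), min(ys), max(xs), max(ys))


def adjust_and_composite(frame: List[List[int]], mark: Rect,
                         ink: int = 9) -> List[List[int]]:
    # Step S32: expand the mark range so content partly outside it is enclosed.
    adjusted = mark.union(content_box(frame, mark))
    # Step S33: burn the adjusted frame mark into the pixels themselves, so the
    # resulting second auxiliary video needs only an ordinary decoder to view.
    for x in range(adjusted.left, adjusted.right + 1):
        frame[adjusted.top][x] = ink
        frame[adjusted.bottom][x] = ink
    for y in range(adjusted.top, adjusted.bottom + 1):
        frame[y][adjusted.left] = ink
        frame[y][adjusted.right] = ink
    return frame


# Toy usage: a 6x8 "slide" with a content block and a too-small mark.
slide = [[0] * 8 for _ in range(6)]
for yy in range(1, 4):
    for xx in range(2, 6):
        slide[yy][xx] = 1                      # the content block
adjust_and_composite(slide, Rect(2, 1, 4, 2))  # mark misses part of the block
```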
The marking data is specifically the color and coordinates of each pixel generated by the marking.
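One plausible wire format for such marking data is a sequence of fixed-size (x, y, r, g, b) records packed before encapsulation. The record layout below is an assumption for illustration; the text only states that the data consists of the color and coordinates of the marked pixels.

```python
import struct
from dataclasses import dataclass
from typing import List, Tuple

RECORD = ">HHBBB"  # x, y as 16-bit ints; r, g, b as bytes (assumed layout)


@dataclass
class MarkPixel:
    x: int
    y: int
    color: Tuple[int, int, int]  # RGB


def pack_marks(pixels: List[MarkPixel]) -> bytes:
    """Serialize the marked pixels into the payload handed to encapsulation."""
    return b"".join(struct.pack(RECORD, p.x, p.y, *p.color) for p in pixels)


def unpack_marks(payload: bytes) -> List[MarkPixel]:
    """Recover the marked pixels on the content-sharing terminal (step S31)."""
    return [MarkPixel(x, y, (r, g, b))
            for x, y, r, g, b in struct.iter_unpack(RECORD, payload)]


# Round-trip check of the assumed layout.
payload = pack_marks([MarkPixel(120, 45, (255, 0, 0))])
assert unpack_marks(payload)[0].color == (255, 0, 0)
```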
A second preferred embodiment of the video conference content interaction method of the present invention comprises the following steps:
step S10, each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits real-time video captured by a camera through the main video stream channel, and transmits a shared PPT video through the auxiliary video stream channel;
step S20, during the conference, each participant terminal marks the shared PPT video to generate marking data, encapsulates the marking data and sends it to the content-sharing participant terminal; the marking data is specifically the color and coordinates of each pixel generated by the marking;
step S30, the content-sharing participant terminal receives and decapsulates the marking data, adjusts it, superimposes the adjusted marking data on the shared PPT video to synthesize a new shared PPT video, and shares the new video with each participant terminal. The marking data superimposed on the shared PPT video also includes marking data generated by the content-sharing participant terminal itself.
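Because the composited marks also include the sharer's own, the compositor can be pictured as merging per-terminal mark lists before burning them into the PPT frame. A short sketch under the same toy frame model as above, with illustrative names:

```python
from typing import Dict, List, Tuple

# (x, y, pixel_value) triples, per the marking-data definition above.
Mark = Tuple[int, int, int]


def merge_marks(received: Dict[str, List[Mark]],
                own_marks: List[Mark]) -> List[Mark]:
    """Collect every participant's marks plus the sharer's own (step S30)."""
    merged: List[Mark] = [m for marks in received.values() for m in marks]
    merged.extend(own_marks)
    return merged


def overlay(frame: List[List[int]], marks: List[Mark]) -> List[List[int]]:
    """Burn the merged marks into the PPT frame to form the new shared video."""
    for x, y, value in marks:
        frame[y][x] = value
    return frame


# Toy usage: one remote terminal's mark plus the sharer's own mark.
marks = merge_marks({"terminal-A": [(3, 2, 9)]}, own_marks=[(4, 2, 9)])
overlay([[0] * 8 for _ in range(6)], marks)
```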
Step S20 specifically comprises:
step S21, during the conference, each participant terminal selects key content on the shared PPT video to generate marking data and encapsulates the marking data based on an encapsulation protocol;
step S22, judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and proceeding to step S30; if not, proceeding to step S23;
step S23, saving the encapsulated marking data locally and returning to step S22 at an interval of 1 s.
Step S30 specifically comprises:
step S31, after the content-sharing participant terminal receives the encapsulated marking data, decapsulating it to obtain the marking data;
step S32, the content-sharing participant terminal obtains the mark range from the marking data, compares the mark range with the content of the shared PPT video, and adaptively adjusts the marking data so that it fits the selected range more closely;
step S33, superimposing the adjusted marking data on the shared PPT video to synthesize a new shared PPT video;
step S34, sharing the new shared PPT video with each participant terminal through the auxiliary video stream channel.
A preferred embodiment of the video conference content interaction system of the present invention comprises the following modules:
a video transmission module, used for each participant terminal to construct a data interaction channel through a dual-stream protocol and to transmit a main video stream and a first auxiliary video stream through the data interaction channel. Because the auxiliary video stream channel in the data interaction channel is idle once the auxiliary video stream has been transmitted, the marking data is transmitted over this idle channel, avoiding additional negotiation and channel establishment; this greatly reduces the system resources consumed by video conference content interaction, reduces signaling and port occupation, and avoids the problem of connections failing to be established due to port restrictions in some environments;
a video marking module, used for each participant terminal to mark the first auxiliary video stream during the conference to generate marking data, and to encapsulate the marking data and send it to the content-sharing participant terminal;
a video synthesis module, used for the content-sharing participant terminal to receive and decapsulate the marking data, adjust it, superimpose the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and share the second auxiliary video stream with each participant terminal. Each participant terminal only needs to decode the second auxiliary video stream to view it and needs no capability to parse marking data, which greatly improves system compatibility and allows adaptation to more scenarios. By adjusting the marking data before synthesizing the second auxiliary video stream, the marks fit the selected range more closely, greatly improving the user experience.
The video transmission module is specifically configured as follows:
Each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel. The dual-stream protocol is the H.239 protocol: after the media connection is established in a call, two media streams are transmitted between the two H.239 terminals, sharing the call bandwidth. The auxiliary video stream channel is bidirectional, supporting both receiving and sending data, and it enters an idle state once transmission of the auxiliary video stream is completed.
The video marking module specifically comprises:
a marking data generation unit, used for setting a time period and, during the conference, for each participant terminal to mark the first auxiliary video stream to generate marking data and encapsulate the marking data based on an encapsulation protocol, the encapsulation protocol being PPP/HDLC, LAPS or GFP;
a marking data sending unit, used for judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and entering the video synthesis module; if not, entering the marking data retransmission unit;
a marking data retransmission unit, used for saving the encapsulated marking data locally and entering the marking data sending unit after the set time period, where the time period can be adjusted according to actual conditions.
The video synthesis module specifically comprises:
a marking data decapsulation unit, used for decapsulating the encapsulated marking data to obtain the marking data after the content-sharing participant terminal receives it;
a marking data adjustment unit, used for the content-sharing participant terminal to obtain the mark range from the marking data, compare the mark range with the content of the auxiliary video stream, adaptively adjust the position and size of the mark, and generate adjusted marking data; for example, if a small part of the selected content is identified as lying outside the mark range, the mark range is expanded so that the frame encloses the content that was partly outside;
a marking data synthesis unit, used for superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
a video sharing unit, used for sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
The marking data is specifically the color and coordinates of each pixel generated by the marking.
In summary, the invention has the following advantages:
1. The data interaction channel is constructed through the dual-stream protocol. Because the auxiliary video stream channel in the data interaction channel is idle once the auxiliary video stream has been transmitted, the marking data is transmitted over this idle channel, avoiding additional negotiation and channel establishment. This greatly reduces the system resources consumed by video conference content interaction, reduces signaling and port occupation, and avoids the problem of connections failing to be established due to port restrictions in some environments.
2. The marking data is sent to the content-sharing participant terminal, which superimposes it on the first auxiliary video stream to synthesize a second auxiliary video stream and shares that stream with every participant terminal. Each participant terminal only needs to decode the second auxiliary video stream to view it and needs no capability to parse marking data, which greatly improves system compatibility and allows adaptation to more scenarios.
3. By adjusting the marking data before synthesizing the second auxiliary video stream, the marks fit the selected range more closely, greatly improving the user experience.
Although specific embodiments of the invention have been described above, those skilled in the art will understand that the described embodiments are illustrative only and do not limit the scope of the invention; equivalent modifications and variations made without departing from the spirit of the invention fall within the scope defined by the appended claims.

Claims (10)

1. A video conference content interaction method, characterized in that the method comprises the following steps:
step S10, each participant terminal constructs a data interaction channel through a dual-stream protocol and transmits a main video stream and a first auxiliary video stream through the data interaction channel;
step S20, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data, encapsulates the marking data and sends it to the content-sharing participant terminal;
step S30, the content-sharing participant terminal receives and decapsulates the marking data, adjusts it, superimposes the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and shares the second auxiliary video stream with each participant terminal.
2. The video conference content interaction method according to claim 1, wherein step S10 is specifically:
each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel.
3. The video conference content interaction method according to claim 2, wherein step S20 specifically comprises:
step S21, during the conference, each participant terminal marks the first auxiliary video stream to generate marking data and encapsulates the marking data based on an encapsulation protocol;
step S22, judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and proceeding to step S30; if not, proceeding to step S23;
step S23, saving the encapsulated marking data locally and returning to step S22 after a set time period.
4. The video conference content interaction method according to claim 2, wherein step S30 specifically comprises:
step S31, after the content-sharing participant terminal receives the encapsulated marking data, decapsulating it to obtain the marking data;
step S32, the content-sharing participant terminal obtains the mark range from the marking data, compares the mark range with the content of the auxiliary video stream, adaptively adjusts the position and size of the mark, and generates adjusted marking data;
step S33, superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
step S34, sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
5. The video conference content interaction method according to claim 1, wherein the marking data is specifically the color and coordinates of each pixel generated by the marking.
6. A video conference content interaction system, characterized in that the system comprises the following modules:
a video transmission module, used for each participant terminal to construct a data interaction channel through a dual-stream protocol and to transmit a main video stream and a first auxiliary video stream through the data interaction channel;
a video marking module, used for each participant terminal to mark the first auxiliary video stream during the conference to generate marking data, and to encapsulate the marking data and send it to the content-sharing participant terminal;
a video synthesis module, used for the content-sharing participant terminal to receive and decapsulate the marking data, adjust it, superimpose the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream, and share the second auxiliary video stream with each participant terminal.
7. The video conference content interaction system according to claim 6, wherein the video transmission module is specifically configured so that:
each participant terminal constructs a main video stream channel and an auxiliary video stream channel through the dual-stream protocol, transmits the main video stream through the main video stream channel, and transmits the auxiliary video stream through the auxiliary video stream channel.
8. The video conference content interaction system according to claim 7, wherein the video marking module specifically comprises:
a marking data generation unit, used for each participant terminal to mark the first auxiliary video stream during the conference to generate marking data and to encapsulate the marking data based on an encapsulation protocol;
a marking data sending unit, used for judging whether the auxiliary video stream channel is idle; if so, sending the encapsulated marking data to the content-sharing participant terminal and entering the video synthesis module; if not, entering the marking data retransmission unit;
a marking data retransmission unit, used for saving the encapsulated marking data locally and entering the marking data sending unit after a set time period.
9. The video conference content interaction system according to claim 7, wherein the video synthesis module specifically comprises:
a marking data decapsulation unit, used for decapsulating the encapsulated marking data to obtain the marking data after the content-sharing participant terminal receives it;
a marking data adjustment unit, used for the content-sharing participant terminal to obtain the mark range from the marking data, compare the mark range with the content of the auxiliary video stream, adaptively adjust the position and size of the mark, and generate adjusted marking data;
a marking data synthesis unit, used for superimposing the adjusted marking data on the first auxiliary video stream to synthesize a second auxiliary video stream;
a video sharing unit, used for sharing the second auxiliary video stream with each participant terminal through the auxiliary video stream channel.
10. The video conference content interaction system according to claim 6, wherein the marking data is specifically the color and coordinates of each pixel generated by the marking.
CN201910864403.3A, filed 2019-09-12, published as CN110708492A (Pending): Video conference content interaction method and system

Priority Applications (1)

Application Number: CN201910864403.3A
Priority Date: 2019-09-12
Filing Date: 2019-09-12
Title: Video conference content interaction method and system


Publications (1)

Publication Number: CN110708492A
Publication Date: 2020-01-17

Family ID: 69196157

Family Applications (1)

Application Number: CN201910864403.3A
Title: Video conference content interaction method and system
Status: Pending (published as CN110708492A)

Country Status (1)

Country: CN
Publication: CN110708492A


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572794A (en) * 2008-10-20 2009-11-04 深圳华为通信技术有限公司 Conference terminal, conference server, conference system and data processing method
CN102572218A (en) * 2012-01-16 2012-07-11 唐桥科技(杭州)有限公司 Video label method based on network video meeting system
CN104038722A (en) * 2013-03-06 2014-09-10 中兴通讯股份有限公司 Content interaction method and content interaction system for video conference
US20160353057A1 (en) * 2015-06-01 2016-12-01 Apple Inc. Techniques to overcome communication lag between terminals performing video mirroring and annotation operations
CN106791574A (en) * 2016-12-16 2017-05-31 联想(北京)有限公司 Video labeling method, device and video conferencing system
CN107332845A (en) * 2017-07-03 2017-11-07 努比亚技术有限公司 Meeting projection annotation method, mobile terminal and computer-readable recording medium
CN109889759A (en) * 2019-02-02 2019-06-14 视联动力信息技术股份有限公司 A kind of exchange method and system regarding networked video meeting
CN109951673A (en) * 2019-03-11 2019-06-28 南京信奥弢电子科技有限公司 A kind of the content interactive system and method for video conference

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887655A (en) * 2021-01-25 2021-06-01 联想(北京)有限公司 Information processing method and information processing device
CN112887655B (en) * 2021-01-25 2022-05-31 联想(北京)有限公司 Information processing method and information processing device
WO2023246328A1 (en) * 2022-06-24 2023-12-28 京东方科技集团股份有限公司 Video conference marking method and system, and terminal, server and storage medium


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2020-01-17