CN109688365B - Video conference processing method and computer-readable storage medium
- Publication number: CN109688365B
- Application number: CN201811615835.2A
- Authority: CN (China)
- Legal status: Active
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N7/00—Television systems; H04N7/14—Systems for two-way working; H04N7/15—Conference systems
- H04N7/152—Multipoint control units therefor
Abstract
The invention provides a video conference processing method and a computer-readable storage medium. The video conference processing method comprises the following steps: receiving attribute information uploaded by terminal devices requesting to join a video conference; grouping all the terminal devices participating in the video conference according to a preset grouping template and the attribute information; and creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the resulting groups. With the technical scheme of the invention, a video conference can be logically grouped simply and conveniently, the operation steps required of users are simplified, and the continuity and reliability of the video conference are improved.
Description
Technical Field
The present invention relates to the field of video conference technologies, and in particular, to a video conference processing method and a computer-readable storage medium.
Background
With the rapid development of Internet Protocol (IP) networks and multimedia communication technologies, users can take part in conferences, decision making, and similar activities through a video conference system built on audio and video multimedia technology without having to travel to the meeting site, which has led to video conference systems being widely adopted across industries.
In a video conference system, a plurality of terminal devices distributed across different locations (display devices such as televisions together with dedicated video conference terminals) are called into the same conference through a Multipoint Control Unit (MCU), and the participants discuss the issues at hand by exchanging audio, video, and data, thereby achieving interaction and communication comparable to an on-site meeting.
In the related art, especially when holding a large video conference, some topics often need to be discussed in breakout groups first, followed by centralized discussion and decision making. This usually requires the conference administrator to close the ongoing conference and then reassemble the relevant terminal devices into MCU-controlled groups according to the grouping request. Such a processing scheme has at least the following technical drawbacks:
(1) the operation steps are cumbersome and tedious, causing great inconvenience and long waiting times for the users of the video conference;
(2) a large conference television system requires MCUs distributed across multiple levels, grouping is essentially a physical isolation scheme, and the quality of the video conference may suffer from the bandwidth and communication-quality limitations of the MCUs themselves;
(3) in a multi-level distributed MCU cluster, a parent-level MCU must handle not only the transmission and reception of audio, video, and data for its own group but also that of its child-level MCUs, which leaves a large amount of redundant data in the parent-level MCU and increases latency.
Disclosure of Invention
The present invention aims to solve at least one of the problems existing in the prior art or the related art.
To this end, one object of the present invention is to provide a processing method for a video conference.
Another object of the present invention is to provide a computer-readable storage medium.
A further object of the present invention is to provide a terminal device.
To achieve the above objects, an embodiment of the first aspect of the present invention provides a processing method for a video conference, including: receiving attribute information uploaded by terminal devices requesting to join a video conference; grouping all the terminal devices participating in the video conference according to a preset grouping template and the attribute information; and creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the resulting groups.
In this technical scheme, attribute information uploaded by the terminal devices requesting to join the video conference is received, and all the terminal devices participating in the video conference are grouped according to a preset grouping template and the attribute information. This is essentially a logical grouping scheme: no cascaded MCUs need to be set up for grouping control, so users neither perform grouping operations nor configure an MCU hierarchy, which greatly simplifies their operation steps. Moreover, because there is no hierarchical relationship between MCUs, the redundant-data problem of a parent-level MCU is eliminated, and during the video conference the participants can be regrouped automatically according to the attribute information without pausing the conference.
In addition, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in any one of the resulting groups. A speech synthesis process can be encapsulated as an audio mixer that mixes multiple speech streams into a single stream, so that all participants in the group can hear several speakers in the group at the same time. A video synthesis process can be encapsulated as a video compositor that combines multiple video pictures into one multi-split-screen picture, so that all participants in the group can view several video pictures within the group at the same time.
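For concreteness, the following minimal sketch (not taken from the patent) shows the two operations described above: mixing several speech streams into one and tiling several video pictures into a multi-split-screen picture. It assumes 16-bit PCM audio buffers and equally sized RGB frames held in NumPy arrays; all function names are illustrative.

```python
from __future__ import annotations

import numpy as np

def mix_audio(streams: list[np.ndarray]) -> np.ndarray:
    """Mix several equally long 16-bit PCM speech streams into a single stream."""
    mixed = np.sum([s.astype(np.int32) for s in streams], axis=0)
    return np.clip(mixed, -32768, 32767).astype(np.int16)  # saturate instead of overflowing

def composite_split_screen(frames: list[np.ndarray]) -> np.ndarray:
    """Tile equally sized H x W x 3 frames into a roughly square multi-split-screen picture."""
    cols = int(np.ceil(np.sqrt(len(frames))))
    rows = int(np.ceil(len(frames) / cols))
    h, w, c = frames[0].shape
    canvas = np.zeros((rows * h, cols * w, c), dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        r, col = divmod(i, cols)
        canvas[r * h:(r + 1) * h, col * w:(col + 1) * w] = frame
    return canvas
```

In a real conference server the mixing and compositing would of course run continuously on live streams rather than on single buffers.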
Finally, the frame rate, volume, clarity, and picture layout of the terminal devices in the same group can be set according to the attribute information, so that different terminal devices in the same group behave differently during the video conference.
Specifically, the attribute information mainly includes hardware information of the terminal device (such as its identification code, device model, and the code of the communication standard protocol it uses) and personal information of the user operating it (such as the user's name, age, native place, employee number, department, region, and title).
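Purely for illustration, the attribute information could be modelled as a small record; the field names below are hypothetical and are not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class AttributeInfo:
    # hardware information of the terminal device
    terminal_id: str      # identification code of the terminal
    device_model: str
    protocol_code: str    # code of the communication standard protocol in use
    # personal information of the user operating the terminal
    name: str
    age: int
    native_place: str
    employee_no: str
    department: str
    region: str
    title: str
```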
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: receiving grouping time nodes of the video conference uploaded by a designated terminal device, together with a grouping template corresponding to each grouping time node; and pre-storing the grouping template as a preset grouping template, where the preset grouping template is used, as the grouping time nodes arrive in sequence, to classify the pieces of attribute information that satisfy the same rule into one class and to place the terminal devices corresponding to that class into one group.
In this technical scheme, the grouping time nodes of the video conference uploaded by the designated terminal device, together with the grouping template corresponding to each grouping time node, are received and pre-stored as preset grouping templates. Grouping time nodes are set according to the planned agenda of the video conference, and whenever a grouping time node is reached, all the terminal devices participating in the conference are regrouped according to the corresponding preset grouping template. For example, after the first stage of the conference ends and the administrator, or a voice trigger, activates the first grouping time node, all the participating terminal devices are logically grouped according to the template associated with that node; the end of the free group discussion serves as the second preset grouping time node, and when the administrator or a voice trigger is detected again, all the participating terminal devices are merged back into a single group.
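As an illustration of a preset grouping template tied to grouping time nodes, the sketch below assumes that a template simply names the attribute field whose identical values are classified into one class; the data layout and names are assumptions, not the patent's own format.

```python
from __future__ import annotations

# Hypothetical preset grouping templates: when the conference reaches `at_seconds`,
# terminals whose value of `group_by` is identical are placed into one group.
GROUPING_TEMPLATES = [
    {"at_seconds": 1800, "group_by": "department"},  # e.g. breakout discussion per department
    {"at_seconds": 3600, "group_by": None},          # e.g. merge everyone back into one group
]

def apply_template(terminals: list[dict], group_by: str | None) -> dict[str, list[str]]:
    """Group terminal IDs by the chosen attribute; None puts every terminal into a single group."""
    groups: dict[str, list[str]] = {}
    for t in terminals:
        key = "all" if group_by is None else str(t[group_by])
        groups.setdefault(key, []).append(t["terminal_id"])
    return groups

# usage: `terminals` stands for the attribute information uploaded by the terminal devices
terminals = [
    {"terminal_id": "t1", "department": "R&D"},
    {"terminal_id": "t2", "department": "Sales"},
    {"terminal_id": "t3", "department": "R&D"},
]
print(apply_template(terminals, "department"))  # {'R&D': ['t1', 't3'], 'Sales': ['t2']}
```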
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: receiving grouping scene information of the video conference uploaded by a designated terminal device, together with a grouping condition corresponding to each piece of grouping scene information; and pre-storing the grouping condition as a preset grouping template, where the preset grouping template is used to place the terminal devices corresponding to the information satisfying the grouping condition into one group when video information satisfying the grouping condition is detected during the video conference.
In this technical scheme, the grouping scene information of the video conference uploaded by the designated terminal device and the grouping condition corresponding to each piece of grouping scene information are received, and the grouping condition is pre-stored as a preset grouping template. This provides a second, more flexible way of performing logical grouping: once all the terminal devices have joined the video conference, an administrator can trigger a grouping condition at any time, for example to place the participants of the same department into the same group.
In any of the above technical solutions, preferably, grouping all the terminal devices participating in the video conference according to the preset grouping template and the attribute information includes: parsing the attribute information to determine an attribute identifier of each terminal device and identification information of the user holding the video conference on that device; recording the elapsed time and the video information of the video conference in real time while the conference is running; when the elapsed time of the video conference reaches a grouping time node, placing the terminal devices corresponding to the information classified into one class into one group according to the rule associated with that grouping time node; or, when video information satisfying a grouping condition is detected during the video conference, placing the terminal devices corresponding to the information satisfying the grouping condition into one group.
In this technical scheme, the elapsed time and the video information of the video conference are recorded in real time while the conference is running. When the elapsed time is detected to have reached a grouping time node, the terminal devices corresponding to the information classified into one class are placed into one group according to the rule associated with that node; alternatively, when video information satisfying a grouping condition is detected, the terminal devices corresponding to the matching information are placed into one group. This simplifies the grouping operations of the video conference, removes the need for cascaded MCUs, and reduces both the redundant data held in the MCU and the pressure of network data exchange.
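Under the same assumptions, the runtime side could be sketched roughly as follows: the loop records elapsed time and scene information, applies a time-node template when its time is reached, and applies a condition-based regrouping when the detected information satisfies a pre-stored grouping condition. Every name here is illustrative.

```python
from __future__ import annotations

import time
from typing import Callable

def group_by_key(terminals: list[dict], key: str) -> dict[str, list[str]]:
    """Place terminals whose value of `key` matches into one group."""
    groups: dict[str, list[str]] = {}
    for t in terminals:
        groups.setdefault(str(t[key]), []).append(t["terminal_id"])
    return groups

def conference_loop(
    terminals: list[dict],
    time_node_templates: list[tuple[float, str]],   # (seconds from start, attribute key to group by)
    detect_scene: Callable[[], dict],               # hypothetical detector of current video/scene information
    check_condition: Callable[[dict], str | None],  # returns an attribute key when a grouping condition is met
    regroup: Callable[[dict[str, list[str]]], None],
) -> None:
    start = time.monotonic()
    pending = sorted(time_node_templates)
    while pending:
        elapsed = time.monotonic() - start
        if elapsed >= pending[0][0]:                # a grouping time node has been reached
            _, key = pending.pop(0)
            regroup(group_by_key(terminals, key))
        key = check_condition(detect_scene())       # condition-based trigger, e.g. from the administrator
        if key is not None:
            regroup(group_by_key(terminals, key))
        time.sleep(1.0)
```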
In any of the above technical solutions, preferably, creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any of the resulting groups specifically includes: after the grouping of the terminal devices participating in the video conference is completed, detecting whether a specified voice excitation signal, a specified video excitation signal, or a specified control instruction is received; and, after detecting that a terminal device of any group has generated such a signal or instruction, creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in that group, where the speech synthesis process encodes and synthesizes the speech resources generated by all the terminal devices in the group into a single speech signal, and the video synthesis process encodes and synthesizes the video resources generated by all the terminal devices in the group into a single video signal.
In this technical scheme, after a terminal device of any group is detected to have generated a specified voice excitation signal, a specified video excitation signal, or a specified control instruction, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in that group. In other words, speech synthesis and video synthesis are performed separately within each group, which improves the communication quality and user experience of the video conference while reducing the transmission of redundant data.
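The creation of per-group synthesis processes could be sketched as below. This is not the patent's implementation, merely an illustration using Python's multiprocessing module; the function names and signal-type strings are hypothetical.

```python
from __future__ import annotations

import multiprocessing as mp

def run_group_mixer(group_id: str, jobs: mp.Queue) -> None:
    """Placeholder worker: synthesize the media arriving for one group until a sentinel is received."""
    while True:
        item = jobs.get()
        if item is None:        # sentinel: the group has been dissolved
            break
        # ... encode and synthesize the speech/video resources in `item` into one signal ...

def on_excitation(group_id: str, signal_type: str, queues: dict[str, mp.Queue]) -> mp.Process | None:
    """Create a per-group synthesis process once a voice/video excitation or a control instruction is seen."""
    if signal_type not in {"voice_excitation", "video_excitation", "control_instruction"}:
        return None
    jobs = queues.setdefault(group_id, mp.Queue())
    proc = mp.Process(target=run_group_mixer, args=(group_id, jobs), daemon=True)
    proc.start()
    return proc
```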
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: connecting all the terminal devices participating in the video conference using a distributed technique to form a flat video conference structure, in which the terminal devices can share and forward communication messages, a communication message being any one of audio information, video information, and text information.
In this technical scheme, all the terminal devices participating in the video conference are connected using a distributed technique, and communication messages can be shared and forwarded among the terminal devices of the flat video conference structure. On the one hand, each terminal device can work as an independent processor and share the computing load of the video system (comprising the server and the terminal devices participating in the conference); on the other hand, the flat structure does away with the cascaded MCU hierarchy, so grouping becomes more flexible and more timely, improving the user experience.
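A minimal sketch of such a flat structure, assuming an in-memory registry where every terminal joins directly and any terminal can share a message with any subset of peers; the class and method names are illustrative only.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Terminal:
    terminal_id: str
    inbox: list[tuple[str, str, bytes]] = field(default_factory=list)  # (sender_id, kind, payload)

class FlatConference:
    """Every terminal registers directly with the conference; there is no MCU hierarchy."""

    def __init__(self) -> None:
        self.terminals: dict[str, Terminal] = {}

    def join(self, terminal: Terminal) -> None:
        self.terminals[terminal.terminal_id] = terminal

    def share(self, sender_id: str, kind: str, payload: bytes, to: list[str] | None = None) -> None:
        """Share an 'audio', 'video' or 'text' message; to=None broadcasts to all other terminals."""
        targets = to if to is not None else [t for t in self.terminals if t != sender_id]
        for tid in targets:
            self.terminals[tid].inbox.append((sender_id, kind, payload))

# usage
conf = FlatConference()
for tid in ("t1", "t2", "t3"):
    conf.join(Terminal(tid))
conf.share("t1", "text", b"hello everyone")      # broadcast
conf.share("t2", "audio", b"...", to=["t1"])     # share with a designated terminal only
```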
In any of the above technical solutions, preferably, the method further includes: determining the terminal devices that are not participating in the video conference as external terminal devices; and, in response to a communication message sent by an external terminal device, sharing the communication message with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device.
In this technical scheme, in response to a communication message sent by an external terminal device, the message is shared with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device, so that communication messages originating outside the video system can be brought into the video conference, further improving the user experience.
In any of the above technical solutions, preferably, the method further includes: receiving a communication message sent by a designated terminal device in the flat video conference structure; and sending the communication message to at least one corresponding external terminal device according to a control instruction from a user of the terminal device.
In this technical scheme, by receiving a communication message sent by a designated terminal device in the flat video conference structure and sending it to at least one corresponding external terminal device according to a control instruction from the user of the terminal device, the content of the video conference can be delivered in time to external terminal devices, in particular to terminal devices whose users cannot conveniently attend the conference.
In any of the above technical solutions, preferably, the method further includes: after grouping the terminal devices, performing nested grouping on the terminal devices within a group according to the attribute information.
In this technical scheme, after the terminal devices are grouped, the terminal devices within a group are further grouped in a nested manner according to the attribute information; that is, any group of terminal devices may itself contain other groups. This further enriches the modes in which a video conference can be held and reduces hardware cost compared with a conventional video conference.
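Nested grouping can be pictured as a recursive group structure; the sketch below is only an illustration and is not taken from the patent.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    terminals: list[str] = field(default_factory=list)      # terminal IDs placed directly in this group
    subgroups: list["Group"] = field(default_factory=list)  # nested groups

    def all_terminals(self) -> list[str]:
        """All terminal IDs in this group, including those of its nested subgroups."""
        ids = list(self.terminals)
        for sub in self.subgroups:
            ids.extend(sub.all_terminals())
        return ids

# e.g. a department group that itself contains two project subgroups
dept = Group("R&D", terminals=["t1"], subgroups=[
    Group("project-A", terminals=["t2", "t3"]),
    Group("project-B", terminals=["t4"]),
])
assert dept.all_terminals() == ["t1", "t2", "t3", "t4"]
```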
According to an embodiment of the second aspect of the present invention, there is provided a computer-readable storage medium on which a computer program is stored, and the computer program, when executed, implements the processing method of a video conference according to any one of the above technical solutions.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 shows a schematic flow diagram of a method of processing a video conference according to an embodiment of the invention;
fig. 2 shows a schematic block diagram of a processing device for a video conference according to an embodiment of the present invention;
FIG. 3 shows a schematic block diagram of a terminal device according to an embodiment of the invention;
FIG. 4 is an architectural diagram illustrating a prior art processing scheme for video conferencing;
FIG. 5 shows an architectural diagram of a processing scheme for a video conference according to one embodiment of the invention;
fig. 6 shows an architectural diagram of a processing scheme for a video conference according to another embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the present invention may be practiced in ways other than those specifically described herein, and the scope of the present invention is therefore not limited by the specific embodiments disclosed below.
Example one:
fig. 1 shows a schematic flow diagram of a processing method of a video conference according to an embodiment of the invention.
As shown in fig. 1, a processing method for a video conference according to an embodiment of the present invention includes: step S102, receiving attribute information uploaded by terminal devices requesting to join a video conference; step S104, grouping all the terminal devices participating in the video conference according to a preset grouping template and the attribute information; and step S106, creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the resulting groups.
In this technical scheme, attribute information uploaded by the terminal devices requesting to join the video conference is received, and all the terminal devices participating in the video conference are grouped according to a preset grouping template and the attribute information. This is essentially a logical grouping scheme: no cascaded MCUs need to be set up for grouping control, so users neither perform grouping operations nor configure an MCU hierarchy, which greatly simplifies their operation steps. Moreover, because there is no hierarchical relationship between MCUs, the redundant-data problem of a parent-level MCU is eliminated, and during the video conference the participants can be regrouped automatically according to the attribute information without pausing the conference.
In addition, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in any one of the resulting groups. A speech synthesis process can be encapsulated as an audio mixer that mixes multiple speech streams into a single stream, so that all participants in the group can hear several speakers in the group at the same time. A video synthesis process can be encapsulated as a video compositor that combines multiple video pictures into one multi-split-screen picture, so that all participants in the group can view several video pictures within the group at the same time.
Finally, the frame rate, volume, clarity, and picture layout of the terminal devices in the same group can be set according to the attribute information, so that different terminal devices in the same group behave differently during the video conference.
Specifically, the attribute information mainly includes hardware information of the terminal device (such as its identification code, device model, and the code of the communication standard protocol it uses) and personal information of the user operating it (such as the user's name, age, native place, employee number, department, region, and title).
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: receiving grouping time nodes of the video conference uploaded by a designated terminal device, together with a grouping template corresponding to each grouping time node; and pre-storing the grouping template as a preset grouping template, where the preset grouping template is used, as the grouping time nodes arrive in sequence, to classify the pieces of attribute information that satisfy the same rule into one class and to place the terminal devices corresponding to that class into one group.
In this technical scheme, the grouping time nodes of the video conference uploaded by the designated terminal device, together with the grouping template corresponding to each grouping time node, are received and pre-stored as preset grouping templates. Grouping time nodes are set according to the planned agenda of the video conference, and whenever a grouping time node is reached, all the terminal devices participating in the conference are regrouped according to the corresponding preset grouping template. For example, after the first stage of the conference ends and the administrator, or a voice trigger, activates the first grouping time node, all the participating terminal devices are logically grouped according to the template associated with that node; the end of the free group discussion serves as the second preset grouping time node, and when the administrator or a voice trigger is detected again, all the participating terminal devices are merged back into a single group.
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: receiving grouping scene information of the video conference uploaded by a designated terminal device, together with a grouping condition corresponding to each piece of grouping scene information; and pre-storing the grouping condition as a preset grouping template, where the preset grouping template is used to place the terminal devices corresponding to the information satisfying the grouping condition into one group when video information satisfying the grouping condition is detected during the video conference.
In this technical scheme, the grouping scene information of the video conference uploaded by the designated terminal device and the grouping condition corresponding to each piece of grouping scene information are received, and the grouping condition is pre-stored as a preset grouping template. This provides a second, more flexible way of performing logical grouping: once all the terminal devices have joined the video conference, an administrator can trigger a grouping condition at any time, for example to place the participants of the same department into the same group.
In any of the above technical solutions, preferably, grouping all the terminal devices participating in the video conference according to the preset grouping template and the attribute information includes: parsing the attribute information to determine an attribute identifier of each terminal device and identification information of the user holding the video conference on that device; recording the elapsed time and the video information of the video conference in real time while the conference is running; when the elapsed time of the video conference reaches a grouping time node, placing the terminal devices corresponding to the information classified into one class into one group according to the rule associated with that grouping time node; or, when video information satisfying a grouping condition is detected during the video conference, placing the terminal devices corresponding to the information satisfying the grouping condition into one group.
In this technical scheme, the elapsed time and the video information of the video conference are recorded in real time while the conference is running. When the elapsed time is detected to have reached a grouping time node, the terminal devices corresponding to the information classified into one class are placed into one group according to the rule associated with that node; alternatively, when video information satisfying a grouping condition is detected, the terminal devices corresponding to the matching information are placed into one group. This simplifies the grouping operations of the video conference, removes the need for cascaded MCUs, and reduces both the redundant data held in the MCU and the pressure of network data exchange.
In any of the above technical solutions, preferably, creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any of the resulting groups specifically includes: after the grouping of the terminal devices participating in the video conference is completed, detecting whether a specified voice excitation signal, a specified video excitation signal, or a specified control instruction is received; and, after detecting that a terminal device of any group has generated such a signal or instruction, creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in that group, where the speech synthesis process encodes and synthesizes the speech resources generated by all the terminal devices in the group into a single speech signal, and the video synthesis process encodes and synthesizes the video resources generated by all the terminal devices in the group into a single video signal.
In this technical scheme, after a terminal device of any group is detected to have generated a specified voice excitation signal, a specified video excitation signal, or a specified control instruction, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in that group. In other words, speech synthesis and video synthesis are performed separately within each group, which improves the communication quality and user experience of the video conference while reducing the transmission of redundant data.
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting to join the video conference, the method further includes: connecting all the terminal devices participating in the video conference using a distributed technique to form a flat video conference structure, in which the terminal devices can share and forward communication messages, a communication message being any one of audio information, video information, and text information.
In this technical scheme, all the terminal devices participating in the video conference are connected using a distributed technique, and communication messages can be shared and forwarded among the terminal devices of the flat video conference structure. On the one hand, each terminal device can work as an independent processor and share the computing load of the video system (comprising the server and the terminal devices participating in the conference); on the other hand, the flat structure does away with the cascaded MCU hierarchy, so grouping becomes more flexible and more timely, improving the user experience.
In any of the above technical solutions, preferably, the method further includes: determining the terminal devices that are not participating in the video conference as external terminal devices; and, in response to a communication message sent by an external terminal device, sharing the communication message with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device.
In this technical scheme, in response to a communication message sent by an external terminal device, the message is shared with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device, so that communication messages originating outside the video system can be brought into the video conference, further improving the user experience.
In any of the above technical solutions, preferably, the method further includes: receiving a communication message sent by a designated terminal device in the flat video conference structure; and sending the communication message to at least one corresponding external terminal device according to a control instruction from a user of the terminal device.
In this technical scheme, by receiving a communication message sent by a designated terminal device in the flat video conference structure and sending it to at least one corresponding external terminal device according to a control instruction from the user of the terminal device, the content of the video conference can be delivered in time to external terminal devices, in particular to terminal devices whose users cannot conveniently attend the conference.
In any of the above technical solutions, preferably, the method further includes: after grouping the terminal devices, performing nested grouping on the terminal devices within a group according to the attribute information.
In this technical scheme, after the terminal devices are grouped, the terminal devices within a group are further grouped in a nested manner according to the attribute information; that is, any group of terminal devices may itself contain other groups. This further enriches the modes in which a video conference can be held and reduces hardware cost compared with a conventional video conference.
Example two:
fig. 2 shows a schematic block diagram of a processing device for a video conference according to an embodiment of the present invention.
As shown in fig. 2, a processing apparatus 200 for a video conference according to an embodiment of the present invention includes: a receiving unit 202 configured to receive attribute information uploaded by terminal devices requesting to join a video conference; a grouping unit 204 configured to group all the terminal devices participating in the video conference according to a preset grouping template and the attribute information; and a creating unit 206 configured to create at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the resulting groups.
In this technical scheme, attribute information uploaded by the terminal devices requesting to join the video conference is received, and all the terminal devices participating in the video conference are grouped according to a preset grouping template and the attribute information. This is essentially a logical grouping scheme: no cascaded MCUs need to be set up for grouping control, so users neither perform grouping operations nor configure an MCU hierarchy, which greatly simplifies their operation steps. Moreover, because there is no hierarchical relationship between MCUs, the redundant-data problem of a parent-level MCU is eliminated, and during the video conference the participants can be regrouped automatically according to the attribute information without pausing the conference.
In addition, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in any one of the resulting groups. A speech synthesis process can be encapsulated as an audio mixer that mixes multiple speech streams into a single stream, so that all participants in the group can hear several speakers in the group at the same time. A video synthesis process can be encapsulated as a video compositor that combines multiple video pictures into one multi-split-screen picture, so that all participants in the group can view several video pictures within the group at the same time.
Finally, the frame rate, volume, clarity, and picture layout of the terminal devices in the same group can be set according to the attribute information, so that different terminal devices in the same group behave differently during the video conference.
It is particularly worth pointing out that the above logical grouping scheme may also be nested, that is, any group of terminal devices may itself contain other groups; the technical scheme of the present invention therefore further enriches the modes in which a video conference can be held and reduces hardware cost compared with a conventional video conference.
Specifically, the attribute information mainly includes hardware information of the terminal device (such as its identification code, device model, and the code of the communication standard protocol it uses) and personal information of the user operating it (such as the user's name, age, native place, employee number, department, region, and title).
In any of the above technical solutions, preferably, the receiving unit 202 is further configured to receive grouping time nodes of the video conference uploaded by a designated terminal device, together with a grouping template corresponding to each grouping time node. The processing apparatus 200 for a video conference further includes a storage unit 208 configured to pre-store the grouping template as a preset grouping template, where the preset grouping template is used, as the grouping time nodes arrive in sequence, to classify the pieces of attribute information that satisfy the same rule into one class and to place the terminal devices corresponding to that class into one group.
In this technical scheme, the grouping time nodes of the video conference uploaded by the designated terminal device, together with the grouping template corresponding to each grouping time node, are received and pre-stored as preset grouping templates. Grouping time nodes are set according to the planned agenda of the video conference, and whenever a grouping time node is reached, all the terminal devices participating in the conference are regrouped according to the corresponding preset grouping template. For example, after the first stage of the conference ends and the administrator, or a voice trigger, activates the first grouping time node, all the participating terminal devices are logically grouped according to the template associated with that node; the end of the free group discussion serves as the second preset grouping time node, and when the administrator or a voice trigger is detected again, all the participating terminal devices are merged back into a single group.
In any of the above technical solutions, preferably, the receiving unit 202 is further configured to receive grouping scene information of the video conference uploaded by a designated terminal device, together with a grouping condition corresponding to each piece of grouping scene information. The processing apparatus 200 further includes the storage unit 208 configured to pre-store the grouping condition as a preset grouping template, where the preset grouping template is used to place the terminal devices corresponding to the information satisfying the grouping condition into one group when video information satisfying the grouping condition is detected during the video conference.
In this technical scheme, the grouping scene information of the video conference uploaded by the designated terminal device and the grouping condition corresponding to each piece of grouping scene information are received, and the grouping condition is pre-stored as a preset grouping template. This provides a second, more flexible way of performing logical grouping: once all the terminal devices have joined the video conference, an administrator can trigger a grouping condition at any time, for example to place the participants of the same department into the same group.
In any of the above technical solutions, preferably, the grouping unit 204 specifically includes: a parsing subunit 2042 configured to parse the attribute information to determine an attribute identifier of each terminal device and identification information of the user holding the video conference on that device; a recording subunit 2044 configured to record the elapsed time and the video information of the video conference in real time while the conference is running; a first detecting subunit 2046 configured to, when it is detected that the elapsed time of the video conference has reached a grouping time node, place the terminal devices corresponding to the information classified into one class into one group according to the rule associated with that grouping time node; and a second detecting subunit 2048 configured to, when video information satisfying a grouping condition is detected during the video conference, place the terminal devices corresponding to the information satisfying the grouping condition into one group.
In this technical scheme, the elapsed time and the video information of the video conference are recorded in real time while the conference is running. When the elapsed time is detected to have reached a grouping time node, the terminal devices corresponding to the information classified into one class are placed into one group according to the rule associated with that node; alternatively, when video information satisfying a grouping condition is detected, the terminal devices corresponding to the matching information are placed into one group. This simplifies the grouping operations of the video conference, removes the need for cascaded MCUs, and reduces both the redundant data held in the MCU and the pressure of network data exchange.
In any of the above technical solutions, preferably, the creating unit 206 specifically includes: an interaction subunit 2062 configured to detect, after the grouping of the terminal devices participating in the video conference is completed, whether a specified voice excitation signal, a specified video excitation signal, or a specified control instruction is received; and a synthesizing subunit 2064 configured to create, after detecting that a terminal device of any group has generated such a signal or instruction, at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in that group, where the speech synthesis process encodes and synthesizes the speech resources generated by all the terminal devices in the group into a single speech signal, and the video synthesis process encodes and synthesizes the video resources generated by all the terminal devices in the group into a single video signal.
In this technical scheme, after a terminal device of any group is detected to have generated a specified voice excitation signal, a specified video excitation signal, or a specified control instruction, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in that group. In other words, speech synthesis and video synthesis are performed separately within each group, which improves the communication quality and user experience of the video conference while reducing the transmission of redundant data.
In any of the above technical solutions, preferably, the receiving unit 202 is further configured to connect all the terminal devices participating in the video conference using a distributed technique so as to form a flat video conference structure, in which the terminal devices can share and forward communication messages, a communication message being any one of audio information, video information, and text information.
In this technical scheme, all the terminal devices participating in the video conference are connected using a distributed technique, and communication messages can be shared and forwarded among the terminal devices of the flat video conference structure. On the one hand, each terminal device can work as an independent processor and share the computing load of the video system (comprising the server and the terminal devices participating in the conference); on the other hand, the flat structure does away with the cascaded MCU hierarchy, so grouping becomes more flexible and more timely, improving the user experience.
In any of the above technical solutions, preferably, the apparatus further includes: a determination unit 210 configured to determine the terminal devices not participating in the video conference as external terminal devices; and a communication unit 212 configured to respond to a communication message sent by an external terminal device and share the communication message with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device.
In this technical scheme, in response to a communication message sent by an external terminal device, the message is shared with at least one designated terminal device in the flat video conference structure according to a control instruction from a user of a terminal device, so that communication messages originating outside the video system can be brought into the video conference, further improving the user experience.
In any of the above technical solutions, preferably, the receiving unit 202 is further configured to receive a communication message sent by a designated terminal device in the flat video conference structure, and the communication unit 212 is further configured to send the communication message to at least one corresponding external terminal device according to a control instruction from a user of the terminal device.
In this technical scheme, by receiving a communication message sent by a designated terminal device in the flat video conference structure and sending it to at least one corresponding external terminal device according to a control instruction from the user of the terminal device, the content of the video conference can be delivered in time to external terminal devices, in particular to terminal devices whose users cannot conveniently attend the conference.
In any of the above technical solutions, preferably, the grouping unit 204 is further configured to perform, after grouping the terminal devices, nested grouping on the terminal devices within a group according to the attribute information.
In this technical scheme, after the terminal devices are grouped, the terminal devices within a group are further grouped in a nested manner according to the attribute information; that is, any group of terminal devices may itself contain other groups. This further enriches the modes in which a video conference can be held and reduces hardware cost compared with a conventional video conference.
Example three:
fig. 3 shows a schematic block diagram of a terminal device according to an embodiment of the invention.
As shown in fig. 3, a terminal device 300 according to an embodiment of the present invention includes the processing apparatus 200 of the video conference shown in fig. 2.
The terminal device 300 of this embodiment may be a cloud service or any terminal device participating in a video conference, and the steps described above are implemented by hardware such as a microprocessor, CPU, DSP, single-chip microcomputer, or embedded device. The receiving unit 202 and the communication unit 212 may include general-purpose interfaces, antennas, and the like; the grouping unit 204 and the determination unit 210 may include a memory, a communication-channel regulation module, an encoder, a decoder, and the like; the creating unit 206 may include a user interaction interface, a voice sensor, an audio synthesizer, a video synthesizer, and the like; and the storage unit 208 may include a memory, a hard disk, a database, and the like.
Example four:
fig. 4 shows an architecture diagram of a processing scheme of a video conference in the prior art.
Fig. 5 shows an architectural diagram of a processing scheme for a video conference according to an embodiment of the invention.
Fig. 6 shows an architectural diagram of a processing scheme for a video conference according to another embodiment of the present invention.
As shown in fig. 4, an existing video conference architecture usually requires a core MCU 402. Below the core MCU 402 are second-stage MCUs (404 and 406 in fig. 4) and a grouped terminal device cluster 402n, and below the second-stage MCUs are third-stage MCUs (408 and 410 in fig. 4) and further terminal device clusters. The second-stage MCU 404 is associated with a grouped video device cluster 404m, the second-stage MCU 406 with a grouped video device cluster 406p, the third-stage MCU 408 with a grouped video device cluster 408q, and the third-stage MCU 410 with a grouped video device cluster 410s, where n, m, p, q, and s each denote the number of terminal devices in the respective group and are greater than or equal to 0.
As shown in fig. 5 and fig. 6, a processing apparatus 500 for a video conference according to an embodiment of the present invention includes: a protocol and channel control module 504 for regulating the communication mode and time synchronization of the terminal devices; an audio synthesis module 506 for synthesizing the voice information of the terminal devices in any group; a video synthesis module 508 for synthesizing the image information of the terminal devices in any group; an encoding/decoding module 510 for performing encoding and/or decoding operations on video information; a communication module 512; and a plurality of terminal device clusters 502n, 502m, 502p, 502q, and 502s participating in the video conference. The terminal device clusters 502n, 502m, 502p, 502q, and 502s are arranged in a flat manner without a cascaded MCU architecture, and each terminal device can present video information to the conference participants and receive control instructions from its user interaction interface.
As shown in fig. 5, after logical grouping is performed according to a first grouping template, the terminal 5023 serves as the conference manager, a first grouping cluster 500A and a second grouping cluster 500B are formed, and voice information and video information are synthesized within each grouping cluster.
As shown in fig. 6, the first grouping cluster 500A and the second grouping cluster 500B are further grouped in a nested manner, i.e., each is dynamically subdivided internally according to another grouping template: the first grouping cluster 500A comprises a first grouping sub-cluster 500A1 and a second grouping sub-cluster 500A2, and the second grouping cluster 500B comprises a third grouping sub-cluster 500B1 and a fourth grouping sub-cluster 500B2.
A conference after logical grouping can work in a closed mode, in which the audio data and video data of the group are completely isolated from those of the other logical groups.
Alternatively, a conference after logical grouping can work in an open mode, in which the audio data and video data of the group can be output to other groups, and the audio data and video data of other groups can likewise be fed into the current group as input to its audio mixer and video compositor.
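The closed and open working modes described above could be captured by a per-group flag that gates whether media is routed between groups; the following sketch is an assumption-laden illustration, not the patent's implementation.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum

class GroupMode(Enum):
    CLOSED = "closed"  # media stays inside the group
    OPEN = "open"      # media may be exchanged with other open groups

@dataclass
class ConferenceGroup:
    name: str
    mode: GroupMode = GroupMode.CLOSED
    mixer_inputs: list[bytes] = field(default_factory=list)  # streams fed to this group's mixer/compositor

def route_stream(source: ConferenceGroup, target: ConferenceGroup, stream: bytes) -> bool:
    """Forward a media stream between groups only when both groups work in open mode."""
    if source is target:
        target.mixer_inputs.append(stream)
        return True
    if source.mode is GroupMode.OPEN and target.mode is GroupMode.OPEN:
        target.mixer_inputs.append(stream)
        return True
    return False  # closed groups are fully isolated
```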
Example five:
According to an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon which, when executed, performs the following steps: receiving attribute information uploaded by terminal devices requesting to join a video conference; grouping all the terminal devices participating in the video conference according to a preset grouping template and the attribute information; and creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the resulting groups.
In this technical scheme, attribute information uploaded by the terminal devices requesting to join the video conference is received, and all the terminal devices participating in the video conference are grouped according to a preset grouping template and the attribute information. This is essentially a logical grouping scheme: no cascaded MCUs need to be set up for grouping control, so users neither perform grouping operations nor configure an MCU hierarchy, which greatly simplifies their operation steps. Moreover, because there is no hierarchical relationship between MCUs, the redundant-data problem of a parent-level MCU is eliminated, and during the video conference the participants can be regrouped automatically according to the attribute information without pausing the conference.
In addition, at least one speech synthesis process and/or at least one video synthesis process is created for the terminal devices in any one of the resulting groups. A speech synthesis process can be encapsulated as an audio mixer that mixes multiple speech streams into a single stream, so that all participants in the group can hear several speakers in the group at the same time. A video synthesis process can be encapsulated as a video compositor that combines multiple video pictures into one multi-split-screen picture, so that all participants in the group can view several video pictures within the group at the same time.
Finally, the frame rate, volume, clarity, and picture layout of the terminal devices in the same group can be set according to the attribute information, so that different terminal devices in the same group behave differently during the video conference.
Specifically, the attribute information mainly includes hardware information of the terminal device (such as its identification code, device model, and the code of the communication standard protocol it uses) and personal information of the user operating it (such as the user's name, age, native place, employee number, department, region, and title).
In any of the above technical solutions, preferably, before receiving attribute information uploaded by a terminal device requesting a video conference, the method further includes: receiving a grouping time node of a video conference uploaded by a designated terminal device and a grouping template corresponding to any grouping time node; and pre-storing the grouping template as a preset grouping template, wherein the preset grouping template is used for dividing the information which accords with the same rule in the attribute information into one type according to the sequence indication of the grouping time nodes and dividing the terminal equipment corresponding to the information which is divided into one type into one group.
In the technical scheme, by receiving the packet time node of the video conference uploaded by the appointed terminal equipment, and a grouping template corresponding to any grouping time node, pre-storing the grouping template as a preset grouping template, setting the corresponding grouping time node according to a preset video conference process, grouping all the terminal devices participating in the video conference according to a preset grouping template when reaching the corresponding grouping time node, for example, after the first-stage conference is finished, detecting that an administrator or voice excitation triggers the grouping time node, then all the participated terminal devices are logically grouped according to a grouping template corresponding to the first preset grouping time node, the time when the free group discussion ends is taken as a second preset grouping time node, when detecting that the administrator or voice stimulus triggers the grouping time node again, all the participating terminal devices are merged into the same group.
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting the video conference, the method further includes: receiving grouping scene information of the video conference uploaded by a designated terminal device and a grouping condition corresponding to each item of grouping scene information; and pre-storing the grouping condition as the preset grouping template, wherein the preset grouping template is used to group the terminal devices corresponding to information meeting the grouping condition into one group when video information meeting that condition is detected during the video conference.
In this technical scheme, the grouping scene information of the video conference uploaded by the designated terminal device, and the grouping condition corresponding to each item of grouping scene information, are received, and the grouping condition is pre-stored as a preset grouping template. This provides a second, more flexible way of logical grouping: after all terminal devices have joined the video conference, the administrator can trigger a grouping condition at any time, for example to place participants from the same department into the same group.
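Condition-triggered grouping can be sketched as a table of predicates over the attribute records introduced above; the scene names and predicates below are illustrative assumptions, not conditions defined by the patent.

```python
# Each item of grouping scene information maps to a grouping condition, here a
# predicate that returns the group key for a terminal, or None if it does not match.
GROUPING_CONDITIONS = {
    "department_breakout": lambda attr: attr.department,
    "managers_only":       lambda attr: "managers" if attr.title == "manager" else None,
}

def apply_condition(scene: str, participants):
    """Group the terminals whose attribute records satisfy the triggered condition."""
    condition = GROUPING_CONDITIONS[scene]
    groups: dict[str, list[str]] = {}
    for attr in participants:
        key = condition(attr)
        if key is not None:                     # non-matching terminals stay ungrouped
            groups.setdefault(key, []).append(attr.terminal_id)
    return groups
```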
In any of the above technical solutions, preferably, grouping all terminal devices participating in the video conference according to the preset grouping template and the attribute information includes: parsing the attribute information to determine the attribute identifier of each terminal device and the identification information of the user holding the video conference on it; recording the time and the video information of the video conference in real time while the conference is in progress; when the conference time reaches a grouping time node, grouping the terminal devices corresponding to information classified into one class according to the rule corresponding to that node; or, when video information meeting a grouping condition is detected during the conference, grouping the terminal devices corresponding to that information into one group.
According to this technical scheme, the time and the video information of the video conference are recorded in real time while the conference is in progress. When the conference time is detected to reach a grouping time node, the terminal devices corresponding to information classified into one class are grouped according to the rule for that node; or, when video information meeting a grouping condition is detected, the terminal devices corresponding to that information are grouped together. This simplifies the grouping steps of the video conference, removes the need for cascaded MCUs, and reduces redundant data in the MCU as well as the pressure of network data interaction.
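The runtime behaviour described above, recording elapsed conference time and reacting to either kind of trigger, might look like the following loop; the time-node offsets and the `detect_video_event` and `regroup` callbacks are placeholders invented for this example.

```python
import time

def run_conference(start_ts, participants, detect_video_event, regroup):
    """Record elapsed conference time and detected video events, regrouping when
    either a grouping time node or a grouping condition is hit."""
    fired = set()
    time_nodes = [(1800, "end_of_opening_session"),   # offsets in seconds, illustrative
                  (3600, "end_of_free_discussion")]
    while True:
        elapsed = time.time() - start_ts
        for offset, node in time_nodes:
            if elapsed >= offset and node not in fired:
                fired.add(node)
                regroup(node, participants)           # time-node trigger
        scene = detect_video_event()                  # e.g. admin command or voice excitation
        if scene is not None:
            regroup(scene, participants)              # grouping-condition trigger
        time.sleep(1.0)
```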
In any of the above technical solutions, preferably, creating at least one voice synthesis process and/or at least one video synthesis process for the terminal devices in any of the groups specifically includes: after the grouping of the terminal devices participating in the video conference is completed, detecting whether a specified voice excitation signal, a specified video excitation signal or a specified control instruction is received; and after detecting that a terminal device in any group generates such a signal or instruction, creating at least one voice synthesis process and/or at least one video synthesis process for the terminal devices in that group, wherein the voice synthesis process encodes and synthesizes the voice resources generated by all terminal devices in the group into one voice signal, and the video synthesis process encodes and synthesizes the video resources generated by all terminal devices in the group into one video signal.
In this technical scheme, after a terminal device in any group is detected to generate a specified voice excitation signal, a specified video excitation signal or a specified control instruction, at least one voice synthesis process and/or at least one video synthesis process is created for the terminal devices in that group; that is, voice synthesis and video synthesis are performed separately within each group, which reduces the transmission of redundant data while improving the communication quality and user experience of the video conference.
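One way to realize per-group synthesis processes is with operating-system processes created on demand, as sketched below; the worker bodies are placeholders and the signal-type strings are assumptions made for this example.

```python
import multiprocessing as mp

def speech_synthesis_worker(group_id, in_queue):
    """Placeholder: would pull per-terminal audio from in_queue and emit one mixed stream."""
    ...

def video_synthesis_worker(group_id, in_queue):
    """Placeholder: would pull per-terminal frames from in_queue and emit one split-screen stream."""
    ...

def on_excitation(group_id: str, signal_type: str) -> list[mp.Process]:
    """Create the synthesis processes for a group once a voice excitation signal,
    a video excitation signal or a control instruction is detected."""
    processes = []
    if signal_type in ("voice", "control"):
        processes.append(mp.Process(target=speech_synthesis_worker, args=(group_id, mp.Queue())))
    if signal_type in ("video", "control"):
        processes.append(mp.Process(target=video_synthesis_worker, args=(group_id, mp.Queue())))
    for p in processes:
        p.start()
    return processes
```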
In any of the above technical solutions, preferably, before receiving the attribute information uploaded by the terminal devices requesting the video conference, the method further includes: connecting all terminal devices participating in the video conference by means of a distributed technology to form a flattened video conference structure, in which the terminal devices can share and transmit communication messages, the communication messages including any of audio information, video information and text information.
In this technical scheme, all terminal devices participating in the video conference are connected by a distributed technology, and communication messages can be shared and transmitted among the terminal devices in the flattened video conference structure. On the one hand, each terminal device can work as an independent processor and share the computing load within the video system (the server and the multiple terminal devices participating in the conference); on the other hand, the flattened structure does away with the cascaded MCU hierarchy, so grouping becomes more flexible and timely, which improves the user experience.
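A flattened structure with no MCU hierarchy can be sketched as a flat registry in which every terminal registers a send callback and any message can be relayed directly to any subset of peers; the class and method names are invented for this illustration.

```python
class FlatConference:
    """Flattened structure: every terminal registers directly with the conference,
    so any audio, video or text message can be relayed to any subset of peers
    without passing through an MCU hierarchy."""

    def __init__(self):
        self.terminals = {}                     # terminal_id -> send callback

    def join(self, terminal_id, send_fn):
        self.terminals[terminal_id] = send_fn

    def share(self, sender_id, message, targets=None):
        """Relay a communication message to the chosen peers (all peers by default)."""
        for tid, send in self.terminals.items():
            if tid != sender_id and (targets is None or tid in targets):
                send(message)
```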
In any of the above technical solutions, preferably, the method further includes: determining terminal devices other than those participating in the video conference as external terminal devices; and, in response to a communication message sent by an external terminal device, sharing the communication message with at least one specified terminal device in the flattened video conference structure according to a control instruction from a user of a terminal device.
In this technical scheme, in response to a communication message sent by an external terminal device, the communication message is shared with at least one specified terminal device in the flattened video conference structure according to a control instruction from a user of a terminal device, so that communication messages from outside the video system can be brought into the video conference, further improving the user experience.
In any of the above technical solutions, preferably, the method further includes: receiving a communication message sent by a designated terminal device in the flattened video conference structure; and sending the communication message to at least one corresponding external terminal device according to a control instruction from the user of that terminal device.
In this technical scheme, by receiving a communication message sent by a designated terminal device in the flattened video conference structure and sending it to at least one corresponding external terminal device according to a control instruction from the user of that terminal device, the content of the video conference can be delivered in time to external devices, in particular to terminal devices whose users cannot conveniently join the conference.
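Both directions of message exchange with external terminal devices, inbound sharing into the conference and outbound delivery of conference content, can be sketched as an extension of the flat registry shown earlier; again, all names are illustrative assumptions.

```python
class GatewayConference(FlatConference):
    """Adds message exchange with external terminal devices that are not participants."""

    def __init__(self):
        super().__init__()
        self.external = {}                      # external_id -> send callback

    def share_inbound(self, message, targets):
        """Share a message from an external device with the specified participants,
        as directed by a participant's control instruction."""
        for tid in targets:
            self.terminals[tid](message)

    def send_outbound(self, message, external_targets):
        """Deliver conference content to external devices that could not join."""
        for ext_id in external_targets:
            self.external[ext_id](message)
```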
In any of the above technical solutions, preferably, the method further includes: after the terminal devices are grouped, performing nested grouping of the terminal devices within a group according to the attribute information.
In this technical scheme, after the terminal devices are grouped, the terminal devices within a group are further grouped in a nested manner according to the attribute information; that is, any group of terminal devices may itself contain other groups. This further enriches the forms the video conference can take and reduces hardware cost compared with a conventional video conference.
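Nested grouping can be sketched as splitting each existing group by a second attribute field, for example first by department and then by region; the helper below is an illustrative assumption, not the claimed implementation.

```python
def nest_groups(groups, attrs_by_terminal, inner_key: str):
    """Split each existing group into nested sub-groups by a second attribute field,
    e.g. first by department and then by region within each department."""
    nested = {}
    for name, terminal_ids in groups.items():
        inner: dict[str, list[str]] = {}
        for tid in terminal_ids:
            inner.setdefault(getattr(attrs_by_terminal[tid], inner_key), []).append(tid)
        nested[name] = inner
    return nested
```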
The technical scheme of the present invention has been described in detail above with reference to the accompanying drawings. The present invention provides a video conference processing method, an apparatus, a terminal device and a computer-readable storage medium in which attribute information uploaded by terminal devices requesting a video conference is received and all terminal devices participating in the conference are grouped according to a preset grouping template and the attribute information. This is essentially a logical grouping scheme that requires no cascaded MCUs for grouping control, so the user does not need to perform grouping operations or MCU hierarchy configuration, which greatly simplifies the user's operation steps.
The steps in the method of the present invention can be reordered, combined or deleted according to actual needs.
The units in the device of the present invention can be combined, divided or deleted according to actual needs.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), or other memory such as magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (9)
1. A method for processing a video conference, comprising:
receiving attribute information uploaded by terminal equipment requesting for a video conference;
grouping all the terminal devices participating in the video conference according to a preset grouping template and the attribute information;
creating at least one voice synthesis process and/or at least one video synthesis process for the terminal equipment in any one of the grouped groups;
receiving a grouping time node of a video conference uploaded by a designated terminal device and a grouping template corresponding to any one grouping time node;
pre-storing the grouping template as the preset grouping template,
the preset grouping template is used for dividing the information which accords with the same rule in the attribute information into one type according to the sequence indication of the grouping time nodes, and is used for dividing the terminal equipment corresponding to the information which is divided into one type into one group;
the attribute information includes hardware information of the terminal device and individual information of a user using the terminal device.
2. The method for processing the video conference according to claim 1, further comprising, before receiving the attribute information uploaded by the terminal device requesting the video conference:
receiving grouping scene information of a video conference uploaded by appointed terminal equipment and a grouping condition corresponding to any one of the grouping scene information;
pre-storing the grouping condition as the preset grouping template;
the preset grouping template is used for grouping terminal equipment corresponding to the information meeting the grouping condition into a group when the video information meeting the grouping condition in the video conference process is detected.
3. The method for processing the video conference according to claim 2, wherein grouping all the terminal devices participating in the video conference according to the preset grouping template and the attribute information specifically comprises:
analyzing the attribute information to determine the attribute identification of the terminal equipment and the identification information of a user using the terminal equipment to carry out a video conference;
recording the time and video information of the video conference in real time in the continuous process of the video conference;
when the video conference time is detected to reach the grouping time node, the terminal devices corresponding to the information classified into one type are grouped into one group according to the rule corresponding to the grouping time node; or
when detecting that the video information in the video conference process meets the grouping condition, grouping the terminal devices corresponding to the information meeting the grouping condition into a group.
4. The method according to claim 1 or 2, wherein creating at least one speech synthesis process and/or at least one video synthesis process for the terminal devices in any one of the grouped groups specifically comprises:
after grouping of the terminal devices participating in the video conference is completed, detecting whether a specified voice excitation signal or a specified video excitation signal or a specified control instruction is received;
after detecting that one terminal device of any group generates the specified voice excitation signal or the specified video excitation signal or the specified control instruction, creating at least one voice synthesis process and/or at least one video synthesis process for the terminal devices in any group,
wherein the voice synthesis process is used for coding and synthesizing voice resources generated by all the terminal devices in any group into a voice signal, and the video synthesis process is used for coding and synthesizing video resources generated by all the terminal devices in any group into a video signal.
5. The method for processing the video conference according to claim 1 or 2, wherein before receiving the attribute information uploaded by the terminal device requesting the video conference, the method further comprises:
accessing all the terminal devices participating in the video conference by adopting a distributed technology to form a flattened video conference structure,
the terminal devices in the flattened video conference structure can share and transmit communication messages, wherein the communication messages comprise any one of audio information, video information and text information.
6. The method for processing video conference according to claim 5, further comprising:
determining terminal equipment other than the terminal equipment participating in the video conference as external terminal equipment;
and responding to the communication message sent by the external terminal equipment, and sharing the communication message to at least one specified terminal equipment in the flattened video conference structure according to a control instruction of a user of the terminal equipment.
7. The method for processing video conference according to claim 6, further comprising:
receiving a communication message sent by appointed terminal equipment in the flattened video conference structure;
and sending the communication message to at least one corresponding external terminal device according to a control instruction of the user of the terminal device.
8. The video conference processing method according to claim 1 or 2, further comprising:
and after grouping the terminal equipment, nesting and grouping the terminal equipment in the group according to the attribute information.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the processing method of a video conference according to any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811615835.2A CN109688365B (en) | 2018-12-27 | 2018-12-27 | Video conference processing method and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109688365A CN109688365A (en) | 2019-04-26 |
CN109688365B true CN109688365B (en) | 2021-02-19 |
Family
ID=66190681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811615835.2A Active CN109688365B (en) | 2018-12-27 | 2018-12-27 | Video conference processing method and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109688365B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110933359B (en) * | 2020-01-02 | 2021-07-02 | 随锐科技集团股份有限公司 | Intelligent video conference layout method and device and computer readable storage medium |
CN111654661B (en) * | 2020-06-17 | 2022-03-01 | 深圳康佳电子科技有限公司 | Video conference annotation method, video conference server and storage medium |
CN113891032A (en) * | 2021-03-17 | 2022-01-04 | 广州市保伦电子有限公司 | Method and server for automatically switching layout of video conference terminal of mobile terminal |
CN112969039B (en) * | 2021-05-18 | 2021-08-03 | 浙江华创视讯科技有限公司 | Video fusion method, device and equipment and readable storage medium |
CN113934336B (en) * | 2021-12-16 | 2022-03-29 | 游密科技(深圳)有限公司 | Video conference packet interaction method and device, computer equipment and storage medium |
CN115914542A (en) * | 2022-10-20 | 2023-04-04 | 海南乾唐视联信息技术有限公司 | Conference processing method and device, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0537933A (en) * | 1991-08-01 | 1993-02-12 | Nec Corp | Inter-multiplace video conference system |
WO2008078555A1 (en) * | 2006-12-22 | 2008-07-03 | Nec Corporation | Conference control method, system, and program |
CN101370114A (en) * | 2008-09-28 | 2009-02-18 | 深圳华为通信技术有限公司 | Video and audio processing method, multi-point control unit and video conference system |
CN101984662A (en) * | 2010-12-02 | 2011-03-09 | 上海华平信息技术股份有限公司 | Detachable video conference system and method |
CN102111603A (en) * | 2009-12-23 | 2011-06-29 | 中国移动通信集团公司 | Method for realizing sub-conference in IMS video conference, and device and system thereof |
CN105611219A (en) * | 2014-11-24 | 2016-05-25 | 中兴通讯股份有限公司 | Method and device for processing video conference |
CN105635629A (en) * | 2014-10-31 | 2016-06-01 | 鸿富锦精密工业(武汉)有限公司 | Video conference device and method |
CN106101603A (en) * | 2016-06-23 | 2016-11-09 | 广州派诺网络技术有限公司 | A kind of flattening video communication method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020122112A1 (en) * | 1998-04-10 | 2002-09-05 | Raoul Mallart | Group-wise video conferencing uses 3d-graphics model of broadcast event |
CN102811205A (en) * | 2011-06-02 | 2012-12-05 | 中兴通讯股份有限公司 | Method and system for realizing sub-conference function by application server |
US9582496B2 (en) * | 2014-11-03 | 2017-02-28 | International Business Machines Corporation | Facilitating a meeting using graphical text analysis |
CN108063672B (en) * | 2016-11-07 | 2019-03-01 | 视联动力信息技术股份有限公司 | A kind of management method and device of video conference terminal |
Also Published As
Publication number | Publication date |
---|---|
CN109688365A (en) | 2019-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109688365B (en) | Video conference processing method and computer-readable storage medium | |
KR101617906B1 (en) | Video conferencing subscription using multiple bit rate streams | |
CN109640029B (en) | Method and device for displaying video stream on wall | |
CN105072143A (en) | Interaction system for intelligent robot and client based on artificial intelligence | |
CN101420316B (en) | Video distribution system and video relay device | |
CN110049271B (en) | Video networking conference information display method and device | |
CN106161814A (en) | The sound mixing method of a kind of Multi-Party Conference and device | |
CN108924582A (en) | Video recording method, computer readable storage medium and recording and broadcasting system | |
CN110536100B (en) | Video networking conference recording method and system | |
WO2016074326A1 (en) | Channel switching method, apparatus and system | |
WO2021143043A1 (en) | Multi-person instant messaging method, system, apparatus and electronic device | |
CN111447391A (en) | Conference data synchronization method and device, computer equipment and storage medium | |
CN111327868B (en) | Method, terminal, server, equipment and medium for setting conference speaking party roles | |
CN111541905B (en) | Live broadcast method and device, computer equipment and storage medium | |
US20190182304A1 (en) | Universal messaging protocol for limited payload size | |
CN113542660A (en) | Method, system and storage medium for realizing conference multi-picture high-definition display | |
CN110457575B (en) | File pushing method, device and storage medium | |
US20070220162A1 (en) | Media processing abstraction model | |
JP7377352B2 (en) | Multi-member instant messaging method, system, device, electronic device, and computer program | |
CN110381285B (en) | Conference initiating method and device | |
CN111405230B (en) | Conference information processing method and device, electronic equipment and storage medium | |
CN110958419B (en) | Video networking conference processing method and device, electronic equipment and storage medium | |
CN109963107B (en) | Audio and video data display method and system | |
CN111885351A (en) | Screen display method and device, terminal equipment and storage medium | |
CN110557595A (en) | Method and device for accessing mobile terminal to video conference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |