CN117560464B - Multi-platform video conference method and system

Info

Publication number
CN117560464B
CN117560464B (application CN202410033903.3A)
Authority
CN
China
Prior art keywords
user
coefficient
threshold
user equipment
environment
Prior art date
Legal status
Active
Application number
CN202410033903.3A
Other languages
Chinese (zh)
Other versions
CN117560464A (en)
Inventor
谢志强
吴强
熊亚军
Current Assignee
Shenzhen Cloudroom Technology Co ltd
Original Assignee
Shenzhen Cloudroom Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Cloudroom Technology Co ltd
Priority to CN202410033903.3A
Publication of CN117560464A
Application granted
Publication of CN117560464B
Status: Active


Classifications

    • H04N7/15 Conference systems (H04N: pictorial communication, e.g. television; H04N7/14: systems for two-way working)
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences
    • H04L65/752 Media network packet handling adapting media to network capabilities
    • H04L65/756 Media network packet handling adapting media to device capabilities
    • H04L65/80 Responding to QoS
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a multi-platform video conference method and system in the technical field of video communication optimization. A second user environment acquisition unit is introduced so that the system can sense the user's current environment, including factors such as light intensity and noise. Through the comprehensive processing of the second analysis unit, video exposure and noise interference are effectively controlled and conference quality is further optimized, so that users enjoy a clear, high-quality video conference regardless of the environment they are in. The first configuration unit generates a dynamic configuration strategy from the network stability coefficient and the device compatibility coefficient of the user equipment, realizing multi-platform adaptation.

Description

Multi-platform video conference method and system
Technical Field
The invention relates to the technical field of video communication optimization, in particular to a multi-platform video conference method and system.
Background
With the rapid development of technology, video conferencing has become widely used as an efficient remote collaboration mode in fields such as business, education and medical treatment. Because of the diversity of user devices and network environments, traditional systems struggle to monitor and adapt in real time to changes in device performance and network conditions, so video conferences suffer from picture stuttering, audio-video desynchronization and similar problems that degrade the user experience.
Traditional technology also fails to effectively monitor and adjust for the current environment of the platform device the user is on, including the influence of factors such as light intensity and noise. As a result, in different environments the user may face problems such as picture underexposure and noise interference that reduce conference quality.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides a multi-platform video conference method and system to solve the problems described in the background art.
(II) Technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme: a multi-platform video conference system comprises a first user data acquisition unit, a first analysis unit, a first configuration unit, a second user environment acquisition unit, a second analysis unit and a second configuration unit;
the first user data acquisition unit is used for monitoring the performance and network condition of the user equipment in real time and establishing a user data set;
The first analysis unit is used for carrying out analysis and calculation according to a user data set to obtain: a user equipment network stability coefficient Wdx and a device compatibility coefficient JRx;
The first configuration unit compares the user equipment network stability coefficient Wdx with a first stability threshold X and compares the device compatibility coefficient JRx with a second compatibility threshold Q to automatically generate a corresponding configuration strategy;
The second user environment acquisition unit is used for positioning the user equipment region, prompting a user to start an ambient light sensor in the equipment camera, acquiring the light intensity of the current environment of the user, acquiring the environment background when the user inputs voice, and acquiring the environment noise data; establishing a user environment data set;
The second analysis unit is used for carrying out analysis and calculation according to the user environment data set to obtain: a video exposure coefficient BGx, a noise influence coefficient Zyx and a user liveness Hyd; and for correlating the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd to obtain a comprehensive fine tuning coefficient Wtxs;
The second configuration unit is configured to compare the comprehensive fine tuning coefficient Wtxs with a third preset threshold R to obtain a corresponding fine tuning strategy.
Preferably, the first user data acquisition unit comprises a memory capacity acquisition module, a processor performance acquisition module and a network data acquisition module;
the memory capacity acquisition module is used for acquiring the actual memory capacity NCRL of the user equipment, and takes GB as a unit;
the processor performance acquisition module is used for acquiring a processor main frequency value ZP and a core number HXS of the user equipment;
The network data acquisition module is used for acquiring an uploading bandwidth SCDK, a downloading bandwidth XZDK, a round trip time RTT, a response time XT and a packet loss rate DBL in the running process of the user equipment.
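For illustration, the fields gathered by these three modules can be viewed as one record per sampling instant. The following minimal Python sketch is an assumption made for exposition, not part of the claimed system; its field names simply mirror the symbols defined above.

from dataclasses import dataclass

@dataclass
class UserDataRecord:
    # One real-time sample of device performance and network condition
    ncrl_gb: float    # actual memory capacity NCRL, in GB
    zp_ghz: float     # processor main frequency value ZP, in GHz
    hxs: int          # core number HXS
    scdk_mbps: float  # upload bandwidth SCDK, in Mbps
    xzdk_mbps: float  # download bandwidth XZDK, in Mbps
    rtt_ms: float     # round trip time RTT, in ms
    xt_ms: float      # response time XT, in ms
    dbl_pct: float    # packet loss rate DBL, in percent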
Preferably, the first analysis unit comprises a user data set processing module, a network stability calculation module and a device compatibility calculation module;
The user data set processing module is used for processing the user data set, including data cleaning, format standardization, de-duplication processing, timestamp processing and data normalization;
The network stability calculation module is configured to extract the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL in the user data set, and generate the user equipment network stability coefficient Wdx according to the following formula:
Wdx = K1×SCDK + K2×XZDK + K3×RTT + K4×XT + K5×DBL;
wherein K1, K2, K3, K4 and K5 represent the weight values of the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL, and K1+K2+K3+K4+K5=1.0;
The device compatibility calculation module is configured to extract the actual memory capacity NCRL, processor main frequency value ZP and core number HXS in the user data set, and generate the device compatibility coefficient JRx according to the following formula:
JRx = K6×NCRL + K7×ZP + K8×HXS;
where K6, K7 and K8 represent the weight values of the actual memory capacity NCRL, the processor main frequency value ZP and the core number HXS, and K6+K7+K8=1.0.
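As a hedged sketch of the two weighted sums just defined, the Python functions below compute Wdx and JRx; the default weight tuples are the ones used in the worked example of Example 3 and are otherwise free parameters.

def network_stability(scdk, xzdk, rtt, xt, dbl, k=(0.2, 0.3, 0.1, 0.2, 0.2)):
    # Wdx = K1*SCDK + K2*XZDK + K3*RTT + K4*XT + K5*DBL, weights summing to 1.0
    assert abs(sum(k) - 1.0) < 1e-9
    return k[0]*scdk + k[1]*xzdk + k[2]*rtt + k[3]*xt + k[4]*dbl

def device_compatibility(ncrl, zp, hxs, k=(0.4, 0.3, 0.3)):
    # JRx = K6*NCRL + K7*ZP + K8*HXS, weights summing to 1.0
    assert abs(sum(k) - 1.0) < 1e-9
    return k[0]*ncrl + k[1]*zp + k[2]*hxs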
Preferably, the first configuration unit is configured to generate a corresponding configuration policy according to the user equipment network stability coefficient Wdx and the device compatibility coefficient JRx, including the following procedures:
Comparing the user equipment network stability coefficient Wdx with a first stability threshold X, comparing the equipment compatibility coefficient JRx with a second compatibility threshold Q when the user equipment network stability coefficient Wdx is more than or equal to the first stability threshold X, and automatically generating a first configuration strategy when the equipment compatibility coefficient JRx is more than or equal to the second compatibility threshold Q; the first configuration strategy comprises configuring video resolution to be 4K-8K (UHD) and frame rate to be 120 frames/second;
when the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X, but the equipment compatibility coefficient JRx is larger than or equal to the second compatibility threshold Q, automatically generating a second configuration strategy; the second configuration strategy comprises the steps of configuring video resolution to be 720p and configuring video frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is more than or equal to the first stability threshold X, but the equipment compatibility coefficient JRx is less than the second compatibility threshold Q, automatically generating a third configuration strategy; the third configuration strategy comprises the steps of configuring the video resolution to be 1080p, and configuring the frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X and the equipment compatibility coefficient JRx of the user equipment is smaller than the second compatibility threshold Q, automatically generating a fourth configuration strategy; the fourth configuration strategy includes configuring the video resolution to be 480p and the frame rate to be 30 frames/second.
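The four branches above form a simple two-threshold decision table. A minimal sketch, assuming Wdx, JRx and both thresholds are already available (the returned dictionaries are illustrative shorthand, not normative values):

def configuration_policy(wdx, jrx, x=8.0, q=5.0):
    # Map (network stability, device compatibility) onto one of four strategies
    if wdx >= x and jrx >= q:
        return {"strategy": 1, "resolution": "4K-8K UHD", "fps": 120}
    if wdx < x and jrx >= q:
        return {"strategy": 2, "resolution": "720p", "fps": 60}
    if wdx >= x and jrx < q:
        return {"strategy": 3, "resolution": "1080p", "fps": 60}
    return {"strategy": 4, "resolution": "480p", "fps": 30}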
Preferably, the second user environment collection unit is configured to collect an environment data set in real time, and the specific steps include:
S21, acquiring regional information of the user equipment by using equipment positioning service, and realizing acquisition by calling a positioning interface of the equipment;
S22, a prompt is sent to the user requesting permission for the application to access the device's ambient light sensor, via a notification or a pop-up permission dialog; once the user allows access, the real-time light intensity value gxqdz of the current environment and the user equipment camera aperture value gqz are collected by calling the device's ambient light sensor interface;
S23, while the user inputs voice, background information of the current environment is collected at the same time: a short audio clip is recorded through the device microphone to capture the sounds and acoustic characteristics of the environment, including the ambient noise present during speech input; analysis then yields a real-time noise value Zyz and a voice signal value Xhz;
S24, the collected user region, real-time light intensity value gxqdz, user equipment camera aperture value gqz, real-time noise value Zyz and voice signal value Xhz are sorted and stored, and a user environment data set is established.
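Steps S21 to S24 amount to assembling one environment record per user. The sketch below stubs out the positioning, light-sensor and microphone calls as injected callbacks, since the concrete platform APIs are not specified in this document; every name here is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class EnvironmentRecord:
    region: str    # S21: device positioning result
    gxqdz: float   # S22: real-time light intensity value
    gqz: float     # S22: user equipment camera aperture value
    zyz: float     # S23: real-time noise value
    xhz: float     # S23: voice signal value

def collect_environment(locate, read_light_sensor, sample_microphone):
    # Build and store the user environment data set (S24) from platform callbacks
    region = locate()                  # S21: positioning interface
    gxqdz, gqz = read_light_sensor()   # S22: after the user grants sensor access
    zyz, xhz = sample_microphone()     # S23: short audio capture during speech
    return EnvironmentRecord(region, gxqdz, gqz, zyz, xhz)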
Preferably, the second analysis unit comprises a video exposure analysis module, a noise influence analysis module and a user activity calculation module;
The video exposure analysis module is used for extracting the real-time light intensity value gxqdz and the user equipment camera aperture value gqz in the user environment data set and, after dimensionless processing, generating the video exposure coefficient BGx by the following formula:
BGx = W1×gxqdz + W2×gqz;
wherein W1 and W2 represent the proportionality coefficients of the real-time light intensity value gxqdz and the user equipment camera aperture value gqz, where W1=0.6 and W2=0.4;
The noise influence analysis module is used for extracting the real-time noise value Zyz and the voice signal value Xhz in the user environment data set and, after dimensionless processing, generating the noise influence coefficient Zyx by the following formula:
Zyx = W3×Zyz + W4×Xhz;
where W3 and W4 represent the proportionality coefficients of the real-time noise value Zyz and the voice signal value Xhz, where W3=0.6 and W4=0.4;
The user liveness calculation module is used for collecting the user operation frequency cPL, conference use duration SJ and voice input frequency yPL during the platform video conference and, after dimensionless processing, generating the user liveness Hyd by the following formula:
Hyd = W5×cPL + W6×SJ + W7×yPL;
where W5, W6 and W7 represent the weight values of the user operation frequency cPL, the conference use duration SJ and the voice input frequency yPL, and W5+W6+W7=1.0.
Preferably, the second analysis unit further includes an association module, configured to associate the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd, generating the comprehensive fine tuning coefficient Wtxs by the following formula:
Wtxs = F1×BGx + F2×Zyx + F3×Hyd + C;
where F1, F2 and F3 represent the weight values of the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd respectively, F1+F2+F3=1.0, and C is a correction constant.
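A compact sketch of the three per-factor coefficients and their association follows. The 0.6/0.4 proportionality coefficients are stated above; the liveness weights, the association weights F1 to F3 and the correction constant C default to the values used in Example 5 and are otherwise assumptions.

def video_exposure(gxqdz, gqz, w1=0.6, w2=0.4):
    # BGx = W1*gxqdz + W2*gqz
    return w1*gxqdz + w2*gqz

def noise_impact(zyz, xhz, w3=0.6, w4=0.4):
    # Zyx = W3*Zyz + W4*Xhz
    return w3*zyz + w4*xhz

def user_liveness(cpl, sj, ypl, w=(0.4, 0.3, 0.3)):
    # Hyd = W5*cPL + W6*SJ + W7*yPL, weights summing to 1.0
    return w[0]*cpl + w[1]*sj + w[2]*ypl

def integrated_tuning(bgx, zyx, hyd, f=(0.5, 0.3, 0.2), c=5.0):
    # Wtxs = F1*BGx + F2*Zyx + F3*Hyd + C, with F1+F2+F3 = 1.0
    return f[0]*bgx + f[1]*zyx + f[2]*hyd + c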
Preferably, the second configuration unit is configured to compare the integrated trimming coefficient Wtxs with a third preset threshold R, where the third preset threshold R includes a threshold R1, a threshold R2, and a threshold R3, and obtain a corresponding trimming policy, and the method includes:
When the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R1, a first fine tuning strategy is obtained, which includes increasing the camera exposure time by more than 30% of the current value and enlarging the camera aperture to f/2.8 to f/5.6 so as to balance depth of field and light intake; when the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R1, the current light is qualified and no adjustment is needed;
When the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R2, a second fine tuning strategy is obtained, which includes adjusting the microphone sensitivity to reduce the capture of ambient noise; when the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R2, the current noise is qualified and no adjustment is needed;
When the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R3, a third fine tuning strategy is obtained, which includes allocating more than 30% of computing resources to video encoding and decoding to improve response speed; when the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R3, the current activity is normal and no adjustment is needed.
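Since the three comparisons above are independent (light against R1, noise against R2, liveness against R3), one reasonable reading is that the unit returns a set of adjustments rather than a single branch; the sketch below encodes that assumption, with default thresholds taken from Example 6.

def fine_tuning_policy(wtxs, r1=50.0, r2=45.0, r3=55.0):
    # Compare Wtxs against the three sub-thresholds of R independently
    actions = []
    if wtxs < r1:
        actions.append("raise camera exposure time by >30%; open aperture to f/2.8-f/5.6")
    if wtxs < r2:
        actions.append("lower microphone sensitivity to reduce ambient noise capture")
    if wtxs >= r3:
        actions.append("allocate >30% more computing resources to video encoding/decoding")
    return actions if actions else ["no adjustment needed"]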
Preferably, the system further comprises a real-time sharing unit for providing real-time text chat, file sharing and screen sharing functions for multiple users during the video conference, so as to facilitate collaboration and information exchange during the conference.
A multi-platform video conference method comprises the following steps:
Step one, starting the video conference system and initializing each module; the first user data acquisition unit monitors the performance and network condition of the user equipment in real time and establishes a user data set; the first analysis unit analyzes and calculates the user data set to obtain a user equipment network stability coefficient Wdx and a device compatibility coefficient JRx; the first configuration unit compares the user equipment network stability coefficient Wdx with a first stability threshold X and compares the device compatibility coefficient JRx with a second compatibility threshold Q to automatically generate a corresponding configuration strategy;
Step two, performing a first adjustment for the user according to the corresponding configuration strategy;
Step three, the second user environment acquisition unit acquires an environment data set in real time, including the user equipment region, light intensity, environment background and noise data; the second analysis unit calculates the video exposure coefficient BGx, noise influence coefficient Zyx and user liveness Hyd from the user environment data set; the association module associates BGx, Zyx and Hyd to obtain the comprehensive fine tuning coefficient Wtxs; the second configuration unit compares the comprehensive fine tuning coefficient Wtxs with a third preset threshold R to obtain a corresponding fine tuning strategy;
Step four, performing a secondary adjustment for the user according to the fine tuning strategy;
Step five, the real-time sharing unit provides real-time text chat, file sharing and screen sharing functions.
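Read end to end, the five steps form a straight pipeline. The sketch below wires together the illustrative functions introduced earlier in this description; the activity argument is a (cPL, SJ, yPL) tuple, and the real-time sharing of step five is left outside the sketch.

def run_conference_setup(record, env, activity, x=8.0, q=5.0):
    # Steps one and two: measure the device and network, derive Wdx and JRx, configure
    wdx = network_stability(record.scdk_mbps, record.xzdk_mbps,
                            record.rtt_ms, record.xt_ms, record.dbl_pct)
    jrx = device_compatibility(record.ncrl_gb, record.zp_ghz, record.hxs)
    config = configuration_policy(wdx, jrx, x, q)
    # Steps three and four: sense the environment, associate, fine-tune
    bgx = video_exposure(env.gxqdz, env.gqz)
    zyx = noise_impact(env.zyz, env.xhz)
    hyd = user_liveness(*activity)
    tweaks = fine_tuning_policy(integrated_tuning(bgx, zyx, hyd))
    return config, tweaks  # step five (sharing) then runs on the configured session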
(III) Beneficial effects
The invention provides a multi-platform video conference method and system. The beneficial effects are as follows:
(1) The traditional system is difficult to monitor and adapt to the performance and network condition change of various devices in real time, and the problems of picture blocking, asynchronous audio and video and the like in the video conference are easy to occur. By introducing the first user data acquisition unit, the first analysis unit and the first configuration unit, the system can monitor the performance and the network condition of the user equipment in real time and carry out adaptive adjustment according to the dynamically generated configuration strategy, thereby improving the real-time performance of the video conference.
(2) The second user environment acquisition unit acquires an environment data set in real time, wherein the environment data set comprises regional information, light intensity, noise and the like. Through the second analysis unit and the configuration unit, the system can calculate the video exposure coefficient, the noise influence coefficient and the user activity, so that comprehensive fine adjustment is performed. This helps optimize video exposure, reduce noise interference, and enhance the user experience.
(3) The current environment in which the user uses the platform device is not effectively monitored and adjusted in the conventional technology. The system is introduced with a second user environment acquisition unit, a second analysis unit and a second configuration unit, and can acquire user environment data in real time, including factors such as light intensity and noise. Through comprehensive fine tuning, the system optimizes video exposure, reduces noise interference and improves the quality of the video conference in different environments.
(4) Conventional systems may have difficulty accommodating different devices and network environments, resulting in user experience differences. By dynamically generating the configuration strategy by the first configuration unit, the system can realize multi-platform adaptation according to the network stability and the equipment compatibility of the user equipment, and video conference experience which meets the requirements of users better is provided.
Drawings
FIG. 1 is a block diagram of a multi-platform videoconference system of the present invention;
Fig. 2 is a schematic diagram of steps of a multi-platform video conference method according to the present invention.
In the figure: 1. a first user data acquisition unit; 11. a memory capacity acquisition module; 12. a processor performance acquisition module; 13. a network data acquisition module; 2. a first analysis unit; 21. a user data set processing module; 22. a network stability calculation module; 23. a device compatibility calculation module; 3. a first configuration unit; 4. a second user environment acquisition unit; 5. a second analysis unit; 51. a video exposure analysis module; 52. a noise impact analysis module; 53. a user liveness calculation module; 54. an associated module; 6. a second configuration unit; 7. and a real-time sharing unit.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
The invention provides a multi-platform video conference system, please refer to fig. 1, which comprises a first user data acquisition unit 1, a first analysis unit 2, a first configuration unit 3, a second user environment acquisition unit 4, a second analysis unit 5 and a second configuration unit 6;
the first user data acquisition unit 1 is used for monitoring the performance and network condition of user equipment in real time and establishing a user data set;
The first analysis unit 2 is configured to perform analysis and calculation according to a user data set to obtain: a user equipment network stability coefficient Wdx and a device compatibility coefficient JRx;
The first configuration unit 3 is configured to compare the user equipment network stability coefficient Wdx with a first stability threshold X and compare the device compatibility coefficient JRx with a second compatibility threshold Q, so as to automatically generate a corresponding configuration policy;
the second user environment collection unit 4 is used for locating the user equipment area, prompting the user to start an ambient light sensor in the equipment camera, obtaining the light intensity of the current environment of the user, collecting the environment background when the user inputs voice, and collecting the environment noise data; establishing a user environment data set;
The second analysis unit 5 is configured to perform analysis and calculation according to the user environment data set to obtain: a video exposure coefficient BGx, a noise influence coefficient Zyx and a user liveness Hyd; and to correlate the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd to obtain a comprehensive fine tuning coefficient Wtxs;
The second configuration unit 6 is configured to compare the comprehensive fine tuning coefficient Wtxs with a third preset threshold R, so as to obtain a corresponding fine tuning strategy.
In this embodiment, the system monitors the performance and network condition of the user equipment in real time through the first user data acquisition unit 1 and establishes a user data set. The first analysis unit 2 analyzes and calculates this data set to obtain the user equipment network stability coefficient Wdx and the device compatibility coefficient JRx, so the system can adapt in real time to the performance and network changes of various devices, improving its stability and real-time responsiveness. The first configuration unit compares the user equipment network stability coefficient and device compatibility coefficient with preset thresholds and automatically generates a corresponding configuration strategy, so the system can intelligently adjust the video conference configuration to actual conditions and ensure a good conference experience across different devices and network environments. The second user environment acquisition unit 4 locates the user equipment region and acquires environment data, including factors such as light intensity and noise. The second analysis unit 5 generates the video exposure coefficient BGx, noise influence coefficient Zyx and user liveness Hyd from these data and associates them into the comprehensive fine tuning coefficient Wtxs, realizing intelligent monitoring and analysis of the user environment. The second configuration unit 6 generates a corresponding fine tuning strategy by comparing the comprehensive fine tuning coefficient with a preset threshold, adjusting parameters such as camera exposure time, aperture size and microphone sensitivity to optimize video exposure, reduce noise interference and improve the conference experience in different environments. Through comprehensive monitoring and intelligent adjustment of device performance, network conditions and environmental factors, the system significantly improves the user experience of multi-platform video conferences, reducing picture stuttering, audio-video desynchronization and similar problems, and thus provides a more efficient remote collaboration mode for business, education, medical treatment and other fields.
Example 2
This embodiment further explains Example 1. Referring to fig. 1, specifically, the first user data acquisition unit 1 includes a memory capacity acquisition module 11, a processor performance acquisition module 12 and a network data acquisition module 13;
The memory capacity acquisition module 11 is configured to acquire an actual memory capacity NCRL of the user equipment, and takes GB as a unit; this allows the system to more finely understand the memory condition of the user device, thereby better providing it with personalized configurations and optimizations.
The processor performance acquisition module 12 is used for acquiring a processor main frequency value ZP and a core number HXS of the user equipment; this facilitates intelligent adjustment of the system according to the computing power of the device, providing a smoother videoconferencing experience.
The network data collection module 13 is configured to collect an upload bandwidth SCDK, a download bandwidth XZDK, a round trip time RTT, a response time XT, and a packet loss rate DBL during operation of the user equipment. The system can monitor the network condition in real time, so that the network fluctuation can be better dealt with, and the stability of the video conference is ensured.
In this embodiment, the system can comprehensively monitor the memory capacity, the processor dominant frequency value, the core number and the network operation state of the user equipment through the memory capacity acquisition module, the processor performance acquisition module and the network data acquisition module. This helps the system to fully understand the hardware performance and network conditions of the user device.
Example 3
This embodiment further explains Example 1. Referring to fig. 1, specifically, the first analysis unit 2 includes a user data set processing module 21, a network stability calculation module 22 and a device compatibility calculation module 23;
The user data set processing module 21 is configured to process the user data set, including data cleaning, format standardization, de-duplication processing, timestamp processing and data normalization;
The network stability calculation module 22 is configured to extract the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL in the user data set, and generate the user equipment network stability coefficient Wdx according to the following formula:
Wdx = K1×SCDK + K2×XZDK + K3×RTT + K4×XT + K5×DBL;
wherein K1, K2, K3, K4 and K5 represent the weight values of the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL, and K1+K2+K3+K4+K5=1.0;
The device compatibility calculation module 23 is configured to extract the actual memory capacity NCRL, processor main frequency value ZP and core number HXS in the user data set, and generate the device compatibility coefficient JRx according to the following formula:
JRx = K6×NCRL + K7×ZP + K8×HXS;
where K6, K7 and K8 represent the weight values of the actual memory capacity NCRL, the processor main frequency value ZP and the core number HXS, and K6+K7+K8=1.0.
Specific data examples:
Assuming a device access, the following data is collected:
Upload bandwidth (SCDK): 2 Mbps; download bandwidth (XZDK): 5 Mbps; round trip time (RTT): 30 ms; response time (XT): 20 ms; packet loss rate (DBL): 1%; actual memory capacity (NCRL): 8 GB;
Processor main frequency value (ZP): 2.5 GHz; core number (HXS): 4;
Meanwhile, the weight values are set as follows:
K1 (upload bandwidth weight): 0.2; k2 (download bandwidth weight): 0.3; k3 (round trip time weight): 0.1;
K4 (response time weight): 0.2; k5 (packet loss ratio weight): 0.2; k6 (memory capacity weight): 0.4;
K7 (dominant frequency value weight): 0.3; k8 (core number weight): 0.3;
Then, substituting the specific values into the network stability formula:
Wdx = 0.2×2 + 0.3×5 + 0.1×30 + 0.2×20 + 0.2×1;
Wdx = 0.4 + 1.5 + 3 + 4 + 0.2;
Wdx = 9.1;
Likewise, substituting the specific values into the device compatibility formula:
JRx = 0.4×8 + 0.3×2.5 + 0.3×4;
JRx = 3.2 + 0.75 + 1.2;
JRx = 5.15.
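These figures reproduce with the illustrative functions sketched earlier in the description, whose default weights were chosen to match this example:

wdx = network_stability(scdk=2, xzdk=5, rtt=30, xt=20, dbl=1)  # 0.4+1.5+3+4+0.2 = 9.1
jrx = device_compatibility(ncrl=8, zp=2.5, hxs=4)              # 3.2+0.75+1.2 = 5.15
assert round(wdx, 2) == 9.1 and round(jrx, 2) == 5.15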
Example 4
This embodiment further explains Example 1. Referring to fig. 1, specifically, the first configuration unit 3 is configured to generate a corresponding configuration policy according to the user equipment network stability coefficient Wdx and the device compatibility coefficient JRx, through the following procedure:
Comparing the user equipment network stability coefficient Wdx with a first stability threshold X, comparing the equipment compatibility coefficient JRx with a second compatibility threshold Q when the user equipment network stability coefficient Wdx is more than or equal to the first stability threshold X, and automatically generating a first configuration strategy when the equipment compatibility coefficient JRx is more than or equal to the second compatibility threshold Q; the first configuration strategy comprises the steps of configuring the video resolution to be 4K-8KUHD, and configuring the frame rate to be 120 frames/second;
when the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X, but the equipment compatibility coefficient JRx is larger than or equal to the second compatibility threshold Q, automatically generating a second configuration strategy; the second configuration strategy comprises the steps of configuring video resolution to be 720p and configuring video frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is more than or equal to the first stability threshold X, but the equipment compatibility coefficient JRx is less than the second compatibility threshold Q, automatically generating a third configuration strategy; the third configuration strategy comprises the steps of configuring the video resolution to be 1080p, and configuring the frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X and the equipment compatibility coefficient JRx of the user equipment is smaller than the second compatibility threshold Q, automatically generating a fourth configuration strategy; the fourth configuration strategy includes configuring the video resolution to be 480p and the frame rate to be 30 frames/second.
Data example: assume the thresholds are set as follows:
The first stability threshold X is 8; the second compatible threshold Q is 5.
Wdx =9.1 as calculated in example 3; JRx = 5.15;
Now make the decision:
Wdx ≥ X: 9.1 ≥ 8 (satisfied);
JRx ≥ Q: 5.15 ≥ 5 (satisfied);
According to the configuration policy procedure, this generates the first configuration strategy, i.e. a video resolution of 4K-8K UHD at a frame rate of 120 frames/second. Thus, under the set thresholds, this data instance falls under the first configuration strategy.
In this embodiment, the system intelligently generates a personalized configuration policy according to the network stability and compatibility of the user equipment, so as to ensure that the video conference can have optimal performance under various situations. Through real-time monitoring and judging, the system can dynamically adjust configuration in the video conference process so as to adapt to actual changes of equipment performance and network environment and provide more stable and smooth video experience. The system provides configuration of different resolutions and frame rates for different network conditions and device compatibility, so that definition and fluency of video pictures are improved, and participation of users in a conference is enhanced. Through setting and comparison of threshold values, the system realizes a certain degree of automatic decision, reduces the configuration burden of users and improves the usability of the system.
Example 5
This embodiment further explains Example 1. Referring to fig. 1, specifically, the second user environment acquisition unit 4 is configured to collect an environment data set in real time; the specific steps include:
S21, acquiring regional information of the user equipment by using equipment positioning service, and realizing acquisition by calling a positioning interface of the equipment;
S22, a prompt is sent to the user requesting permission for the application to access the device's ambient light sensor, via a notification or a pop-up permission dialog; once the user allows access, the real-time light intensity value gxqdz of the current environment and the user equipment camera aperture value gqz are collected by calling the device's ambient light sensor interface;
S23, while the user inputs voice, background information of the current environment is collected at the same time: a short audio clip is recorded through the device microphone to capture the sounds and acoustic characteristics of the environment, including the ambient noise present during speech input; analysis then yields a real-time noise value Zyz and a voice signal value Xhz;
S24, the collected user region, real-time light intensity value gxqdz, user equipment camera aperture value gqz, real-time noise value Zyz and voice signal value Xhz are sorted and stored, and a user environment data set is established.
Specifically, the second analysis unit 5 includes a video exposure analysis module 51, a noise impact analysis module 52, and a user liveness calculation module 53;
The video exposure analysis module 51 is configured to extract the real-time light intensity value gxqdz and the user equipment camera aperture value gqz in the user environment data set and, after dimensionless processing, generate the video exposure coefficient BGx by the following formula:
BGx = W1×gxqdz + W2×gqz;
wherein W1 and W2 represent the proportionality coefficients of the real-time light intensity value gxqdz and the user equipment camera aperture value gqz, where W1=0.6 and W2=0.4;
The noise influence analysis module 52 is configured to extract the real-time noise value Zyz and the voice signal value Xhz in the user environment data set and, after dimensionless processing, generate the noise influence coefficient Zyx by the following formula:
Zyx = W3×Zyz + W4×Xhz;
where W3 and W4 represent the proportionality coefficients of the real-time noise value Zyz and the voice signal value Xhz, where W3=0.6 and W4=0.4;
The user liveness calculation module 53 is configured to collect the user operation frequency cPL, conference use duration SJ and voice input frequency yPL during the platform video conference and, after dimensionless processing, generate the user liveness Hyd by the following formula:
Hyd = W5×cPL + W6×SJ + W7×yPL;
where W5, W6 and W7 represent the weight values of the user operation frequency cPL, the conference use duration SJ and the voice input frequency yPL, and W5+W6+W7=1.0.
Specifically, the second analysis unit 5 further includes an association module 54, where the association module 54 is configured to associate the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd, generating the comprehensive fine tuning coefficient Wtxs by the following formula:
Wtxs = F1×BGx + F2×Zyx + F3×Hyd + C;
where F1, F2 and F3 represent the weight values of the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd respectively, F1+F2+F3=1.0, and C is a correction constant.
Specific data example: suppose the user region is Shanghai; the real-time light intensity value gxqdz is 80 and the user equipment camera aperture value gqz is 50; calculation gives BGx = 0.6×80 + 0.4×50 = 68;
The real-time noise value Zyz is 20 and the voice signal value Xhz is 30, giving Zyx = 0.6×20 + 0.4×30 = 24;
The user operation frequency cPL is 15 times, the conference use duration SJ is 120 minutes, and the voice input frequency yPL is 10 times, giving Hyd = 0.4×15 + 0.3×120 + 0.3×10 = 45;
Taking C=5 and the weight values F1=0.5, F2=0.3 and F3=0.2, substitute the specific values of BGx, Zyx and Hyd into the formula:
Wtxs = 0.5×68 + 0.3×24 + 0.2×45 + 5;
Wtxs = 34 + 7.2 + 9 + 5;
Wtxs = 55.2.
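As a quick check, the same numbers fall out of the illustrative functions sketched in the description, whose defaults match this embodiment:

bgx = video_exposure(gxqdz=80, gqz=50)       # 48 + 20 = 68.0
zyx = noise_impact(zyz=20, xhz=30)           # 12 + 12 = 24.0
hyd = user_liveness(cpl=15, sj=120, ypl=10)  # 6 + 36 + 3 = 45.0
wtxs = integrated_tuning(bgx, zyx, hyd)      # 34 + 7.2 + 9 + 5 = 55.2
assert round(wtxs, 1) == 55.2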
Example 6
This embodiment further explains Example 1. Referring to fig. 1, specifically, the second configuration unit 6 is configured to compare the comprehensive fine tuning coefficient Wtxs with a third preset threshold R, where the third preset threshold R includes a threshold R1, a threshold R2 and a threshold R3, to obtain a corresponding fine tuning strategy, including:
When the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R1, a first fine tuning strategy is obtained, which includes increasing the camera exposure time by more than 30% of the current value and enlarging the camera aperture to f/2.8 to f/5.6 so as to balance depth of field and light intake; when the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R1, the current light is qualified and no adjustment is needed;
When the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R2, a second fine tuning strategy is obtained, which includes adjusting the microphone sensitivity to reduce the capture of ambient noise; when the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R2, the current noise is qualified and no adjustment is needed;
When the comprehensive fine tuning coefficient Wtxs is greater than or equal to the threshold R3, a third fine tuning strategy is obtained, which includes allocating more than 30% of computing resources to video encoding and decoding to improve response speed; when the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R3, the current activity is normal and no adjustment is needed.
Specific data example: Wtxs = 55.2 was obtained in Example 5.
The threshold values set are assumed as follows:
The threshold R1 is 50;
The threshold R2 is 45;
The threshold R3 is 55.
Now make the decision:
when Wtxs < R1, the first fine tuning strategy is obtained;
when Wtxs < R2, the second fine tuning strategy is obtained;
when Wtxs ≥ R3, the third fine tuning strategy is obtained.
Now substituting the data:
Wtxs < R1: 55.2 < 50 (not satisfied; the first fine tuning strategy is not triggered);
Wtxs < R2: 55.2 < 45 (not satisfied; the second fine tuning strategy is not triggered);
Wtxs ≥ R3: 55.2 ≥ 55 (satisfied; the third fine tuning strategy is triggered).
According to this judgment, since Wtxs ≥ R3, the data example obtains the third fine tuning strategy, namely allocating more than 30% of computing resources to video encoding and decoding to improve response speed. Thus, under the set thresholds, this data instance falls under the third fine tuning strategy.
In this embodiment, by comparing the comprehensive fine tuning coefficient with the preset thresholds, the system can generate a fine tuning strategy covering camera exposure, microphone sensitivity, computing resource allocation and the like, so as to intelligently adjust the details of the video conference and improve overall system performance.
Example 7
This embodiment further explains Example 1. Referring to fig. 1, specifically, the system further includes a real-time sharing unit 7 for providing real-time text chat, file sharing and screen sharing functions to multiple users during a video conference, so as to facilitate collaboration and information exchange during the conference.
In this embodiment, providing a file sharing function enables conference participants to share and view files, which is useful for presenting information such as materials, charts, reports, etc. in a conference. The participants can view and discuss the file contents in real time, so that the conference efficiency is improved. The screen sharing function allows users to show content on their computer screen in a meeting. This is very helpful for demonstrating software, exposing designs, interpreting data, etc. The participants can watch and interact in time, and more specific and visual information transfer is realized.
Example 8
Referring to fig. 1 and 2, a multi-platform video conference method includes the following steps:
Step one, starting the video conference system, initializing each module, monitoring the performance and network condition of the user equipment in real time through the first user data acquisition unit 1, and establishing a user data set; the first analysis unit 2 analyzes and calculates the user data set to obtain a user equipment network stability coefficient Wdx and a device compatibility coefficient JRx; the first configuration unit 3 compares the user equipment network stability coefficient Wdx with the first stability threshold X and compares the device compatibility coefficient JRx with the second compatibility threshold Q, so as to automatically generate a corresponding configuration strategy;
Step two, performing a first adjustment for the user according to the corresponding configuration strategy;
Step three, the second user environment acquisition unit 4 acquires an environment data set in real time, including the user equipment region, light intensity, environment background and noise data; the second analysis unit 5 calculates the video exposure coefficient BGx, noise influence coefficient Zyx and user liveness Hyd from the user environment data set; the association module 54 associates BGx, Zyx and Hyd to obtain the comprehensive fine tuning coefficient Wtxs; the second configuration unit 6 compares the comprehensive fine tuning coefficient Wtxs with a third preset threshold R to obtain a corresponding fine tuning strategy;
Step four, performing a secondary adjustment for the user according to the fine tuning strategy;
Step five, the real-time sharing unit 7 provides real-time text chat, file sharing and screen sharing functions.
In the embodiment, the method dynamically adjusts the configuration and fine adjustment parameters by monitoring the user equipment and the environment conditions in real time, so that the adaptability and the user experience of the video conference system are improved. The real-time sharing function increases the convenience of collaboration and information sharing, so that the multi-platform video conference is more flexible and efficient. Overall, this approach helps to improve the performance and user satisfaction of the videoconferencing system.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A multi-platform videoconferencing system, characterized by: the system comprises a first user data acquisition unit (1), a first analysis unit (2), a first configuration unit (3), a second user environment acquisition unit (4), a second analysis unit (5) and a second configuration unit (6);
the first user data acquisition unit (1) is used for monitoring the performance and network condition of user equipment in real time and establishing a user data set;
the first analysis unit (2) is configured to perform analysis and calculation according to a user data set to obtain: a user equipment network stability coefficient Wdx and a device compatibility coefficient JRx;
The first configuration unit (3) is configured to compare the user equipment network stability coefficient Wdx with a first stability threshold X and compare the device compatibility coefficient JRx with a second compatibility threshold Q, so as to automatically generate a corresponding configuration policy;
The second user environment acquisition unit (4) is used for positioning a user equipment area, prompting a user to start an ambient light sensor in a camera of the equipment, acquiring the light intensity of the current environment of the user, acquiring the environment background when the voice of the user is input, and acquiring environment noise data; establishing a user environment data set;
The second analysis unit (5) is used for carrying out analysis and calculation according to the user environment data set to obtain: a video exposure coefficient BGx, a noise influence coefficient Zyx and a user liveness Hyd; and for correlating the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd to obtain a comprehensive fine tuning coefficient Wtxs;
the second configuration unit (6) is configured to compare the comprehensive fine tuning coefficient Wtxs with a third preset threshold R to obtain a corresponding fine tuning strategy.
2. A multi-platform videoconferencing system according to claim 1, wherein: the first user data acquisition unit (1) comprises a memory capacity acquisition module (11), a processor performance acquisition module (12) and a network data acquisition module (13);
The memory capacity acquisition module (11) is used for acquiring the actual memory capacity NCRL of the user equipment, and takes GB as a unit;
The processor performance acquisition module (12) is used for acquiring a processor main frequency value ZP and a core number HXS of the user equipment;
the network data acquisition module (13) is used for acquiring an uploading bandwidth SCDK, a downloading bandwidth XZDK, a round trip time RTT, a response time XT and a packet loss rate DBL in the running process of the user equipment.
3. A multi-platform videoconferencing system, according to claim 2, wherein: the first analysis unit (2) comprises a user data set processing module (21), a network stability calculation module (22) and a device compatibility calculation module (23);
The user data set processing module (21) is used for processing the user data set, including data cleaning, format standardization, de-duplication processing, timestamp processing and data normalization;
The network stability calculation module (22) is configured to extract the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL in the user data set, and generate the user equipment network stability coefficient Wdx according to the following formula:
Wdx = K1×SCDK + K2×XZDK + K3×RTT + K4×XT + K5×DBL;
wherein K1, K2, K3, K4 and K5 represent the weight values of the upload bandwidth SCDK, download bandwidth XZDK, round trip time RTT, response time XT and packet loss rate DBL, and K1+K2+K3+K4+K5=1.0;
The device compatibility calculation module (23) is configured to extract the actual memory capacity NCRL, processor main frequency value ZP and core number HXS in the user data set, and generate the device compatibility coefficient JRx according to the following formula:
JRx = K6×NCRL + K7×ZP + K8×HXS;
where K6, K7 and K8 represent the weight values of the actual memory capacity NCRL, the processor main frequency value ZP and the core number HXS, and K6+K7+K8=1.0.
4. A multi-platform videoconferencing system according to claim 1, wherein: the first configuration unit (3) is configured to generate a corresponding configuration policy according to the user equipment network stability coefficient Wdx and the device compatibility coefficient JRx, and includes the following procedures:
Comparing the user equipment network stability coefficient Wdx with a first stability threshold X, comparing the equipment compatibility coefficient JRx with a second compatibility threshold Q when the user equipment network stability coefficient Wdx is more than or equal to the first stability threshold X, and automatically generating a first configuration strategy when the equipment compatibility coefficient JRx is more than or equal to the second compatibility threshold Q; the first configuration strategy comprises configuring video resolution to be 4K-8K (UHD) and frame rate to be 120 frames/second;
when the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X, but the equipment compatibility coefficient JRx is larger than or equal to the second compatibility threshold Q, automatically generating a second configuration strategy; the second configuration strategy comprises the steps of configuring video resolution to be 720p and configuring video frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is more than or equal to the first stability threshold X, but the equipment compatibility coefficient JRx is less than the second compatibility threshold Q, automatically generating a third configuration strategy; the third configuration strategy comprises the steps of configuring the video resolution to be 1080p, and configuring the frame rate to be 60 frames/second;
When the network stability coefficient Wdx of the user equipment is smaller than the first stability threshold X and the equipment compatibility coefficient JRx of the user equipment is smaller than the second compatibility threshold Q, automatically generating a fourth configuration strategy; the fourth configuration strategy includes configuring the video resolution to be 480p and the frame rate to be 30 frames/second.
5. A multi-platform videoconferencing system according to claim 1, wherein: the second user environment acquisition unit (4) is used for acquiring an environment data set in real time, and the specific steps comprise the following steps:
S21, acquiring regional information of the user equipment by using equipment positioning service, and realizing acquisition by calling a positioning interface of the equipment;
S22, a prompt is sent to the user requesting permission for the application to access the device's ambient light sensor, via a notification or a pop-up permission dialog; once the user allows access, the real-time light intensity value gxqdz of the current environment and the user equipment camera aperture value gqz are collected by calling the device's ambient light sensor interface;
S23, while the user inputs voice, background information of the current environment is collected at the same time: a short audio clip is recorded through the device microphone to capture the sounds and acoustic characteristics of the environment, including the ambient noise present during speech input; analysis then yields a real-time noise value Zyz and a voice signal value Xhz;
S24, the collected user region, real-time light intensity value gxqdz, user equipment camera aperture value gqz, real-time noise value Zyz and voice signal value Xhz are sorted and stored, and a user environment data set is established.
6. A multi-platform videoconferencing system according to claim 5, wherein: the second analysis unit (5) comprises a video exposure analysis module (51), a noise influence analysis module (52) and a user activity calculation module (53);
the video exposure analysis module (51) is used for extracting a real-time light intensity value gxqdz in a user environment data set and a user equipment camera aperture value gqz, and generating a video exposure coefficient BGx through the following formula after dimensionless processing;
,
wherein W1 and W2 are represented as a proportionality coefficient of the real-time light intensity value gxqdz and the user equipment camera aperture value gqz, where w1=0.6 and w2=0.4;
the noise influence analysis module (52) is used for extracting the real-time noise value Zyz and the voice signal value Xhz from the user environment data set and, after dimensionless processing, generating the noise influence coefficient Zyx through the following formula:
Zyx = W3 × Zyz + W4 × Xhz
wherein W3 and W4 are the scaling coefficients of the real-time noise value Zyz and the voice signal value Xhz respectively, where W3 = 0.6 and W4 = 0.4;
the user liveness calculation module (53) is used for collecting the user operation frequency cPL, the conference use duration SJ and the voice input frequency yPL during the platform video conference and, after dimensionless processing, generating the user liveness Hyd through the following formula:
Hyd = W5 × cPL + W6 × SJ + W7 × yPL
wherein W5, W6 and W7 are the weight values of the user operation frequency cPL, the conference use duration SJ and the voice input frequency yPL respectively, and W5 + W6 + W7 = 1.0.
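For illustration only: the three coefficient formulas of claim 6 as straightforward weighted sums. The min-max ranges standing in for the "dimensionless processing" and the 0.4/0.3/0.3 split of W5-W7 are assumptions; the claims fix only W1-W4 and the constraint W5 + W6 + W7 = 1.0.

```python
def normalize(value: float, lo: float, hi: float) -> float:
    # Assumed "dimensionless processing": min-max scaling to [0, 1].
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def video_exposure_coefficient(gxqdz: float, gqz: float,
                               w1: float = 0.6, w2: float = 0.4) -> float:
    # BGx = W1*gxqdz + W2*gqz; the lux and f-number ranges are assumptions.
    return w1 * normalize(gxqdz, 0.0, 10000.0) + w2 * normalize(gqz, 1.4, 22.0)

def noise_influence_coefficient(zyz: float, xhz: float,
                                w3: float = 0.6, w4: float = 0.4) -> float:
    # Zyx = W3*Zyz + W4*Xhz; the dB ranges are assumptions.
    return w3 * normalize(zyz, 0.0, 90.0) + w4 * normalize(xhz, 0.0, 90.0)

def user_liveness(cpl: float, sj: float, ypl: float,
                  w5: float = 0.4, w6: float = 0.3, w7: float = 0.3) -> float:
    # Hyd = W5*cPL + W6*SJ + W7*yPL with W5 + W6 + W7 = 1.0; the weight split
    # and the per-input ranges are assumptions.
    return (w5 * normalize(cpl, 0.0, 60.0)
            + w6 * normalize(sj, 0.0, 180.0)
            + w7 * normalize(ypl, 0.0, 30.0))
```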
7. A multi-platform videoconferencing system according to claim 6, wherein: the second analysis unit (5) further comprises an association module (54), and the association module (54) is used for associating the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd, generating the comprehensive fine tuning coefficient Wtxs through the following formula:
Wtxs = α × BGx + β × Zyx + γ × Hyd + C
wherein α, β and γ are the weight values of the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd respectively, α + β + γ = 1, and C is a correction constant.
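For illustration only: a one-function sketch of the association formula. The weight split and the correction constant C = 0.05 are assumptions, as the claims fix only the constraint that the weights sum to 1.

```python
def comprehensive_fine_tuning_coefficient(bgx: float, zyx: float, hyd: float,
                                          alpha: float = 0.4, beta: float = 0.3,
                                          gamma: float = 0.3, c: float = 0.05) -> float:
    # Wtxs = alpha*BGx + beta*Zyx + gamma*Hyd + C, with alpha + beta + gamma = 1.
    assert abs(alpha + beta + gamma - 1.0) < 1e-9, "weights must sum to 1"
    return alpha * bgx + beta * zyx + gamma * hyd + c
```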
8. A multi-platform videoconferencing system according to claim 7, wherein: the second configuration unit (6) is used for comparing the comprehensive fine tuning coefficient Wtxs with a third preset threshold R, where the third preset threshold R comprises a threshold R1, a threshold R2 and a threshold R3, and obtaining the corresponding fine tuning strategy, comprising:
when the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R1, obtaining a first fine tuning strategy, which comprises increasing the camera exposure time by more than 30% of the current value and widening the camera aperture to between f/2.8 and f/5.6 to balance depth of field and light intake; when the comprehensive fine tuning coefficient Wtxs is larger than or equal to the threshold R1, the current lighting is acceptable and no adjustment is needed;
when the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R2, obtaining a second fine tuning strategy, which comprises adjusting the microphone sensitivity to reduce the capture of ambient noise; when the comprehensive fine tuning coefficient Wtxs is larger than or equal to the threshold R2, the current noise level is acceptable and no adjustment is needed;
when the comprehensive fine tuning coefficient Wtxs is larger than or equal to the threshold R3, obtaining a third fine tuning strategy, which comprises allocating more than 30% of computing resources to video encoding and decoding to improve response speed; when the comprehensive fine tuning coefficient Wtxs is smaller than the threshold R3, the current liveness is normal and no adjustment is needed.
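For illustration only: claim 8 reads as three independent checks of the same coefficient, sketched below as a single dispatch function. The values of R1, R2 and R3 are assumptions; the claims do not fix them.

```python
def fine_tuning_actions(wtxs: float, r1: float, r2: float, r3: float) -> list[str]:
    # Three independent threshold checks of Wtxs against R1, R2 and R3.
    actions = []
    if wtxs < r1:
        # First fine tuning strategy: lighting below par.
        actions.append("raise exposure time by >30%; open aperture to f/2.8-f/5.6")
    if wtxs < r2:
        # Second fine tuning strategy: too much ambient noise captured.
        actions.append("lower microphone sensitivity")
    if wtxs >= r3:
        # Third fine tuning strategy: high liveness, prioritise responsiveness.
        actions.append("allocate >30% of compute to video encode/decode")
    return actions
```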
9. A multi-platform videoconferencing system according to claim 8, wherein: the system further comprises a real-time sharing unit (7) for providing real-time text chat, file sharing and screen sharing functions to multiple users during the video conference, so as to facilitate collaboration and information exchange during the conference.
10. A multi-platform video conference method applied to the multi-platform videoconferencing system of any one of claims 1 to 9, characterized by comprising the following steps:
Step one, starting the video conference system and initializing each module; monitoring the performance and network condition of the user equipment in real time through the first user data acquisition unit (1) and establishing a user data set; the first analysis unit (2) analyzes the user data set to obtain the user equipment network stability coefficient Wdx and the equipment compatibility coefficient JRx; the first configuration unit (3) compares the network stability coefficient Wdx with the first stability threshold X and the equipment compatibility coefficient JRx with the second compatibility threshold Q, and automatically generates the corresponding configuration strategy;
Step two, performing a first adjustment for the user according to the corresponding configuration strategy;
Step three, the second user environment acquisition unit (4) acquires the environment data set in real time, including the user equipment region, light intensity, environment background and noise data; the second analysis unit (5) calculates the video exposure coefficient BGx, the noise influence coefficient Zyx and the user liveness Hyd from the environment data set; the association module (54) associates BGx, Zyx and Hyd to obtain the comprehensive fine tuning coefficient Wtxs; the second configuration unit (6) compares the comprehensive fine tuning coefficient Wtxs with the third preset threshold R to obtain the corresponding fine tuning strategy;
Step four, performing a second adjustment for the user according to the fine tuning strategy;
Step five, the real-time sharing unit (7) provides real-time text chat, file sharing and screen sharing functions.
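For illustration only: an end-to-end sketch of steps one to five that reuses the helper functions from the earlier sketches. `analyze_user_data`, `apply_strategy` and `start_sharing_services` are hypothetical stubs for platform-specific behavior, and all numeric inputs are illustrative.

```python
def analyze_user_data(device) -> tuple[float, float]:
    # Stub for acquisition/analysis units (1)-(2): returns (Wdx, JRx).
    return 0.8, 0.6

def apply_strategy(strategy) -> None:
    # Stub: pushing a strategy or action to the client is platform-specific.
    print("applying:", strategy)

def start_sharing_services() -> None:
    # Stub for the real-time sharing unit (7).
    print("text chat, file sharing and screen sharing enabled")

def run_conference_pipeline(device) -> None:
    wdx, jrx = analyze_user_data(device)                     # step one
    apply_strategy(select_strategy(wdx, jrx, x=0.7, q=0.7))  # steps one-two
    env = collect_environment(device)                        # step three
    bgx = video_exposure_coefficient(env.gxqdz, env.gqz)
    zyx = noise_influence_coefficient(env.zyz, env.xhz)
    hyd = user_liveness(cpl=12, sj=45, ypl=6)                # illustrative stats
    wtxs = comprehensive_fine_tuning_coefficient(bgx, zyx, hyd)
    for action in fine_tuning_actions(wtxs, r1=0.4, r2=0.5, r3=0.8):
        apply_strategy(action)                               # step four
    start_sharing_services()                                 # step five
```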
CN202410033903.3A 2024-01-10 2024-01-10 Multi-platform video conference method and system Active CN117560464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410033903.3A CN117560464B (en) 2024-01-10 2024-01-10 Multi-platform video conference method and system

Publications (2)

Publication Number Publication Date
CN117560464A CN117560464A (en) 2024-02-13
CN117560464B (en) 2024-05-03

Family

ID=89814981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410033903.3A Active CN117560464B (en) 2024-01-10 2024-01-10 Multi-platform video conference method and system

Country Status (1)

Country Link
CN (1) CN117560464B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405234A (en) * 2020-04-17 2020-07-10 杭州大轶科技有限公司 Video conference information system and method with integration of cloud computing and edge computing
WO2022046168A1 (en) * 2020-08-31 2022-03-03 Peters Michael H Architecture for scalable video conference management
CN115086779A (en) * 2021-12-17 2022-09-20 浙江大华技术股份有限公司 Video transmission system
CN115665362A (en) * 2022-09-26 2023-01-31 视联动力信息技术股份有限公司 Video conference processing method and device, electronic equipment and storage medium
CN116320271A (en) * 2023-05-15 2023-06-23 深圳市云屋科技有限公司 High-capacity video conference system based on cloud computing

Also Published As

Publication number Publication date
CN117560464A (en) 2024-02-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant