CN115914544A - Intelligent detection method and system for video conference communication quality - Google Patents

Intelligent detection method and system for video conference communication quality

Info

Publication number: CN115914544A
Application number: CN202211428544.9A
Authority: CN (China)
Prior art keywords: communication, multiple groups, network speed, network, recording data
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 白瑞双, 杨猛, 孙国政, 付维强
Current Assignee: Yixun Technology Co ltd
Original Assignee: Yixun Technology Co ltd
Application filed by: Yixun Technology Co ltd
Priority to: CN202211428544.9A
Publication of: CN115914544A

Landscapes

  • Telephonic Communication Services (AREA)

Abstract

The invention provides an intelligent detection method and system for video conference communication quality, relating to the technical field of communication quality detection. Multiple groups of network environment parameters are acquired based on the positioning coordinate information of the participating objects; the communication equipment information is traversed to obtain equipment communication noise parameters; quality evaluation indexes are set to train a communication quality detection model; the equipment communication noise parameters, network speed parameters and network line numbers are fed into the model to obtain audio quality detection results and display quality detection results; and communication management is carried out on that basis. The method solves the technical problems that existing detection methods are not intelligent, the process is cumbersome and prone to deviation, and the accuracy of the detection results is insufficient, which in turn affects subsequent communication adjustment. By carrying out multi-dimensional network communication analysis, a model is constructed to detect the communication of multiple participating objects, and frame rate adjustment and optimization are carried out based on the detection results, so as to improve the communication quality of the video conference.

Description

Intelligent detection method and system for communication quality of video conference
Technical Field
The invention relates to the technical field of communication quality detection, in particular to an intelligent detection method and system for video conference communication quality.
Background
With the development and popularization of network communication technology, online conferences have become the mainstream conference mode. Online conferences break through the limitations of external factors such as location and enable barrier-free communication. However, to guarantee the quality of an online conference, the influence of objective factors during the conference needs to be avoided as much as possible.
In the prior art, when the quality of a communication video is detected, the detection method is not intelligent enough, the process is cumbersome, deviation easily occurs, and the accuracy of the detection results is insufficient, which in turn affects subsequent communication adjustment.
Disclosure of Invention
The present application provides an intelligent detection method and system for video conference communication quality, which are used to solve the technical problems in the prior art that, when the quality of a communication video is detected, the detection method is not intelligent enough, the process is cumbersome, deviation easily occurs, and the accuracy of the detection results is insufficient, which in turn affects subsequent communication adjustment.
In view of the foregoing problems, the present application provides an intelligent detection method and system for video conference communication quality.
In a first aspect, the present application provides a method for intelligently detecting communication quality of a video conference, where the method includes:
acquiring a plurality of pieces of participating object basic information, wherein the plurality of pieces of participating object basic information comprise positioning coordinate information and communication equipment information;
traversing the positioning coordinate information to obtain a plurality of groups of network environment parameters, wherein the plurality of groups of network environment parameters comprise network line numbers and network speed parameters, and the network line numbers correspond to the network speed parameters one to one;
traversing the communication equipment information to obtain an equipment communication noise parameter;
setting an audio quality evaluation index and a display quality evaluation index;
training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result;
and carrying out communication management according to the audio quality detection result and the display quality detection result.
In a second aspect, the present application provides an intelligent detection system for communication quality of a video conference, the system includes:
the system comprises an information acquisition module, a communication module and a communication module, wherein the information acquisition module is used for acquiring basic information of a plurality of participating objects, and the basic information of the plurality of participating objects comprises positioning coordinate information and communication equipment information;
an environment parameter obtaining module, configured to traverse the positioning coordinate information and obtain multiple sets of network environment parameters, where the multiple sets of network environment parameters include a network line number and a network speed parameter, and the network line number corresponds to the network speed parameter one to one;
the noise parameter acquisition module is used for traversing the communication equipment information to acquire an equipment communication noise parameter;
the index setting module is used for setting audio quality evaluation indexes and display quality evaluation indexes;
the model training module is used for training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
the quality detection module is used for inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result;
and the communication management module is used for carrying out communication management according to the audio quality detection result and the display quality detection result.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
according to the intelligent detection method for the communication quality of the video conference, the basic information of a plurality of conference objects is obtained, wherein the basic information of the plurality of conference objects comprises positioning coordinate information and communication equipment information; traversing the positioning coordinate information to obtain a plurality of groups of network environment parameters, wherein the plurality of groups of network environment parameters comprise network line numbers and network speed parameters, and the network line numbers correspond to the network speed parameters one to one; traversing the communication equipment information to obtain an equipment communication noise parameter; setting an audio quality evaluation index and a display quality evaluation index; training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index; inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result; according to the audio quality detection result and the display quality detection result, communication management is carried out, the technical problems that in the prior art, when the quality of communication videos is detected, the detection method is not intelligent enough, the process is complex, deviation is prone to occurring, the accuracy of the detection result is not enough, and further influence is caused on subsequent communication adjustment are solved.
Drawings
Fig. 1 is a schematic flow chart of an intelligent detection method for communication quality of a video conference provided by the present application;
fig. 2 is a schematic diagram illustrating a flow of acquiring multiple sets of network environment parameters in an intelligent detection method for communication quality of a video conference according to the present application;
fig. 3 is a schematic diagram illustrating a communication management flow in an intelligent detection method for communication quality of a video conference according to the present application;
fig. 4 is a schematic structural diagram of an intelligent detection system for video conference communication quality according to the present application.
Description of reference numerals: the system comprises an information acquisition module 11, an environmental parameter acquisition module 12, a noise parameter acquisition module 13, an index setting module 14, a model training module 15, a quality detection module 16 and a communication management module 17.
Detailed Description
The application provides an intelligent detection method and system for video conference communication quality, which are used to solve the technical problems in the prior art that, when the quality of a communication video is detected, the detection method is not intelligent enough, the process is cumbersome, deviation easily occurs, the accuracy of the detection results is insufficient, and subsequent communication adjustment is affected as a result.
Example one
As shown in fig. 1, the present application provides a method for intelligently detecting communication quality of a video conference, where the method includes:
step S100: acquiring basic information of a plurality of participant objects, wherein the basic information of the plurality of participant objects comprises positioning coordinate information and communication equipment information;
specifically, with the development and popularization of network communication technology, online conferences become the current mainstream conference mode, the limitation of external environments such as positions and the like can be broken through the online conferences, barrier-free communication is achieved, however, in order to guarantee the quality of the online conferences, the influence of objective factors on the conference process is required to be avoided as much as possible.
Step S200: traversing the positioning coordinate information to obtain a plurality of groups of network environment parameters, wherein the plurality of groups of network environment parameters comprise network line numbers and network speed parameters, and the network line numbers correspond to the network speed parameters one to one;
step S300: traversing the communication equipment information to obtain an equipment communication noise parameter;
specifically, the method includes the steps of obtaining positioning coordinate information by positioning a plurality of meeting objects, randomly extracting information based on the positioning coordinate information, determining the positioning coordinate information of any meeting object, wherein the same positioning coordinate information is associated with a plurality of corresponding network lines, determining corresponding network line numbers to identify and distinguish the network lines, extracting a single network line number, determining a plurality of environment states including thunderstorm weather, sunny weather and cloudy and rainy weather, respectively performing network speed data acquisition on the plurality of environment states based on a preset time interval, performing data integration processing based on a time sequence, determining a plurality of groups of data acquisition results, performing data concentration value analysis on the analysis, weakening a data fluctuation state, determining network speed data with environment representativeness and interval representativeness, adding the network environment parameters, corresponding to three network environment parameters by the same network line number, and respectively performing parameter extraction analysis on the plurality of associated network lines on a plurality of positioning coordinates contained in the positioning coordinate information to generate a plurality of groups of network environment parameters.
Meanwhile, the performance of the communication equipment used by the participants is also an important influence factor. Equipment used during a conference produces a certain amount of mechanical noise alongside the conference sound source, and this noise interferes with the participants' ability to extract the key points of the conference. Different equipment types have different mechanical noise levels; for example, equipment heat dissipation and electromagnetic vibration of signals can both generate noise. The communication equipment information corresponds to the plurality of participating objects, and the noise evaluation parameter of each piece of communication equipment is determined as the equipment communication noise parameter. Noise acceptability can also be evaluated by analyzing the noise of the equipment relative to the conference sound source. The multiple groups of network environment parameters and the equipment communication noise parameters are used as communication quality indexes for evaluation, laying the foundation for the subsequent communication quality detection.
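The following sketch (in Python) shows one possible way to organize the information gathered in steps S100 to S300: per-participant basic information, per-line network environment parameters keyed by the three environment states, and a device communication noise parameter expressed relative to the conference sound source. The class names, field names, and the dB-difference form of the relative noise parameter are illustrative assumptions rather than structures defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class NetworkEnvironmentParameter:
    """One network line of a participating object: line number plus per-weather speed evaluations."""
    line_number: str
    # Centralized network speed evaluation value (Mbps) per environment state,
    # e.g. {"thunderstorm": 42.1, "clear": 96.3, "cloudy_rainy": 71.8}.
    speed_by_weather: Dict[str, float] = field(default_factory=dict)


@dataclass
class ParticipantProfile:
    """Basic information of one participating object plus the derived quality inputs."""
    participant_id: str
    positioning_coordinates: Tuple[float, float]   # (latitude, longitude)
    device_model: str
    device_noise_level_db: float                   # measured mechanical noise of the device
    network_lines: List[NetworkEnvironmentParameter] = field(default_factory=list)


def relative_noise_parameter(device_noise_db: float, source_level_db: float) -> float:
    """Gap between the conference sound source and the device noise, in dB.

    A larger value means the mechanical noise is easier to tolerate; this is one
    plausible reading of the relative noise parameter mentioned above.
    """
    return source_level_db - device_noise_db
```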
Further, as shown in fig. 2, the positioning coordinate information is traversed to obtain multiple sets of network environment parameters, where the multiple sets of network environment parameters include network line numbers and network speed parameters in one-to-one correspondence. Step S200 of the present application further includes:
step S210: matching a plurality of groups of network speed time sequence data in a first time interval, a plurality of groups of network speed time sequence data in a second time interval and a plurality of groups of network speed time sequence data in a third time interval according to the network line number;
step S220: traversing the network speed time sequence data in the multiple groups of first time intervals, the network speed time sequence data in the multiple groups of second time intervals and the network speed time sequence data in the multiple groups of third time intervals to carry out centralized value distribution, and generating a first time zone network speed centralized evaluation value, a second time zone network speed centralized evaluation value and a third time zone network speed centralized evaluation value;
step S230: and adding the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value into the network speed parameter.
Specifically, the communication network lines are extracted based on the positioning coordinate information, and the network line number corresponding to each piece of positioning coordinate information is determined; the network line number is line identification information used to identify and distinguish network lines. A preset time interval, that is, the time interval for data acquisition, is set, and network speed data is acquired under different weather conditions based on that interval. The network speed time sequence data in the first time interval is the network speed data collected within the preset time interval under thunderstorm weather, corresponding to and synchronized with the weather changes; the network speed time sequence data in the second time interval is the network speed data within the preset time interval under clear weather; and the network speed time sequence data in the third time interval is the network speed data within the preset time interval under cloudy and rainy weather. Collecting all three ensures the completeness of the data acquisition result. Within a time interval the network speed fluctuates somewhat over time, and a speed within the fluctuation range is a normal state, but large differences in network speed cause information to be received out of sync and affect the normal running of the conference.
Data frequency statistics are further performed on the obtained network speed time sequence data in the multiple groups of first time intervals to obtain a network speed centralized evaluation formula:

$$V_i=\frac{\sum_{k=1}^{n_i} t_k v_k}{\sum_{k=1}^{n_i} t_k},\qquad i=1,2,\ldots,M$$

where $t_k$ is the duration of the k-th network speed, $v_k$ is the k-th network speed, $V_i$ is the network speed centralized evaluation result of the i-th group, $n_i$ is the data quantity of the i-th group, and $M$ is the total number of groups. These parameters are obtained through data acquisition and statistics. The centralized value of the network speed time sequence data in the first time interval is calculated with this formula, and the resulting centralized network speed of the group, that is, the network speed representative of the interval, is taken as the first time zone network speed centralized evaluation value. Centralized evaluation of the data effectively reduces the amount of data to be analyzed and improves analysis efficiency. Centralized values are likewise calculated for the network speed time sequence data of the second and third time intervals, generating the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value, which are added to the network speed parameters for network line analysis under different external environments, ensuring the environmental adaptability of the analysis result.
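As a minimal sketch, the duration-weighted form reconstructed above can be computed as follows; the function name, sample values, and dictionary keys are illustrative only.

```python
from typing import List, Tuple


def centralized_speed_evaluation(samples: List[Tuple[float, float]]) -> float:
    """Duration-weighted central value of network speed for one acquisition interval.

    samples: (t_k, v_k) pairs, where t_k is how long (seconds) the line held
    speed v_k (Mbps) within the interval.  Returns sum(t_k * v_k) / sum(t_k).
    """
    total_time = sum(t for t, _ in samples)
    if total_time == 0:
        raise ValueError("empty or zero-duration sample set")
    return sum(t * v for t, v in samples) / total_time


# One network line, three environment states (thunderstorm / clear / cloudy-rainy).
first_interval = [(30.0, 38.0), (90.0, 45.0), (60.0, 41.0)]    # thunderstorm weather
second_interval = [(60.0, 95.0), (120.0, 98.0)]                # clear weather
third_interval = [(45.0, 70.0), (75.0, 74.0), (60.0, 68.0)]    # cloudy and rainy weather

network_speed_parameter = {
    "first_time_zone": centralized_speed_evaluation(first_interval),
    "second_time_zone": centralized_speed_evaluation(second_interval),
    "third_time_zone": centralized_speed_evaluation(third_interval),
}
```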
Step S400: setting an audio quality evaluation index and a display quality evaluation index;
step S500: training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
specifically, when communication quality evaluation is performed, the sound level and the sound quality definition of a communication audio need to be evaluated, the fluency and the picture definition of a communication video need to be evaluated, a sound loudness index and a sound definition index are set as the audio quality evaluation indexes, a display fluency index and a display definition index serve as the display quality evaluation indexes, further, sample data collection is performed based on a preset time period, detection submodels are respectively constructed based on the sound loudness index, the sound definition index, the display fluency index and the display definition index, association submodels are respectively merged to generate an audio quality detection module and a display quality detection module, the modules are further merged to generate the communication quality detection model, when input information detection is performed, layer-by-layer identification division can be performed, detection of demand data is performed based on the corresponding submodels, and the output results of the submodels are identified and integrated to serve as the output results of the communication quality detection model.
Further, the step S500 of training a communication quality detection model according to the audio quality assessment indicator and the display quality assessment indicator further includes:
step S510: the audio quality evaluation index comprises a sound loudness index and a sound definition index;
step S520: the display quality evaluation index comprises a display fluency index and a display definition index;
step S530: collecting multiple groups of audio recording data and multiple groups of video recording data based on multiple groups of equipment communication noise recording data, multiple groups of network line number recording data and multiple groups of network speed recording data;
step S540: identifying the multiple groups of audio recording data according to the sound loudness indexes and the sound definition indexes to generate multiple groups of sound loudness identification results and multiple groups of sound definition identification results;
step S550: identifying the multiple groups of video recording data according to the display fluency index and the display definition index to generate multiple groups of display fluency identification results and multiple groups of display definition identification results;
step S560: training an audio quality detection module according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of sound loudness identification results and the multiple groups of sound definition identification results;
step S570: training a display quality detection module according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of display fluency identification results and the multiple groups of display definition identification results;
step S580: and combining the audio quality detection module and the display quality detection module to generate the communication quality detection model.
Specifically, the audio quality evaluation index and the display quality evaluation index are obtained. The audio quality evaluation index includes a sound loudness index and a sound definition index, used to determine the communication sound level and the clarity of the communication voice. The display quality evaluation index includes a display fluency index and a display definition index, used to determine the interval duration between frames, the playback fluency of the video, and the clarity of the picture. The multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, and the multiple groups of network speed recording data are determined, with the data in mutual correspondence, and several possible communication scenes are determined. Communication recording data is then acquired from multiple groups of equipment over a preset time period, yielding the multiple groups of audio recording data and the multiple groups of video recording data.
Index evaluation is further performed on the multiple groups of audio recording data based on the sound loudness index and the sound definition index. Preferably, index evaluation intervals are set and the indexes are graded, giving multi-dimensional, multi-grade indexes corresponding to unqualified, good, and excellent respectively. Index identification is performed with the evaluation intervals as the reference, generating the multiple groups of sound loudness identification results and the multiple groups of sound definition identification results, with one group of audio recording data corresponding to one group of identification results. Similarly, the multiple groups of video recording data are graded based on the display fluency index and the display definition index, data identification is performed based on the grading results, and the multiple groups of display fluency identification results and the multiple groups of display definition identification results are generated.
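For illustration, grading a measured index value against evaluation intervals might look like the sketch below; the interval boundaries and the example scores are assumed values, not thresholds given in the patent.

```python
from typing import List, Tuple

# Illustrative evaluation intervals for one index scored in [0, 1]; the boundary
# values are assumptions for demonstration, not values taken from the patent.
GRADE_INTERVALS: List[Tuple[float, str]] = [
    (0.85, "excellent"),
    (0.60, "good"),
    (0.00, "unqualified"),
]


def identify_index(value: float) -> str:
    """Return the grade label of the evaluation interval that contains the value."""
    for lower_bound, label in GRADE_INTERVALS:
        if value >= lower_bound:
            return label
    return "unqualified"


# One group of audio recording data scored on the two audio quality indexes.
audio_identification_result = {
    "sound_loudness": identify_index(0.72),    # -> "good"
    "sound_definition": identify_index(0.91),  # -> "excellent"
}
```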
The audio quality detection module is trained with the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of sound loudness identification results, and the multiple groups of sound definition identification results as sample data. A sound loudness detection submodel and a sound definition detection submodel are embedded in the audio quality detection module and detect the loudness and the clarity of the input audio respectively. The display quality detection module is trained with the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of display fluency identification results, and the multiple groups of display definition identification results as sample data. A display fluency detection submodel and a display definition detection submodel are embedded in the display quality detection module and detect the fluency and the clarity of the input video. The audio quality detection module and the display quality detection module are combined to generate the communication quality detection model, a multi-dimensional detection model that performs targeted detection on each of the multi-dimensional detection indexes and then integrates the detection results as the model output.
Further, step S580 of the present application further includes:
step S581: training a sound loudness detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of sound loudness identification results;
step S582: training a sound definition detection sub-model according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of sound definition identification results;
step S583: combining the input layers of the sound loudness detection submodel and the sound definition detection submodel to generate the audio quality detection module;
step S584: training a display fluency detection sub-model according to the plurality of groups of equipment communication noise recording data, the plurality of groups of network line number recording data, the plurality of groups of network speed recording data and the plurality of groups of display fluency identification results;
step S585: training a display definition detection sub-model according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of display definition identification results;
step S586: and combining the input layers of the display fluency detection submodel and the display definition detection submodel to generate the display quality detection module.
Specifically, the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, and the multiple groups of sound loudness identification results are used as sample data. A sound loudness detection submodel framework is constructed based on machine learning and the sample data is input. Preferably, a training set and a verification set can be determined by dividing the samples, with the division ratio adjustable to the training scene; model training and verification are carried out to generate the sound loudness detection submodel and to ensure that the output accuracy of the model reaches a preset standard, the submodel being used to identify inputs and output matching results. Similarly, the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, and the multiple groups of sound definition identification results are used as sample data, and the sound definition detection submodel is generated through model training and used to detect the clarity of the communication sound source. In the embodiment of the application, the submodels are constructed in the same way. The sound loudness detection submodel and the sound definition detection submodel are combined through their input layers to generate the audio quality detection module. Illustratively, the audio quality detection module can be organized in multiple layers: the input audio data is divided by an input identification layer, the division results are transmitted to a quality detection layer containing the two detection submodels, index detection and judgment are performed on the division results by the matching submodel, and the integrated judgment results are output by the module.
Further, the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, and the multiple groups of display fluency identification results are used as sample data, and model training is performed to generate the display fluency detection submodel. The display definition detection submodel is trained based on the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, and the multiple groups of display definition identification results; the two submodels are of the same type. By combining the model input layers, the display quality detection module is generated: the input video data is identified and divided, the detection demand data of the two models is determined and transmitted to the corresponding detection submodels, the corresponding detection results are determined through data matching and evaluation, the results are identified, and the identified results are taken as the module output. Constructing detection models for the respective detection indexes guarantees the orderliness and accuracy of the data detection and improves detection efficiency.
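The layered composition described above can be pictured as a thin wrapper that shares one input layer, routes the divided input to each detection submodel, and integrates the labelled outputs. The sketch below assumes a generic predictor interface; it is not the patent's training or model code.

```python
from typing import Callable, Dict

# A trained submodel is treated as any callable mapping the shared feature vector
# (noise parameter, line-number features, speed parameters) to a grade label.
Submodel = Callable[[Dict[str, float]], str]


class QualityDetectionModule:
    """One shared input layer feeding several index-specific detection submodels."""

    def __init__(self, submodels: Dict[str, Submodel]):
        self.submodels = submodels

    def detect(self, features: Dict[str, float]) -> Dict[str, str]:
        # Identify/divide the input once, dispatch the same features to every
        # submodel, and integrate the labelled results as the module output.
        return {name: model(features) for name, model in self.submodels.items()}


class CommunicationQualityModel:
    """Audio quality module and display quality module combined into one model."""

    def __init__(self, audio_module: QualityDetectionModule,
                 display_module: QualityDetectionModule):
        self.audio_module = audio_module
        self.display_module = display_module

    def detect(self, features: Dict[str, float]) -> Dict[str, Dict[str, str]]:
        return {
            "audio_quality_detection_result": self.audio_module.detect(features),
            "display_quality_detection_result": self.display_module.detect(features),
        }
```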
Step S600: inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result;
step S700: and carrying out communication management according to the audio quality detection result and the display quality detection result.
Specifically, the equipment communication noise parameter, the network speed parameter, and the network line number are acquired for a first video conference object, and the acquired data is integrated according to the time sequence and input into the communication quality detection model. The data is identified and divided, the corresponding detection demand data is transmitted to the corresponding detection modules, and parameter information matching detection is performed by the submodels within the modules; the detection results are identified and integrated and output as the module detection results, yielding the audio quality detection result and the display quality detection result. Whether the audio quality detection result and the display quality detection result satisfy the audio quality evaluation index threshold interval and the display quality evaluation index threshold interval is then judged. If not, the communication lines are screened and evaluated, and the optimal communication line to switch to is determined. If the switched line still does not meet the standard, enhancement processing is performed on the audio or video in the interval that does not satisfy the evaluation index threshold, and the communication quality is adjusted through this external adjustment, achieving communication optimization management.
Further, as shown in fig. 3, the step S700 of performing communication management according to the audio quality detection result and the display quality detection result further includes:
step S710: acquiring an audio quality evaluation index threshold interval and a display quality evaluation index threshold interval;
step S720: when the audio quality detection result of the first video conference object does not meet the threshold interval of the audio quality assessment index, and/or the display quality detection result does not meet the threshold interval of the display quality assessment index, generating a first adjusting instruction;
step S730: performing dispersion evaluation on the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value of a plurality of network line numbers of the first video conference object according to the first adjustment instruction to generate a plurality of dispersion evaluation results;
step S740: screening the plurality of network line numbers according to the plurality of dispersion evaluation results and dispersion thresholds to generate a first recommended line number;
step S750: and carrying out video conference communication management according to the first recommended line number.
Specifically, threshold intervals are set for the indexes, and the audio quality assessment index threshold interval and the display quality assessment index threshold interval, that is, the qualification determination intervals of the indexes, are obtained. The first video conference object is the participating object to be analyzed. Audio quality detection and display quality detection are performed on this object, and whether the audio quality detection result and the display quality detection result satisfy the corresponding index threshold intervals is determined. When either result does not satisfy its corresponding index threshold interval, the first adjustment instruction, that is, the start instruction for communication adjustment, is generated.
Upon reception of the first adjustment instruction, the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value, and the third time zone network speed centralized evaluation value of each of the network line numbers covered by the first video conference object are extracted, and dispersion evaluation is performed on each communication line. Illustratively, the difference between the maximum value and the minimum value of the centralized evaluation values is calculated, and the result is used as the dispersion evaluation result to characterize the communication stability of the network line. Dispersion calculation is performed on the plurality of network line numbers respectively to generate the plurality of dispersion evaluation results. The dispersion threshold, that is, the critical value for determining whether a network line's dispersion is qualified, is then set, and the plurality of dispersion evaluation results are judged against it. The dispersion evaluation results smaller than the dispersion threshold are determined by screening, and the corresponding line numbers are taken as the first recommended line number, that is, the qualified network line numbers. Communication line switching and adjustment are performed based on the first recommended line number to adjust the communication quality of the line.
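As a concrete reading of the dispersion screening, the sketch below takes the max-minus-min range of the three time-zone evaluation values as the dispersion and keeps the line numbers whose dispersion falls below the threshold; the line data and the threshold value are illustrative assumptions.

```python
from typing import Dict, List


def dispersion(evaluations: Dict[str, float]) -> float:
    """Range (max - min) of the three time-zone network speed centralized evaluation values."""
    values = list(evaluations.values())
    return max(values) - min(values)


def first_recommended_lines(lines: Dict[str, Dict[str, float]],
                            dispersion_threshold: float) -> List[str]:
    """Keep the network line numbers whose dispersion stays below the threshold."""
    return [number for number, evals in lines.items()
            if dispersion(evals) < dispersion_threshold]


lines = {
    "line-01": {"first_time_zone": 41.5, "second_time_zone": 97.2, "third_time_zone": 70.4},
    "line-02": {"first_time_zone": 82.0, "second_time_zone": 95.1, "third_time_zone": 88.7},
}
first_recommended = first_recommended_lines(lines, dispersion_threshold=20.0)  # -> ["line-02"]
```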
Further, step S750 of the present application further includes:
step S751: acquiring communication meteorological parameters of the first video conference object according to the positioning coordinate information;
step S752: according to the communication meteorological parameters, screening the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value, and extracting a first video conference object network speed centralized evaluation value;
step S753: sequencing the first recommended line number according to the first video conference object network speed centralized evaluation value to generate a second recommended line number;
step S754: and carrying out video conference communication management according to the second recommended line number.
Specifically, the first recommended line number is a preliminary recommendation result. The communication meteorological parameter of the first video conference object is collected based on the positioning coordinate information; different positioning coordinates correspond to different communication meteorological parameters. With the communication meteorological parameter of the conference object determined and used as the screening standard, the first recommended line number is screened a second time: the parameter is matched against the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value, and the third time zone network speed centralized evaluation value. For example, when the communication meteorological parameter in the positioning area of the conference object indicates thunderstorm weather, the first time zone network speed centralized evaluation value is taken as the screening result. The first video conference object network speed centralized evaluation value is thus generated for each candidate line, the lines are sorted from high to low by that value, and the line numbers are matched back to generate the second recommended line number, which is the final recommended line number. Because weather is an influence factor on line stability, matching the current weather brings the recommendation closer to the actual conditions of the conference object before the conference, so the communication line adjustment fits the situation more closely.
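The secondary screening can be sketched as selecting the time-zone evaluation value that matches the current communication meteorological parameter and then sorting the preliminarily recommended lines by that value, from high to low; the weather-to-time-zone mapping keys are assumed labels.

```python
from typing import Dict, List

# Assumed mapping from the communication meteorological parameter to the matching time zone.
WEATHER_TO_TIME_ZONE = {
    "thunderstorm": "first_time_zone",
    "clear": "second_time_zone",
    "cloudy_rainy": "third_time_zone",
}


def second_recommended_lines(first_recommended: List[str],
                             lines: Dict[str, Dict[str, float]],
                             weather: str) -> List[str]:
    """Sort the preliminarily recommended line numbers by the weather-matched
    network speed centralized evaluation value, from high to low."""
    time_zone = WEATHER_TO_TIME_ZONE[weather]
    return sorted(first_recommended,
                  key=lambda number: lines[number][time_zone],
                  reverse=True)


# Reusing `lines` and `first_recommended` from the previous sketch:
# second_recommended = second_recommended_lines(first_recommended, lines, weather="thunderstorm")
```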
Further, step S754 of the present application further includes:
step S7541: performing communication quality detection according to the second recommended line number, the first video conference object equipment communication noise parameter and the first video conference object network speed centralized evaluation value, and judging whether the audio quality evaluation index threshold interval and the display quality evaluation index threshold interval are met;
step S7542: if the audio quality evaluation index threshold interval is not met, performing audio enhancement processing on the first video conference object;
step S7543: and if the display quality evaluation index threshold interval is not met, performing video enhancement processing on the first video conference object.
Specifically, the optimal network line is determined based on the second recommended line number, and an overall communication quality evaluation is performed by combining the first video conference object equipment communication noise parameter and the first video conference object network speed centralized evaluation value. Index evaluation is performed with the communication quality detection model to obtain the corresponding index evaluation results, and whether the audio quality evaluation result satisfies the audio quality evaluation index threshold interval and whether the video quality evaluation result satisfies the display quality evaluation index threshold interval are judged. When both are satisfied, the current communication quality is qualified. When the audio quality evaluation index threshold interval is not satisfied, audio enhancement processing can be performed, for example by denoising the audio transmitted in real time or by enhanced transmission after voice recognition, to improve the audio quality delivered to the communication equipment. When the display quality evaluation index threshold interval is not satisfied, video quality optimization can be performed, for example by denoising and enhancing the video transmitted in real time, so that the video quality meets the threshold requirement.
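A minimal sketch of the final check and the enhancement branching follows; the threshold intervals, the numeric result format, and the returned action flags are placeholders for whatever detection output and processing chain are actually used.

```python
from typing import Dict, Tuple


def outside_interval(result: float, threshold_interval: Tuple[float, float]) -> bool:
    """True when a detection result falls outside its evaluation index threshold interval."""
    lower, upper = threshold_interval
    return not (lower <= result <= upper)


def manage_communication(audio_result: float, display_result: float,
                         audio_interval: Tuple[float, float],
                         display_interval: Tuple[float, float]) -> Dict[str, bool]:
    """Decide which enhancement path to trigger after switching to the recommended line."""
    return {
        "audio_enhancement": outside_interval(audio_result, audio_interval),      # e.g. denoise live audio
        "video_enhancement": outside_interval(display_result, display_interval),  # e.g. enhance live video
    }


actions = manage_communication(audio_result=0.58, display_result=0.90,
                               audio_interval=(0.60, 1.00), display_interval=(0.60, 1.00))
# -> {"audio_enhancement": True, "video_enhancement": False}
```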
The intelligent detection method for the communication quality of the video conference, provided by the embodiment of the application, has the following technical effects:
1. According to the intelligent detection method for video conference communication quality provided by the application, multiple groups of network environment parameters are obtained based on the positioning coordinate information of the participating objects; the communication equipment information is traversed to obtain equipment communication noise parameters; audio quality evaluation indexes and display quality evaluation indexes are set and a communication quality detection model is trained; model detection is performed on the equipment communication noise parameters, the network speed parameters, and the network line numbers to obtain audio quality detection results and display quality detection results; and communication management is performed based on those results. This solves the technical problems in the prior art that, when communication video quality is detected, the detection method is not intelligent enough, the process is cumbersome, deviation easily occurs, and the accuracy of the detection results is insufficient, which in turn affects subsequent communication adjustment.
2. By constructing a multi-dimensional index detection model, targeted index detection and result summarization are performed on the input data, which effectively improves detection accuracy and detection efficiency. Adjustment judgments are made based on the detection results, and adaptive communication adjustment, including line switching or communication video enhancement, is performed according to the real-time communication conditions, achieving intelligent communication control.
Example two
Based on the same inventive concept as the intelligent detection method for the communication quality of the video conference in the foregoing embodiment, as shown in fig. 4, the present application provides an intelligent detection system for the communication quality of the video conference, where the system includes:
the information acquisition module 11 is configured to acquire basic information of a plurality of participant objects, where the basic information of the plurality of participant objects includes location coordinate information and communication device information;
an environment parameter obtaining module 12, where the environment parameter obtaining module 12 is configured to traverse the positioning coordinate information and obtain multiple sets of network environment parameters, where the multiple sets of network environment parameters include network line numbers and network speed parameters, and the network line numbers and the network speed parameters are in one-to-one correspondence;
the noise parameter acquisition module 13 is configured to traverse the communication device information, and acquire a device communication noise parameter;
the index setting module 14, the index setting module 14 is used for setting an audio quality evaluation index and a display quality evaluation index;
the model training module 15, where the model training module 15 is configured to train a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
the quality detection module 16, where the quality detection module 16 is configured to input the equipment communication noise parameter, the network speed parameter, and the network line number into the communication quality detection model, and output an audio quality detection result and a display quality detection result;
and the communication management module 17, wherein the communication management module 17 is used for performing communication management according to the audio quality detection result and the display quality detection result.
Further, the system further comprises:
the data matching module is used for matching network speed time sequence data in a plurality of groups of first time intervals, network speed time sequence data in a plurality of groups of second time intervals and network speed time sequence data in a plurality of groups of third time intervals according to the network line number;
the centralized value generating module is used for traversing the network speed time sequence data in the multiple groups of first time intervals, the network speed time sequence data in the multiple groups of second time intervals and the network speed time sequence data in the multiple groups of third time intervals to carry out centralized value distribution, and generating a first time zone network speed centralized evaluation value, a second time zone network speed centralized evaluation value and a third time zone network speed centralized evaluation value;
a parameter adding module, configured to add the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value, and the third time zone network speed centralized evaluation value to the network speed parameter.
Further, the system further comprises:
the audio quality evaluation index analysis module is used for analyzing the audio quality evaluation indexes, wherein the audio quality evaluation indexes comprise a sound loudness index and a sound definition index;
the display quality evaluation index analysis module is used for analyzing the display quality evaluation indexes, wherein the display quality evaluation indexes comprise display fluency indexes and display definition indexes;
the data acquisition module is used for acquiring a plurality of groups of audio recording data and a plurality of groups of video recording data based on a plurality of groups of equipment communication noise recording data, a plurality of groups of network line number recording data and a plurality of groups of network speed recording data;
the audio data identification module is used for identifying the plurality of groups of audio recording data according to the sound loudness index and the sound definition index to generate a plurality of groups of sound loudness identification results and a plurality of groups of sound definition identification results;
the video data identification module is used for identifying the plurality of groups of video recording data according to the display fluency index and the display definition index to generate a plurality of groups of display fluency identification results and a plurality of groups of display definition identification results;
the audio quality detection module training module is used for training the audio quality detection module according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of sound loudness identification results and the multiple groups of sound definition identification results;
a display quality detection module training module for training the display quality detection module according to the plurality of sets of device communication noise recording data, the plurality of sets of network line number recording data, the plurality of sets of network speed recording data, the plurality of sets of display fluency identification results, and the plurality of sets of display definition identification results;
and the model generation module is used for combining the audio quality detection module and the display quality detection module to generate the communication quality detection model.
Further, the system further comprises:
the sound loudness detection submodel training module is used for training a sound loudness detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of sound loudness identification results;
the voice definition detection submodel training module is used for training a voice definition detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of voice definition identification results;
the audio quality detection module generation module is used for combining the input layers of the sound loudness detection submodel and the sound definition detection submodel to generate the audio quality detection module;
the display fluency detection sub-model training module is used for training the display fluency detection sub-model according to the plurality of groups of equipment communication noise recording data, the plurality of groups of network line number recording data, the plurality of groups of network speed recording data and the plurality of groups of display fluency identification results;
the display definition detection submodel training module is used for training a display definition detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of display definition identification results;
and the display quality detection module generation module is used for combining the input layers of the display fluency detection submodel and the display definition detection submodel to generate the display quality detection module.
Further, the system further comprises:
the interval acquisition module is used for acquiring an audio quality evaluation index threshold interval and a display quality evaluation index threshold interval;
an instruction generating module, configured to generate a first adjustment instruction when the audio quality detection result of the first video conference object does not satisfy the audio quality assessment indicator threshold interval and/or the display quality detection result does not satisfy the display quality assessment indicator threshold interval;
a dispersion evaluation module, configured to perform dispersion evaluation on the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value, and the third time zone network speed centralized evaluation value of the plurality of network line numbers of the first video conference object according to the first adjustment instruction, and generate a plurality of dispersion evaluation results;
the first recommended line number generation module is used for screening the network line numbers according to the dispersion evaluation results and the dispersion threshold values to generate a first recommended line number;
and the conference communication management module is used for carrying out video conference communication management according to the first recommended line number.
Further, the system further comprises:
the meteorological parameter acquisition module is used for acquiring communication meteorological parameters of the first video conference object according to the positioning coordinate information;
an evaluation value extraction module, configured to extract a first video conference object network speed centralized evaluation value by screening from the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value, and the third time zone network speed centralized evaluation value according to the communication weather parameter;
the second recommended line number generation module is used for sequencing the first recommended line number according to the first video conference object network speed centralized evaluation value to generate a second recommended line number;
and the conference management module is used for carrying out video conference communication management according to the second recommended line number.
Further, the system further comprises:
a threshold judgment module, configured to perform communication quality detection according to the second recommended line number, the first video conference object device communication noise parameter, and the first video conference object network speed centralized evaluation value, and judge whether the audio quality assessment index threshold interval and the display quality assessment index threshold interval are satisfied;
the audio enhancement module is used for carrying out audio enhancement processing on the first video conference object if the audio quality evaluation index threshold interval is not met;
and the video enhancement module is used for carrying out video enhancement processing on the first video conference object if the display quality evaluation index threshold interval is not met.
In the present specification, through the foregoing detailed description of the intelligent detection method for video conference communication quality, those skilled in the art can clearly understand the intelligent detection method and system for video conference communication quality of this embodiment.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An intelligent detection method for video conference communication quality is characterized by comprising the following steps:
acquiring basic information of a plurality of participant objects, wherein the basic information of the plurality of participant objects comprises positioning coordinate information and communication equipment information;
traversing the positioning coordinate information to obtain a plurality of groups of network environment parameters, wherein the plurality of groups of network environment parameters comprise network line numbers and network speed parameters, and the network line numbers correspond to the network speed parameters one to one;
traversing the communication equipment information to obtain an equipment communication noise parameter;
setting an audio quality evaluation index and a display quality evaluation index;
training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result;
and carrying out communication management according to the audio quality detection result and the display quality detection result.
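The method claimed above is an end-to-end pipeline; the following Python sketch wires its steps together with stubbed inputs and a toy stand-in for the trained communication quality detection model. All field names, score scales and the 0.6 cut-off are illustrative assumptions, not requirements of the claim.

```python
def toy_model(noise, speed, line_number):
    # Placeholder for the trained communication quality detection model:
    # returns (audio quality score, display quality score) in 0..1.
    return max(0.0, 1.0 - noise), min(1.0, speed / 100.0)

def detect_and_manage(participants, quality_model):
    results = {}
    for person in participants:
        # Inputs derived from positioning coordinates and device information.
        line_number, net_speed = person["line_number"], person["net_speed"]
        noise = person["device_noise"]
        # Model detection on noise parameter, speed parameter and line number.
        audio_score, display_score = quality_model(noise, net_speed, line_number)
        # Communication management decision based on both detection results.
        results[person["id"]] = {
            "audio": audio_score,
            "display": display_score,
            "needs_adjustment": audio_score < 0.6 or display_score < 0.6,
        }
    return results

people = [{"id": "p1", "line_number": 3, "net_speed": 48.0, "device_noise": 0.2}]
print(detect_and_manage(people, toy_model))
```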
2. The method as claimed in claim 1, wherein the traversing of the positioning coordinate information to obtain a plurality of groups of network environment parameters, the plurality of groups of network environment parameters comprising network line numbers and network speed parameters in one-to-one correspondence, comprises:
matching a plurality of groups of network speed time sequence data in a first time interval, a plurality of groups of network speed time sequence data in a second time interval and a plurality of groups of network speed time sequence data in a third time interval according to the network line number;
traversing the multiple groups of network speed time sequence data in the first time interval, the multiple groups of network speed time sequence data in the second time interval and the multiple groups of network speed time sequence data in the third time interval to perform centralized value evaluation, and generating a first time zone network speed centralized evaluation value, a second time zone network speed centralized evaluation value and a third time zone network speed centralized evaluation value;
and adding the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value into the network speed parameter.
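One way to realize the centralized evaluation value of the claim above is a central-tendency statistic per time interval; the sketch below uses the median, which is only an assumed choice.

```python
from statistics import median

def centralized_evaluation_values(speed_series_by_interval):
    # One centralized evaluation value (here: the median) per time interval of
    # network speed time sequence data for a single network line number.
    return [median(series) for series in speed_series_by_interval]

# Example: three time intervals of speed samples (Mbps) for one line number.
first, second, third = [50, 52, 48, 51], [30, 80, 33, 31], [60, 61, 59, 62]
print(centralized_evaluation_values([first, second, third]))  # -> [50.5, 32.0, 60.5]
```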
3. The method of claim 2, wherein the training of a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index comprises:
the audio quality evaluation index comprises a sound loudness index and a sound definition index;
the display quality evaluation index comprises a display fluency index and a display definition index;
collecting multiple groups of audio recording data and multiple groups of video recording data based on multiple groups of equipment communication noise recording data, multiple groups of network line number recording data and multiple groups of network speed recording data;
identifying the multiple groups of audio recording data according to the sound loudness index and the sound definition index to generate multiple groups of sound loudness identification results and multiple groups of sound definition identification results;
identifying the multiple groups of video recording data according to the display fluency index and the display definition index to generate multiple groups of display fluency identification results and multiple groups of display definition identification results;
training an audio quality detection module according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of sound loudness identification results and the multiple groups of sound definition identification results;
training a display quality detection module according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data, the multiple groups of display fluency identification results and the multiple groups of display definition identification results;
and combining the audio quality detection module and the display quality detection module to generate the communication quality detection model.
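The training step of the claim above can be sketched as two supervised regressors fed with the same recorded features (noise, line number, speed); random forests and the synthetic labels below are stand-ins chosen only for illustration, not the model family required by the claim.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
noise = rng.random(200)                 # equipment communication noise, 0..1
line = rng.integers(1, 6, size=200)     # network line numbers 1..5
speed = rng.uniform(10, 100, size=200)  # network speed, Mbps
X = np.column_stack([noise, line, speed])

# Synthetic identification results standing in for the recorded-data labels.
y_audio = np.column_stack([1 - noise, speed / 100])           # loudness, sound definition
y_display = np.column_stack([speed / 100, 1 - 0.5 * noise])   # fluency, display definition

audio_module = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_audio)
display_module = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_display)

def communication_quality_model(noise_value, line_number, speed_value):
    # Combined model: run both detection modules on the same feature vector.
    x = np.array([[noise_value, line_number, speed_value]])
    return audio_module.predict(x)[0], display_module.predict(x)[0]

print(communication_quality_model(0.2, 3, 80.0))
```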
4. The method of claim 3, further comprising:
training a sound loudness detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of sound loudness identification results;
training a sound definition detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of sound definition identification results;
combining the input layers of the sound loudness detection submodel and the sound definition detection submodel to generate the audio quality detection module;
training a display fluency detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of display fluency identification results;
training a display definition detection submodel according to the multiple groups of equipment communication noise recording data, the multiple groups of network line number recording data, the multiple groups of network speed recording data and the multiple groups of display definition identification results;
and combining the input layers of the display fluency detection submodel and the display definition detection submodel to generate the display quality detection module.
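Reading "combining the input layers" in the claim above as feeding both sub-models from one shared input, a minimal PyTorch sketch under that assumption looks like this (layer sizes and the feature ordering are arbitrary illustrative choices):

```python
import torch
from torch import nn

class SubModel(nn.Module):
    # One detection sub-model (e.g. sound loudness or sound definition).
    def __init__(self, in_features=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)

class AudioQualityModule(nn.Module):
    # Both sub-models share the same input (noise, line number, network speed);
    # their outputs are concatenated into one audio quality prediction.
    def __init__(self):
        super().__init__()
        self.loudness = SubModel()
        self.definition = SubModel()

    def forward(self, x):
        return torch.cat([self.loudness(x), self.definition(x)], dim=-1)

x = torch.tensor([[0.2, 3.0, 48.0]])
print(AudioQualityModule()(x))  # shape (1, 2): [loudness score, definition score]
```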
5. The method of claim 4, wherein the performing of communication management according to the audio quality detection result and the display quality detection result comprises:
acquiring an audio quality evaluation index threshold interval and a display quality evaluation index threshold interval;
when the audio quality detection result of a first video conference object does not meet the audio quality evaluation index threshold interval and/or the display quality detection result does not meet the display quality evaluation index threshold interval, generating a first adjustment instruction;
according to the first adjustment instruction, carrying out dispersion evaluation on the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value of the plurality of network line numbers of the first video conference object to generate a plurality of dispersion evaluation results;
screening the plurality of network line numbers according to the plurality of dispersion evaluation results and a dispersion threshold to generate a first recommended line number;
and carrying out video conference communication management according to the first recommended line number.
6. The method of claim 5, further comprising:
acquiring communication meteorological parameters of the first video conference object according to the positioning coordinate information;
according to the communication meteorological parameters, screening the first time zone network speed centralized evaluation value, the second time zone network speed centralized evaluation value and the third time zone network speed centralized evaluation value, and extracting a first video conference object network speed centralized evaluation value;
sorting the first recommended line number according to the first video conference object network speed centralized evaluation value to generate a second recommended line number;
and carrying out video conference communication management according to the second recommended line number.
7. The method of claim 6, further comprising:
performing communication quality detection according to the second recommended line number, the first video conference object equipment communication noise parameter and the first video conference object network speed centralized evaluation value, and judging whether the audio quality evaluation index threshold interval and the display quality evaluation index threshold interval are met;
if the audio quality evaluation index threshold interval is not met, performing audio enhancement processing on the first video conference object;
and if the display quality evaluation index threshold interval is not met, performing video enhancement processing on the first video conference object.
8. An intelligent detection system for video conference communication quality, the system comprising:
an information acquisition module, configured to acquire basic information of a plurality of participant objects, wherein the basic information of the plurality of participant objects comprises positioning coordinate information and communication equipment information;
an environment parameter obtaining module, configured to traverse the positioning coordinate information and obtain multiple sets of network environment parameters, where the multiple sets of network environment parameters include a network line number and a network speed parameter, and the network line number corresponds to the network speed parameter one to one;
the noise parameter acquisition module is used for traversing the communication equipment information to acquire an equipment communication noise parameter;
the index setting module is used for setting audio quality evaluation indexes and display quality evaluation indexes;
the model training module is used for training a communication quality detection model according to the audio quality evaluation index and the display quality evaluation index;
the quality detection module is used for inputting the equipment communication noise parameter, the network speed parameter and the network line number into the communication quality detection model, and outputting an audio quality detection result and a display quality detection result;
and the communication management module is used for carrying out communication management according to the audio quality detection result and the display quality detection result.
CN202211428544.9A 2022-11-15 2022-11-15 Intelligent detection method and system for video conference communication quality Pending CN115914544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211428544.9A CN115914544A (en) 2022-11-15 2022-11-15 Intelligent detection method and system for video conference communication quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211428544.9A CN115914544A (en) 2022-11-15 2022-11-15 Intelligent detection method and system for video conference communication quality

Publications (1)

Publication Number Publication Date
CN115914544A true CN115914544A (en) 2023-04-04

Family

ID=86496670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211428544.9A Pending CN115914544A (en) 2022-11-15 2022-11-15 Intelligent detection method and system for video conference communication quality

Country Status (1)

Country Link
CN (1) CN115914544A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116668737A (en) * 2023-08-02 2023-08-29 成都梵辰科技有限公司 Ultra-high definition video definition testing method and system based on deep learning
CN116668737B (en) * 2023-08-02 2023-10-20 成都梵辰科技有限公司 Ultra-high definition video definition testing method and system based on deep learning
CN116760653A (en) * 2023-08-17 2023-09-15 北京博数智源人工智能科技有限公司 Intelligent operation and maintenance method and system for remote video conference
CN116760653B (en) * 2023-08-17 2023-10-20 北京博数智源人工智能科技有限公司 Intelligent operation and maintenance method and system for remote video conference

Similar Documents

Publication Publication Date Title
CN115914544A (en) Intelligent detection method and system for video conference communication quality
CN111353413A (en) Low-missing-report-rate defect identification method for power transmission equipment
CN110675395A (en) Intelligent on-line monitoring method for power transmission line
CN109903053A (en) A kind of anti-fraud method carrying out Activity recognition based on sensing data
CN110348490A (en) A kind of soil quality prediction technique and device based on algorithm of support vector machine
CN114066848A (en) FPCA appearance defect visual inspection system
CN115131747A (en) Knowledge distillation-based power transmission channel engineering vehicle target detection method and system
CN116308958A (en) Carbon emission online detection and early warning system and method based on mobile terminal
CN112200238A (en) Hard rock tension-shear fracture identification method and device based on sound characteristics
CN113822907A (en) Image processing method and device
CN114022923A (en) Intelligent collecting and editing system
CN110580915B (en) Sound source target identification system based on wearable equipment
CN111611973A (en) Method, device and storage medium for identifying target user
CN112508946B (en) Cable tunnel anomaly detection method based on antagonistic neural network
CN112885356B (en) Voice recognition method based on voiceprint
CN112686105B (en) Fog concentration grade identification method based on video image multi-feature fusion
CN115205784A (en) Online examination monitoring method and system based on network video monitoring
CN114005054A (en) AI intelligence system of grading
CN117116280B (en) Speech data intelligent management system and method based on artificial intelligence
CN112885359B (en) Voice recognition system
CN109740858A (en) Automation aid decision-making system and method based on deep learning
CN115658887B (en) Broadcast fused media information collecting, editing and publishing management system based on cloud platform
CN116405863B (en) Stage sound equipment fault detection method and system based on data mining
CN116307934B (en) Recorded broadcast teaching course quality evaluation method and system based on cloud computing
CN111143688B (en) Evaluation method and system based on mobile news client

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination