CN117979089A - Live video processing method, device, equipment and medium - Google Patents

Live video processing method, device, equipment and medium

Info

Publication number
CN117979089A
Authority
CN
China
Prior art keywords: data, live video, filter, index, prediction
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202410013930.4A
Other languages: Chinese (zh)
Inventors: 陈军, 刘宏
Current Assignee: Shenzhen Pinkuo Information Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Shenzhen Pinkuo Information Technology Co., Ltd.
Priority to CN202410013930.4A
Publication of CN117979089A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a live video processing method, apparatus, electronic device, and storage medium. The method comprises the following steps: designing a callback function set and a filter library; acquiring key index data from the live video through a data acquisition callback function; using a filter manager to invoke filters in the filter set and performing data analysis and prediction on the key index data to obtain a data prediction result; comparing the real-time data of the live video with the data prediction result through a result comparison callback function to obtain a data comparison result; and, based on the data comparison result, adjusting the scheme of the live video through a scheme adjustment callback function to obtain the target live video. The application can improve the speed and accuracy of live video processing.

Description

Live video processing method, device, equipment and medium
Technical Field
The application belongs to the technical field of data processing, and particularly relates to a live video processing method, device, equipment, and medium.
Background
With the development of internet technology, live video streaming has risen rapidly; its convenience and immersiveness have greatly enriched people's lives, and people increasingly rely on live broadcasts. Live program planning (such as the selection and planning of live content, live style, and live time period) is a current research hotspot. However, because live data is massive, diverse, and real-time, live data analysis and trend prediction currently cannot be performed quickly and accurately. Therefore, a live video processing method is needed to address the technical problem that live data analysis and prediction are slow and inaccurate.
Disclosure of Invention
The application provides a live video processing method, device, equipment, and medium, which effectively address the problem of slow and inaccurate analysis and prediction of live video data.
In order to achieve the above object, an embodiment of the present application provides a live video processing method, including:
designing a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set;
acquiring key index data from the live video through the data acquisition callback function;
invoking filters in the filter set by means of the filter manager, and performing data analysis and prediction on the key index data to obtain a data prediction result;
comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result;
and, based on the data comparison result, performing scheme adjustment on the live video through the scheme adjustment callback function to obtain a target live video.
Optionally, the determination process of the key index includes:
acquiring original index data of a plurality of historical live videos, and determining evolution index data of each historical live video according to the original index data;
calculating the heat value of each historical live video;
and determining key indexes from the original indexes and the evolution indexes according to the heat value.
Optionally, the determining key indexes from the original indexes and the evolution indexes according to the heat value includes:
placing the original indexes and the evolution indexes into an index set, taking the data of each index in the index set as the independent variable and the heat value as the dependent variable, and establishing a function curve between each index and the heat value;
calculating the average value of each index in the index set, and calculating the median of these average values;
substituting the median as an X value into each function curve to obtain a Y value corresponding to each index; taking the minimum Y value as a reference, finding the difference between each Y value and the reference, and taking the difference as a thermal intensity value, thereby obtaining a first thermal intensity value corresponding to each index at the median;
taking the median as a center point, expanding a preset number of steps to the left and right of the center point by a preset step size, taking the value corresponding to each step as an X value, and calculating a second thermal intensity value corresponding to each index at that value;
calculating a comprehensive thermal intensity value corresponding to each index according to the first thermal intensity value and the second thermal intensity value;
and determining key indexes from the index set according to the comprehensive thermal intensity value.
Optionally, the invoking filters in the filter set by means of the filter manager to perform data analysis and prediction on the key index data and obtain a data prediction result includes:
the filter manager identifying the data type of the key index data;
determining a corresponding data processing flow according to the data type, and setting a filter type and a number of filters for each node in the data processing flow;
and, according to the data processing flow, the filter manager calling filters of the corresponding types and numbers to analyze and predict the key index data to obtain a data prediction result.
Optionally, the setting a filter type and a number of filters for each node in the data processing flow includes:
determining the filter type corresponding to each node according to the transaction type corresponding to that node in the data processing flow;
and determining the number of filters corresponding to each node according to the data volume of the key index data and the transaction-processing complexity of the corresponding node.
Optionally, before the filter manager invokes the corresponding types and numbers of filters to analyze and predict the key index data, the method further includes:
constructing a live video prediction model, wherein the live video prediction model comprises a spatial feature extraction network, a first time sequence feature extraction network, a second time sequence feature extraction network, and an output network, and the construction specifically includes:
taking the input of the spatial feature extraction network as the input of the live video prediction model, and connecting the inputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the output of the spatial feature extraction network;
and connecting the outputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the input of the output network, and taking the output of the output network as the output of the live video prediction model.
Optionally, the nodes in the data processing flow include a data cleaning node and a prediction analysis node, and the filter types include a data cleaning filter and an analysis prediction filter; the filter manager calling the corresponding types and numbers of filters to analyze and predict the key index data to obtain a data prediction result includes:
after the data cleaning filter performs the data cleaning operation, the filter manager calling the analysis prediction filter to input the cleaned data into the spatial feature extraction network of the live video prediction model to perform spatial feature extraction, obtaining spatial features;
calling the analysis prediction filter to input the spatial features into the first time sequence feature extraction network and the second time sequence feature extraction network respectively to perform time sequence feature extraction, obtaining first time sequence features and second time sequence features;
and calling the analysis prediction filter to input the first time sequence features and the second time sequence features into the output network to perform the analysis prediction operation, obtaining a prediction result.
The embodiment of the application also provides a live video processing device, which comprises:
The design module is used for designing a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set;
The acquisition module is used for acquiring key index data from the live video through the data acquisition callback function;
The prediction module is used for invoking filters in the filter set by means of the filter manager and performing data analysis and prediction on the key index data to obtain a data prediction result.
The comparison module is used for comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result.
The adjustment module is used for performing scheme adjustment on the live video through the scheme adjustment callback function based on the data comparison result to obtain a target live video.
The embodiment of the application also provides a computer device, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the live video processing method described above.
The embodiment of the application also provides a computer readable storage medium which stores computer instructions for causing a computer to execute the live video processing method.
The live video processing method provided by the embodiments of the application designs a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set; acquires key index data from the live video through the data acquisition callback function; invokes filters in the filter set by means of the filter manager to perform data analysis and prediction on the key index data and obtain a data prediction result; compares the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result; and, based on the data comparison result, adjusts the scheme of the live video through the scheme adjustment callback function to obtain a target live video. Collecting only the key index data of the live video ensures that the data is targeted and effective, which improves the accuracy of analysis and prediction; because the complete data set is not collected, the data processing speed is also improved. Having the filter manager invoke the filters in the filter set to analyze and process the data ensures orderly and rapid analysis and prediction.
Drawings
FIG. 1 is a flow chart of a live video processing method in one embodiment;
FIG. 2 is a model block diagram of a live video prediction model in one embodiment;
FIG. 3 is a block diagram of a live video processing device in one embodiment;
Fig. 4 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the application is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here are for purposes of illustration only and are not intended to limit the scope of the application. In addition, the technical features of the embodiments described below may be combined with each other as long as they do not conflict.
As shown in fig. 1, the embodiment of the present application provides a live video processing method. It should be understood that the method may be performed by any apparatus, device, platform, or device cluster having computing and processing capabilities; for example, the device may be an electronic device with computing capability such as a notebook computer, a central control device, a smart phone, a terminal computer, or a server. The method includes the following steps S1-S5:
S1, designing a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set.
In this embodiment, the purpose of designing the callback function set is to respond promptly through callbacks: the data acquisition callback function monitors and collects live video data in real time, the result comparison callback function quickly determines the deviation between the real-time data and the predicted data, and the scheme adjustment callback function adjusts the live scheme in time according to that deviation.
The filter library is used to process data quickly and accurately through filters, and it contains two kinds of filters: filters that process data (i.e., the filters in the filter set) and the filter that manages them (i.e., the filter manager).
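A minimal sketch of this structure in Python follows; all names (Filter, FilterManager, CallbackSet) are illustrative assumptions, not the patent's actual implementation.

from typing import Any, Callable

class Filter:
    """A data-processing filter; subclasses implement process()."""
    def process(self, data: Any) -> Any:
        raise NotImplementedError

class FilterManager:
    """Manages the filters in the filter set and dispatches data to them."""
    def __init__(self) -> None:
        self.filter_set: dict[str, list[Filter]] = {}

    def register(self, node: str, f: Filter) -> None:
        self.filter_set.setdefault(node, []).append(f)

    def run(self, flow: list[str], data: Any) -> Any:
        # Pass the data through each node of the processing flow in order.
        for node in flow:
            for f in self.filter_set.get(node, []):
                data = f.process(data)
        return data

class CallbackSet:
    """Bundles the three callbacks: collect, compare, adjust."""
    def __init__(self,
                 collect: Callable[[Any], Any],
                 compare: Callable[[Any, Any], Any],
                 adjust: Callable[[Any], Any]) -> None:
        self.collect = collect   # data acquisition callback
        self.compare = compare   # result comparison callback
        self.adjust = adjust     # scheme adjustment callback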
S2, acquiring key index data from the live video through the data acquisition callback function.
In this embodiment, the data acquisition callback function collects data from a data source (for example, a live platform's database or a real-time data stream) in real time. Because live video data is huge, rules are set in the data acquisition callback function so that only key index data is collected for the analysis and prediction of live data.
The key index determination process comprises the following steps A11-A13:
A11, acquiring original index data of a plurality of historical live videos, and determining evolution index data of each historical live video according to the original index data;
The original index data includes the live topic, video frame content, audience number, anchor basic information, audience basic information, and barrage (bullet-chat) information; the evolution index data includes user behavior data, audience number change data, and interaction frequency change data.
The video frame content includes the target object, background objects, video frame colors, and the sizes and positions of all objects in a video frame. The anchor basic information includes the anchor's gender, age, region, live style, number of fans, live category, and interaction information; the audience basic information includes the audience group and the audience's gender, age, region, and educational background.
A12, calculating the heat value of each historical live video;
in this embodiment, the calculating the popularity value of each of the historical live videos includes the following steps B11-B14:
B11, calculating audience heat index of each historical live video according to a first formula, wherein the first formula is as follows:
Wherein α i is the audience heat index of the ith historical live video, m i is the total number of the audience of the ith historical live video, n i is the average number of the audience in the unit time of the ith historical live video, u i is the number of the audience with the watching time smaller than a preset threshold in the ith historical live video, and v i is the average number of the newly added audience in the unit time of the ith historical live video;
B12, calculating the interaction heat index of each historical live video according to a second formula, wherein the second formula is as follows:
wherein βᵢ is the interaction heat index of the i-th historical live video, pᵢ is the total number of article words in the barrage information of the i-th historical live video, tᵢ is the average number of article words per unit time in that barrage information, rᵢ is the average number of newly added article words per unit time in that barrage information, qᵢ is the total number of emotion words in the barrage information of the i-th historical live video, hᵢ is the average number of emotion words per unit time in that barrage information, and kᵢ is the average number of newly added emotion words per unit time in that barrage information;
In this embodiment, article words are object-directed words, for example "sportswear", "white", "round collar", or "thickened", which reflect the audience's degree of attention to items; emotion words are sensory words, for example "like", "fine", or "nice", which show the audience's degree of preference.
B13, calculating the group heat index of each historical live video according to a third formula, wherein the third formula is as follows:
wherein γᵢ is the group heat index of the i-th historical live video, dᵢ is the total number of audience groups of the i-th historical live video, sᵢ is the average number of audience groups per unit time of the i-th historical live video, and zᵢ is the average number of newly added groups per unit time of the i-th historical live video;
The groups can be divided according to various classification methods such as age, region, occupation, and hobbies, and the same user can be classified into several groups; for example, user A can belong to both a student group and a basketball-lover group.
B14, inputting the audience heat index, the interaction heat index, and the group heat index into a heat value calculation model to obtain the heat value of each historical live video.
In this embodiment, the heat value calculation model is a regression analysis model.
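As a concrete illustration, the three indices can be combined by an ordinary regression model. The sketch below uses scikit-learn's LinearRegression; the training values and the choice of a linear model are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [audience_heat, interaction_heat, group_heat] for one video.
X_train = np.array([[0.8, 0.5, 0.3],
                    [0.4, 0.7, 0.6],
                    [0.9, 0.9, 0.8]])
y_train = np.array([0.55, 0.60, 0.90])  # known heat values (hypothetical)

heat_model = LinearRegression().fit(X_train, y_train)
heat_value = heat_model.predict([[0.7, 0.6, 0.5]])[0]  # heat value for a new video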
A13, determining key indexes from the original indexes and the evolution indexes according to the heat value.
The determining of key indexes from the original indexes and the evolution indexes according to the heat value includes the following steps C11-C16:
C11, placing the original indexes and the evolution indexes into an index set, taking the data of each index in the index set as the independent variable and the heat value as the dependent variable, and establishing a function curve between each index and the heat value;
The function curve may be a cubic function.
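For instance, a cubic curve can be fitted per index with numpy; the sample values below are hypothetical.

import numpy as np

index_values = np.array([10, 20, 30, 40, 50, 60])           # independent variable (one index)
heat_values = np.array([0.20, 0.35, 0.50, 0.70, 0.72, 0.71])  # dependent variable (heat value)

coeffs = np.polyfit(index_values, heat_values, deg=3)  # cubic fit
curve = np.poly1d(coeffs)   # callable Y = f(X)
y_at_40 = curve(40.0)       # heat value of this index at X = 40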
C12, calculating the average value of each index in the index set, and calculating the median of these average values;
In this embodiment, before the average value of each index is calculated, non-numeric data is converted into numeric data; for example, educational-background data can be converted from text into numbers using ordinal encoding, with primary school, middle school, high school, and college coded as 1, 2, 3, and 4 respectively.
For each index, at least one index value can be extracted from each historical live video; after values have been extracted from N historical live videos, the average value of each index is obtained by averaging.
Because the orders of magnitude of different indices may differ, this embodiment first finds the median of the per-index averages and performs the subsequent processing based on that median.
C13, substituting the median as an X value into each function curve to obtain a Y value corresponding to each index; taking the minimum Y value as a reference, finding the difference between each Y value and the reference, and taking the difference as a thermal intensity value, thereby obtaining a first thermal intensity value corresponding to each index at the median;
Substituting the median as the X value into the function curve corresponding to a given index yields a Y value that is the heat value of that index at the median.
C14, taking the median as a center point, expanding a preset number of steps to the left and right of the center point by a preset step size, taking the value corresponding to each step as an X value, and calculating a second thermal intensity value corresponding to each index at that value;
For example, if the median is 40, the step size is 5, and the number of steps is 3, then the first step corresponds to the values 35 and 45, the second step to 30 and 50, and the third step to 25 and 55.
C15, calculating a comprehensive thermal intensity value corresponding to each index according to the first thermal intensity value and the second thermal intensity value;
In this embodiment, the first thermal intensity value and the second thermal intensity values of each index are weighted and summed to obtain the comprehensive thermal intensity value corresponding to that index. The first thermal intensity value carries the highest weight; the weight of each second thermal intensity value is determined by the distance between the step's value and the center point, with steps closer to the center point weighted higher.
C16, determining key indexes from the index set according to the comprehensive thermal intensity value.
The top-ranked indexes, ordered by comprehensive thermal intensity value from high to low, are taken as the key indexes; an end-to-end sketch of steps C13-C16 follows.
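The sketch below strings steps C13-C16 together under stated assumptions: curves maps each index name to a fitted np.poly1d (as in the fitting sketch above), the weights are illustrative, and the same baseline is reused at every step point as a simplification of the description above.

import numpy as np

def key_indexes(curves, median, step=5.0, n_steps=3, w_first=0.5, top_k=5):
    """curves: dict mapping index name -> np.poly1d fitted above."""
    names = list(curves)
    # C13: first thermal intensity value at the median.
    y_at_median = {n: float(curves[n](median)) for n in names}
    base = min(y_at_median.values())
    first = {n: y_at_median[n] - base for n in names}
    # C14: second thermal intensity values around the median; the left and
    # right values of each step are averaged (an assumed simplification).
    offsets = [s * step for s in range(1, n_steps + 1)]
    # C15: weighted sum; the first value gets the largest weight and steps
    # closer to the center point get higher weights (illustrative scheme).
    step_weights = np.array([1.0 / s for s in range(1, n_steps + 1)])
    step_weights = (1.0 - w_first) * step_weights / step_weights.sum()
    comprehensive = {}
    for n in names:
        second = np.array([
            (curves[n](median - d) + curves[n](median + d)) / 2.0 - base
            for d in offsets
        ])
        comprehensive[n] = w_first * first[n] + float(step_weights @ second)
    # C16: take the top-ranked indexes as the key indexes.
    return sorted(names, key=comprehensive.get, reverse=True)[:top_k]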
S3, invoking filters in the filter set by means of the filter manager, and performing data analysis and prediction on the key index data to obtain a data prediction result.
In this embodiment, the filters in the filter set can process data in parallel, and different filters may perform the same operation or different operations; the filter manager is responsible for managing and scheduling the filters in the filter set.
The invoking of filters in the filter set by means of the filter manager to perform data analysis and prediction on the key index data and obtain a data prediction result includes the following steps D11-D13:
D11, the filter manager identifies the data type of the key index data;
The data types of the key index data include video frame data, audio data, user behavior data, barrage data, and the like.
D12, determining a corresponding data processing flow according to the data type, and setting a filter type and the number of filters for each node in the data processing flow;
For example, the data processing flow corresponding to video data is: image transcoding -> data cleaning -> data structuring -> data characterization -> result prediction; the data processing flow corresponding to barrage data is: text segmentation -> data cleaning -> data characterization -> result prediction.
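A minimal sketch of such a type-to-flow mapping; the node names mirror the examples above, and the mapping table itself is an illustrative assumption.

PROCESSING_FLOWS = {
    "video": ["image_transcoding", "data_cleaning",
              "data_structuring", "data_characterization", "result_prediction"],
    "barrage": ["text_segmentation", "data_cleaning",
                "data_characterization", "result_prediction"],
}

def flow_for(data_type: str) -> list[str]:
    """Return the node sequence for a given data type."""
    return PROCESSING_FLOWS[data_type]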
The setting of a filter type and a number of filters for each node in the data processing flow includes the following steps E11-E12:
E11, determining the filter type corresponding to each node according to the transaction type corresponding to that node in the data processing flow;
For example, the filter type of a text segmentation node is a word segmentation filter, and the filter type of a data cleaning node is a data cleaning filter.
E12, determining the number of filters corresponding to each node according to the data volume of the key index data and the transaction-processing complexity of the corresponding node.
If there is a large amount of barrage data, several word segmentation filters can be set to segment words in parallel; for example, 5 word segmentation filters can be set. If data cleaning is more complex than word segmentation, the number of data cleaning filters may be greater than 5.
When the same node uses several filters, the filter manager needs to determine the position of each piece of data within the flow node through a data offset, so as to align the data accurately and ensure data integrity and accuracy; a sketch of this follows.
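A sketch of offset-based alignment for a node that runs several filters in parallel; the use of ThreadPoolExecutor and the one-filter-per-chunk split are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor

def parallel_node(filters, chunks):
    """chunks: list of (offset, data) pairs, one chunk per filter."""
    with ThreadPoolExecutor(max_workers=len(filters)) as pool:
        futures = [
            pool.submit(lambda f, o, d: (o, f.process(d)), f, off, data)
            for f, (off, data) in zip(filters, chunks)
        ]
        results = [fut.result() for fut in futures]
    # Re-align by data offset to preserve ordering and data integrity.
    results.sort(key=lambda pair: pair[0])
    return [data for _, data in results]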
D13, according to the data processing flow, the filter manager calls filters of the corresponding types and numbers to analyze and predict the key index data, obtaining a data prediction result.
Before the filter manager invokes the corresponding types and numbers of filters to analyze and predict the key index data, the method further includes:
constructing a live video prediction model, wherein the live video prediction model comprises a spatial feature extraction network, a first time sequence feature extraction network, a second time sequence feature extraction network, and an output network; the construction specifically includes the following steps F11-F12:
F11, taking the input of the spatial feature extraction network as the input of the live video prediction model, and connecting the inputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the output of the spatial feature extraction network;
As shown in fig. 2, a model structure diagram of a live video prediction model in one embodiment is shown.
In this embodiment, the spatial feature extraction network may be a convolutional neural network, which is very good at extracting spatial features from video or images; by setting up the spatial feature extraction network, the image features of the live video can be extracted accurately.
The first time sequence feature extraction network may be a long short-term memory (LSTM) network, and the second time sequence feature extraction network may be a gated recurrent unit (GRU) network. Both are good at processing time series data and capturing temporal dependencies, so using these two networks allows the time sequence features of the data to be extracted more accurately.
F12, connecting the outputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the input of the output network, and taking the output of the output network as the output of the live video prediction model.
The output network may be a random forest or an XGBoost model, used to predict trends such as hot live topics and changes in audience numbers.
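A sketch of the prediction model in PyTorch under stated assumptions: all layer sizes are arbitrary, and a linear head stands in for the random forest / XGBoost output network, which would be trained separately on the concatenated time sequence features.

import torch
import torch.nn as nn

class LiveVideoPredictor(nn.Module):
    def __init__(self, hidden=64, out_dim=1):
        super().__init__()
        self.spatial = nn.Sequential(              # spatial feature extraction network
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 16 * 4 * 4 = 256 per frame
        self.lstm = nn.LSTM(256, hidden, batch_first=True)  # first time sequence branch
        self.gru = nn.GRU(256, hidden, batch_first=True)    # second time sequence branch
        self.head = nn.Linear(2 * hidden, out_dim)          # stand-in output network

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.spatial(frames.flatten(0, 1)).view(B, T, -1)
        h1, _ = self.lstm(feats)
        h2, _ = self.gru(feats)
        # Use the last time step of each branch as its time sequence feature.
        fused = torch.cat([h1[:, -1], h2[:, -1]], dim=-1)
        return self.head(fused)

# Example: predict a trend score from 8 frames of 64x64 video.
model = LiveVideoPredictor()
pred = model(torch.randn(2, 8, 3, 64, 64))  # -> shape (2, 1)

Here the linear head keeps the sketch end-to-end trainable; swapping in XGBoost would mean extracting the fused features first and fitting the tree model on them separately.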
The nodes in the data processing flow include a data cleaning node and a prediction analysis node, and the filter types include a data cleaning filter and an analysis prediction filter. The filter manager calling the corresponding types and numbers of filters to analyze and predict the key index data and obtain a data prediction result includes the following steps G11-G13:
G11, after the data cleaning filter performs the data cleaning operation, the filter manager calls the analysis prediction filter to input the cleaned data into the spatial feature extraction network of the live video prediction model to perform spatial feature extraction, obtaining spatial features;
G12, the analysis prediction filter is called to input the spatial features into the first time sequence feature extraction network and the second time sequence feature extraction network respectively to perform time sequence feature extraction, obtaining first time sequence features and second time sequence features;
G13, the analysis prediction filter is called to input the first time sequence features and the second time sequence features into the output network to perform the analysis prediction operation, obtaining a prediction result.
S4, comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result.
The prediction result may be the change trend of the live video's audience numbers or the popularity trend of the live content, and can be applied to actual business decisions such as live content planning, advertisement delivery strategies, and user interaction optimization. The result comparison callback function is used to quickly learn whether there is a deviation between the actual data and the predicted data, and what its size and direction are, so that the follow-up strategy can be determined.
S5, based on the data comparison result, performing scheme adjustment on the live video through the scheme adjustment callback function to obtain a target live video.
The scheme adjustment callback function is used to determine, according to the data comparison result, whether the live video scheme needs to be adjusted and how: for example, modifying the live content or adjusting the live schedule according to the size and direction of the deviation in the comparison result.
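A minimal sketch of the comparison and adjustment callbacks of steps S4-S5; the threshold and the adjustment actions are illustrative assumptions.

def compare_callback(real_time: float, predicted: float) -> dict:
    """Result comparison callback: deviation size and direction."""
    deviation = real_time - predicted
    return {"size": abs(deviation), "direction": "up" if deviation > 0 else "down"}

def adjust_callback(comparison: dict, threshold: float = 0.1) -> str:
    """Scheme adjustment callback: pick an action from the comparison result."""
    if comparison["size"] <= threshold:
        return "keep current live scheme"
    if comparison["direction"] == "down":
        return "modify live content"   # e.g., change topic or style
    return "adjust live schedule"      # e.g., extend the hot time slot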
Actual measurements of this live video scheme are as follows:
1) Data processing latency: the response time for real-time data processing and analysis is less than 2 seconds;
2) Prediction model accuracy: 92.8%;
3) User engagement improvement: audience engagement increased by more than 10%.
The live video scheme has been deployed in the "stings give a course" live-teaching platform, which serves more than 10,000 institutions nationwide and provides art training institutions with online customer-acquisition tools such as live broadcasting, recorded broadcasting, and content delivery; online training in piano, vocal music, and the like in particular has brought great commercial success.
It should be understood that although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a device for implementing the above live video processing method. The implementation scheme provided by the device is similar to that described for the method above, so for the specific limitations of the live video processing device embodiments below, reference may be made to the limitations of the live video processing method above; they are not repeated here.
In one embodiment, as shown in fig. 3, a schematic block diagram of a live video processing apparatus 30 according to an embodiment of the present application includes: the device comprises a design module 31, an acquisition module 32, a prediction module 33, a comparison module 34 and an adjustment module 35, wherein:
The design module 31 is configured to design a callback function set and a filter library, where the callback function set includes a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library includes a filter manager and a filter set.
And the acquisition module 32 is used for acquiring key index data from the live video through the data acquisition callback function.
The key index determination process comprises the following steps A21-A23:
A21, acquiring original index data of a plurality of historical live videos, and determining evolution index data of each historical live video according to the original index data;
A22, calculating the heat value of each historical live video;
The calculating of the heat value of each historical live video includes the following steps B21-B24:
B21, calculating the audience heat index of each historical live video according to a first formula, wherein the first formula is as follows:
wherein αᵢ is the audience heat index of the i-th historical live video, mᵢ is the total number of viewers of the i-th historical live video, nᵢ is the average number of viewers per unit time of the i-th historical live video, uᵢ is the number of viewers of the i-th historical live video whose watch time is less than a preset threshold, and vᵢ is the average number of newly added viewers per unit time of the i-th historical live video;
B22, calculating the interaction heat index of each historical live video according to a second formula, wherein the second formula is as follows:
wherein βᵢ is the interaction heat index of the i-th historical live video, pᵢ is the total number of article words in the barrage information of the i-th historical live video, tᵢ is the average number of article words per unit time in that barrage information, rᵢ is the average number of newly added article words per unit time in that barrage information, qᵢ is the total number of emotion words in the barrage information of the i-th historical live video, hᵢ is the average number of emotion words per unit time in that barrage information, and kᵢ is the average number of newly added emotion words per unit time in that barrage information;
B23, calculating the group heat index of each historical live video according to a third formula, wherein the third formula is as follows:
wherein γᵢ is the group heat index of the i-th historical live video, dᵢ is the total number of audience groups of the i-th historical live video, sᵢ is the average number of audience groups per unit time of the i-th historical live video, and zᵢ is the average number of newly added groups per unit time of the i-th historical live video;
B24, inputting the audience heat index, the interaction heat index, and the group heat index into a heat value calculation model to obtain the heat value of each historical live video.
A23, determining key indexes from the original indexes and the evolution indexes according to the heat value.
The determining of key indexes from the original indexes and the evolution indexes according to the heat value includes the following steps C21-C26:
C21, placing the original indexes and the evolution indexes into an index set, taking the data of each index in the index set as the independent variable and the heat value as the dependent variable, and establishing a function curve between each index and the heat value;
C22, calculating the average value of each index in the index set, and calculating the median of these average values;
C23, substituting the median as an X value into each function curve to obtain a Y value corresponding to each index; taking the minimum Y value as a reference, finding the difference between each Y value and the reference, and taking the difference as a thermal intensity value, thereby obtaining a first thermal intensity value corresponding to each index at the median;
C24, taking the median as a center point, expanding a preset number of steps to the left and right of the center point by a preset step size, taking the value corresponding to each step as an X value, and calculating a second thermal intensity value corresponding to each index at that value;
C25, calculating a comprehensive thermal intensity value corresponding to each index according to the first thermal intensity value and the second thermal intensity value;
C26, determining key indexes from the index set according to the comprehensive thermal intensity value.
The prediction module 33 is used for invoking filters in the filter set by means of the filter manager and performing data analysis and prediction on the key index data to obtain a data prediction result.
The invoking of filters in the filter set by means of the filter manager to perform data analysis and prediction on the key index data and obtain a data prediction result includes the following steps D21-D23:
D21, the filter manager identifies the data type of the key index data;
D22, determining a corresponding data processing flow according to the data type, and setting a filter type and a number of filters for each node in the data processing flow;
The setting of a filter type and a number of filters for each node in the data processing flow includes the following steps E21-E22:
E21, determining the filter type corresponding to each node according to the transaction type corresponding to that node in the data processing flow;
E22, determining the number of filters corresponding to each node according to the data volume of the key index data and the transaction-processing complexity of the corresponding node.
D23, according to the data processing flow, the filter manager calls filters of the corresponding types and numbers to analyze and predict the key index data, obtaining a data prediction result.
Before the filter manager invokes the corresponding types and numbers of filters to analyze and predict the key index data, the method further includes:
constructing a live video prediction model, wherein the live video prediction model comprises a spatial feature extraction network, a first time sequence feature extraction network, a second time sequence feature extraction network, and an output network; the construction specifically includes the following steps F21-F22:
F21, taking the input of the spatial feature extraction network as the input of the live video prediction model, and connecting the inputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the output of the spatial feature extraction network;
F22, connecting the outputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the input of the output network, and taking the output of the output network as the output of the live video prediction model.
The nodes in the data processing flow include a data cleaning node and a prediction analysis node, and the filter types include a data cleaning filter and an analysis prediction filter. The filter manager calling the corresponding types and numbers of filters to analyze and predict the key index data and obtain a data prediction result includes the following steps G21-G23:
G21, after the data cleaning filter performs the data cleaning operation, the filter manager calls the analysis prediction filter to input the cleaned data into the spatial feature extraction network of the live video prediction model to perform spatial feature extraction, obtaining spatial features;
G22, the analysis prediction filter is called to input the spatial features into the first time sequence feature extraction network and the second time sequence feature extraction network respectively to perform time sequence feature extraction, obtaining first time sequence features and second time sequence features;
G23, the analysis prediction filter is called to input the first time sequence features and the second time sequence features into the output network to perform the analysis prediction operation, obtaining a prediction result.
And the comparison module 34 is used for comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result.
And the adjustment module 35 is configured to perform scheme adjustment on the live video through the scheme adjustment callback function based on the data comparison result, so as to obtain a target live video.
The specific implementation of each embodiment of the live video processing device is basically the same as that of the method embodiments and is not repeated here.
The modules in the live video processing device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an embodiment, a computer device is further provided, which may be the device that executes the method in the above embodiments; its internal structure diagram may be as shown in fig. 4. The computer device includes a processor, a memory, an input/output interface (I/O), and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface.
The processor of the computer device is used to provide computing and control capabilities, and may be, but not limited to, a general purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, and the like. The processor may include one or more processors, including for example one or more central processing units (central processing unit, CPU), which in the case of a CPU, may be a single-core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for acceleration processing. The processor is used to call the program code and data in the memory to perform the steps of the method embodiments described above. Reference may be made specifically to the description of the method embodiments, and no further description is given here.
The memory of the computer device includes, but is not limited to, a non-volatile storage medium and an internal memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM) or external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM). The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium.
The input/output interface of the computer device is used to exchange information between the processor and the external device.
The communication interface of the computer device is used for communicating with an external terminal through a network connection.
The computer program is executed by a processor to implement a live video processing method.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the division of units/modules is merely a logical function division, and there may be another division manner when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. The coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, system or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable system. The computer instructions may be stored in or transmitted through a computer-readable storage medium, and may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium such as a floppy disk, hard disk, magnetic tape, or magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid state disk (SSD).
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any equivalent modifications or substitutions will be apparent to those skilled in the art within the scope of the present application, and are intended to be included within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. A live video processing method, the method comprising:
designing a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set;
acquiring key index data from the live video through the data acquisition callback function;
invoking filters in the filter set by means of the filter manager, and performing data analysis and prediction on the key index data to obtain a data prediction result;
comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result;
and, based on the data comparison result, performing scheme adjustment on the live video through the scheme adjustment callback function to obtain a target live video.
2. The method according to claim 1, wherein the determination process of the key index comprises:
acquiring original index data of a plurality of historical live videos, and determining evolution index data of each historical live video according to the original index data;
calculating the heat value of each historical live video;
and determining key indexes from the original indexes and the evolution indexes according to the heat value.
3. The method of claim 2, wherein the determining key indexes from the original indexes and evolution indexes according to the heat value comprises:
placing the original indexes and the evolution indexes into an index set, taking the data of each index in the index set as the independent variable and the heat value as the dependent variable, and establishing a function curve between each index and the heat value;
calculating the average value of each index in the index set, and calculating the median of these average values;
substituting the median as an X value into each function curve to obtain a Y value corresponding to each index; taking the minimum Y value as a reference, finding the difference between each Y value and the reference, and taking the difference as a thermal intensity value, thereby obtaining a first thermal intensity value corresponding to each index at the median;
taking the median as a center point, expanding a preset number of steps to the left and right of the center point by a preset step size, taking the value corresponding to each step as an X value, and calculating a second thermal intensity value corresponding to each index at that value;
calculating a comprehensive thermal intensity value corresponding to each index according to the first thermal intensity value and the second thermal intensity value;
and determining key indexes from the index set according to the comprehensive thermal intensity value.
4. The method of claim 1, wherein the invoking filters in the filter set by means of the filter manager to perform data analysis and prediction on the key index data and obtain a data prediction result comprises:
the filter manager identifying the data type of the key index data;
determining a corresponding data processing flow according to the data type, and setting a filter type and a number of filters for each node in the data processing flow;
and, according to the data processing flow, the filter manager calling filters of the corresponding types and numbers to analyze and predict the key index data to obtain a data prediction result.
5. The method of claim 4, wherein the setting a filter type and a number of filters for each node in the data processing flow comprises:
determining the filter type corresponding to each node according to the transaction type corresponding to that node in the data processing flow;
and determining the number of filters corresponding to each node according to the data volume of the key index data and the transaction-processing complexity of the corresponding node.
6. The method of claim 4, wherein before the filter manager invokes the corresponding types and numbers of filters to analyze and predict the key index data, the method further comprises:
constructing a live video prediction model, wherein the live video prediction model comprises a spatial feature extraction network, a first time sequence feature extraction network, a second time sequence feature extraction network, and an output network, and the construction specifically comprises:
taking the input of the spatial feature extraction network as the input of the live video prediction model, and connecting the inputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the output of the spatial feature extraction network;
and connecting the outputs of the first time sequence feature extraction network and the second time sequence feature extraction network to the input of the output network, and taking the output of the output network as the output of the live video prediction model.
7. The method of claim 6, wherein the nodes in the data processing flow include a data cleaning node and a prediction analysis node, the filter types include a data cleaning filter and an analysis prediction filter, and the filter manager calling the corresponding types and numbers of filters to analyze and predict the key index data to obtain a data prediction result comprises:
after the data cleaning filter performs the data cleaning operation, the filter manager calling the analysis prediction filter to input the cleaned data into the spatial feature extraction network of the live video prediction model to perform spatial feature extraction, obtaining spatial features;
calling the analysis prediction filter to input the spatial features into the first time sequence feature extraction network and the second time sequence feature extraction network respectively to perform time sequence feature extraction, obtaining first time sequence features and second time sequence features;
and calling the analysis prediction filter to input the first time sequence features and the second time sequence features into the output network to perform the analysis prediction operation, obtaining a prediction result.
8. A live video processing apparatus, the apparatus comprising:
a design module, used for designing a callback function set and a filter library, wherein the callback function set comprises a data acquisition callback function, a result comparison callback function, and a scheme adjustment callback function, and the filter library comprises a filter manager and a filter set;
an acquisition module, used for acquiring key index data from the live video through the data acquisition callback function;
a prediction module, used for invoking filters in the filter set by means of the filter manager and performing data analysis and prediction on the key index data to obtain a data prediction result;
a comparison module, used for comparing the real-time data of the live video with the data prediction result through the result comparison callback function to obtain a data comparison result;
and an adjustment module, used for performing scheme adjustment on the live video through the scheme adjustment callback function based on the data comparison result to obtain a target live video.
9. A computer device, the computer device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the live video processing method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a computer to perform the live video processing method of any one of claims 1-7.
CN202410013930.4A (filed 2024-01-02, priority date 2024-01-02): Live video processing method, device, equipment and medium. Pending; published as CN117979089A.

Priority Applications (1)

CN202410013930.4A: Live video processing method, device, equipment and medium; priority/filing date 2024-01-02

Publications (1)

Publication Number: CN117979089A; Publication Date: 2024-05-03

Family ID: 90850754

Family Applications (1)

CN202410013930.4A: Live video processing method, device, equipment and medium; priority/filing date 2024-01-02

Country Status (1)

CN: CN117979089A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination