[ summary of the invention ]
In view of this, embodiments of the present invention provide a data processing method and system based on a block chain.
In a first aspect, an embodiment of the present invention provides a data processing method based on a block chain, where the method includes:
s1, the server acquires advertisement video data to be launched, preprocesses the advertisement video data to generate an advertisement feature extraction task and releases the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement characteristic extraction task, a voice advertisement characteristic extraction task and an image advertisement characteristic extraction task;
s2, the node acquires a subtask from the block chain according to a preset constraint rule, the node generates a text feature bullet screen by processing the text advertisement feature extraction task, generates a voice feature bullet screen by processing the voice advertisement feature extraction task, and generates an image feature bullet screen by processing the image advertisement feature extraction task;
s3, the server acquires live video data, wherein the live video data comprise video data, barrage data, user identification and user characteristic data, and the server classifies the users through preset classification rules to generate advertisement barrage output strategies for different types of users;
s4, the server calculates live broadcast activity score H, if the live broadcast activity score H is larger than a threshold value, the next abrupt change frame of the live broadcast video is predicted through a prediction model, and if the prediction time interval is larger than the bullet screen display time, advertisement bullet screen output is conducted on different users according to an advertisement bullet screen insertion strategy.
As for the foregoing aspect and any possible implementation manner, there is further provided an implementation manner, where the preprocessing of the advertisement video data in S1 specifically includes:
s11, the server obtains the advertisement video data, and the audio separation is carried out on the advertisement video data to obtain the advertisement video and the advertisement audio;
and S12, performing frame extraction processing on the advertisement video to obtain an advertisement video frame image.
As to the above-mentioned aspect and any possible implementation manner, an implementation manner is further provided, where the generating of the text feature barrage by processing the text advertisement feature extraction task specifically includes:
s21, extracting subtitles of the advertisement video frame image;
s22, judging the repetition degree of the extracted subtitles, and generating a character characteristic bullet screen for the selected subtitles with the repetition times larger than a times threshold;
s23, if the repetition times are not more than the times threshold, respectively carrying out characteristic value Q judgment on the extracted subtitles, and generating a character characteristic bullet screen for the selected subtitles with the characteristic values more than the preset threshold, wherein the characteristic value Q is calculated through the following formula:
wherein f1 is the frequency of occurrence of the selected caption in the extracted captions, and f0 is the frequency of occurrence of the selected caption in the standard corpus;
the step of processing the voice advertisement feature extraction task to generate the voice feature bullet screen specifically comprises the following steps:
s24, converting the advertisement audio into characters;
s25, judging the repetition degree of the characters, and generating a voice characteristic bullet screen from the voice corresponding to the selected characters with the repetition times larger than the times threshold;
s26, if the repetition times is not more than the times threshold, judging a characteristic value Q of the characters, and generating a voice characteristic bullet screen by the voice corresponding to the selected characters with the characteristic value more than the preset threshold, wherein the characteristic value Q is calculated by the following formula:
wherein f1 is the frequency of occurrence of the selected text in the converted text, and f0 is the frequency of occurrence of the selected text in the standard corpus;
the step of processing the image advertisement feature extraction task to generate the image feature bullet screen specifically comprises the following steps:
s27, acquiring an advertisement video frame image set;
s28, acquiring a gray image of the advertisement video frame image, and performing edge detection on the gray image by using a Prewitt edge detection operator to generate an object contour image of the advertisement video frame image;
s29, carrying out binarization processing on the object outline image to generate a binarized image, and carrying out morphological closed operation processing to generate a closed operation processing image of the advertisement video frame image;
s30, acquiring a plurality of initial curves of the closed operation processing image;
s31, substituting the information of the initial curve into a total energy functional E, wherein the formula of the total energy functional E is as follows:
where f denotes the image intensity, a and b denote spatial variables, Ω denotes the interior of the initial curve, Ω̄ denotes the exterior of the initial curve, and χ represents a local circle centered at b;
s32, solving the minimum value of the total energy functional E through a steepest descent method to obtain an evolution equation of a level set function;
s33, continuously iterating the evolution equation by a finite difference method until the level set function reaches a stable state, and selecting points on a zero level set to form an object contour;
s34, extracting the object outline in the advertisement video frame image set, judging whether the objects are the same according to the object outline, selecting the same objects with the quantity larger than the quantity threshold value to segment the object image, and generating an image characteristic bullet screen based on the object image.
As to the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, where the S21 specifically includes:
s211, preprocessing each advertisement video frame image to extract a caption delivery area, wherein the preprocessing comprises the following steps:
performing brightness conversion on each advertisement video frame image by the following formula:
Pm(x,y)=0.3·CR(x,y)+0.59·CG(x,y)+0.11·CB(x,y);
wherein CR, CG and CB are the red, green and blue components, Pm(x,y) is the luminance image, and x and y are the pixel coordinates in each image;
then, noise processing is carried out through the following formula to obtain a denoised image Pn(x,y):
Then, an extracted image is created by the following formula:
P1(x,y)=Pn(x,y)-Pn(x-1,y+1)
P2(x,y)=Pn(x,y)-Pn(x,y+1)
P3(x,y)=Pn(x,y)-Pn(x+1,y+1)
P4(x,y)=Pn(x,y)-Pn(x-1,y)
P5(x,y)=Pn(x,y)-Pn(x+1,y)
P6(x,y)=Pn(x,y)-Pn(x-1,y-1)
P7(x,y)=Pn(x,y)-Pn(x,y-1)
P8(x,y)=Pn(x,y)-Pn(x+1,y-1)
then, vertically projecting the extracted image to a caption string to obtain a caption putting area;
s212, judging frame by frame whether the subtitle release area is the same as that of the previous advertisement video frame image, and if the subtitle release area is the same and the absolute value of the pixel value difference between the current advertisement video frame image and the previous advertisement video frame image is smaller than a specific value, classifying the current advertisement video frame image and the previous advertisement video frame image into the same subtitle frame group;
s213, calculating the simplification score of each advertisement video frame image in the same caption frame group by the following formula:
wherein K is the simplification score, fi is the pixel value of the ith sampling point in the advertisement video frame image, and m is the total number of sampling points of the advertisement video frame image;
s214, extracting the image of the subtitle release area of the advertisement video frame image with the lowest simplification score, removing the background, and then performing character recognition to obtain a subtitle text.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the constraint rule specifically includes:
according to the number of nodes M0, a distribution times threshold M1 is set for the text advertisement feature extraction task, a distribution times threshold M2 is set for the voice advertisement feature extraction task, and a distribution times threshold M3 is set for the image advertisement feature extraction task, wherein M1&lt;M2&lt;M3 and M1+M2+M3=M0;
The node randomly acquires one subtask, and if the distributed times of the corresponding subtask reach a threshold value, the node selects to acquire other subtasks;
after the node finishes any subtask, it broadcasts in the block chain, the distribution times thresholds of the other subtasks are updated in proportion, and the node that finished the task randomly acquires one of the other subtasks; if the distributed times of that subtask reach the threshold, the node selects to acquire the remaining subtask;
and after the node finishes two subtasks, it broadcasts in the block chain, the distribution times threshold of the remaining subtask is updated, and the node that finished the tasks acquires the remaining subtask until all subtasks are finished.
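The allocation loop described above can be sketched as follows. This is a minimal illustration only: the task names, the concrete threshold values and the random-choice policy are assumptions, and the proportional threshold update after a broadcast is omitted.

```python
import random

def assign_subtask(counts, thresholds, rng):
    """Randomly pick a subtask whose distribution count is below its
    threshold; return None when every threshold has been reached."""
    open_tasks = [t for t in thresholds if counts[t] < thresholds[t]]
    if not open_tasks:
        return None
    task = rng.choice(open_tasks)
    counts[task] += 1
    return task

# M1 < M2 < M3 and M1 + M2 + M3 = M0, as the constraint rule requires.
M0 = 12
thresholds = {"text": 3, "voice": 4, "image": 5}
assert sum(thresholds.values()) == M0

counts = {t: 0 for t in thresholds}
rng = random.Random(0)
assigned = [assign_subtask(counts, thresholds, rng) for _ in range(M0)]

# After M0 assignments every threshold is exactly met, and a further
# request is refused.
assert counts == thresholds
assert assign_subtask(counts, thresholds, rng) is None
```

Because M1+M2+M3=M0, the M0 assignments fill every threshold exactly regardless of the random order in which nodes pick tasks.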
The above-mentioned aspects and any possible implementation manners further provide an implementation manner, where the server classifies the user according to a preset classification rule, specifically including:
the user characteristic data comprises user registration information, user bullet screen historical data and user payment data, and the server performs primary classification on the users according to the user registration information and divides the users into new users and old users;
performing secondary classification on old users according to the user payment data, and dividing the old users into member users and non-member users;
calculating a user activity score G according to the bullet screen data and the user bullet screen historical data, performing three-level classification on the non-member users by comparing with an activity score threshold value, and dividing the non-member users into non-member high-activity users and non-member low-activity users, wherein the calculation formula of the user activity score G is as follows:
wherein x1 is the number of text barrages in the barrage data, y1 is the number of voice barrages in the barrage data, z1 is the number of picture barrages in the barrage data, x2 is the number of text barrages in the user barrage history data, y2 is the number of voice barrages in the user barrage history data, z2 is the number of picture barrages in the user barrage history data, a1 is the number of repetitions of the text barrage in the barrage data, b1 is the number of repetitions of the voice barrage in the barrage data, c1 is the number of repetitions of the picture barrage in the barrage data, a2 is the number of repetitions of the text barrage in the user barrage history data, b2 is the number of repetitions of the voice barrage in the user barrage history data, c2 is the number of repetitions of the picture barrage in the user barrage history data, A is the weight of the text barrage, B is the weight of the voice barrage, C is the weight of the picture barrage, W1 is a first correction parameter, W2 is a second correction parameter, W3 is a third correction parameter, and W4 is a fourth correction parameter;
the advertisement bullet screen output strategy specifically comprises the following steps:
outputting a character characteristic bullet screen, a voice characteristic bullet screen and an image characteristic bullet screen for a new user;
for member users, advertisement bullet screen output is not performed;
outputting a text characteristic bullet screen and a voice characteristic bullet screen aiming at the non-member high-activity users;
and outputting a text characteristic bullet screen and an image characteristic bullet screen for the non-member low-activity users.
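The three-level classification and the per-class output strategy can be sketched as follows; the function and class names are illustrative, and the activity score G is assumed to be precomputed by the formula above.

```python
def classify_user(is_new, is_member, activity_score, activity_threshold):
    """Cascade from S3: registration -> payment -> activity."""
    if is_new:
        return "new"
    if is_member:
        return "member"
    if activity_score > activity_threshold:
        return "non-member high-activity"
    return "non-member low-activity"

# Which feature barrages each class receives, per the strategy above.
STRATEGY = {
    "new": {"text", "voice", "image"},
    "member": set(),  # member users see no advertisement barrage
    "non-member high-activity": {"text", "voice"},
    "non-member low-activity": {"text", "image"},
}

assert classify_user(True, False, 0.0, 5.0) == "new"
assert classify_user(False, True, 9.0, 5.0) == "member"
assert STRATEGY[classify_user(False, False, 7.0, 5.0)] == {"text", "voice"}
assert STRATEGY[classify_user(False, False, 2.0, 5.0)] == {"text", "image"}
```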
The above-described aspect and any possible implementation manner further provide an implementation manner that the calculation formula of the live activity score H is as follows:
wherein α(t) is the number of users who have issued barrages at time t, γ(t) is the total number of users at time t, Gγ is the activity score of the γth user, T0(t) is the total live broadcast duration at time t, and T1(t) is the total length of anchor silence at time t.
The foregoing aspects and any possible implementations further provide an implementation, where predicting a next abrupt change frame of a live video through a prediction model specifically includes:
the histograms of the DC images of the video frames of the video data are acquired and normalized according to the following formula, and divided into training data and test data in a preset proportion:
x̂i=(xi−u)/σ;
wherein x̂i is the histogram of the DC image of the real video frame corresponding to the ith time sequence after normalization, xi is the histogram of the DC image of the real video frame corresponding to the ith time sequence, u is the mean value of the histograms of the DC images of the real video frames, and σ is the standard deviation;
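Given the definitions of u and σ, the normalization is a standard z-score; the sketch below assumes that form, (xi − u)/σ computed over all histogram bins jointly, which is an assumption rather than a formula reproduced verbatim from the text.

```python
from statistics import mean, pstdev

def zscore_normalize(histograms):
    """Normalize every histogram bin with the mean u and standard
    deviation sigma taken over all bin values: (x - u) / sigma."""
    flat = [v for h in histograms for v in h]
    u, sigma = mean(flat), pstdev(flat)
    return [[(v - u) / sigma for v in h] for h in histograms]

hists = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
normed = zscore_normalize(hists)
flat = [v for h in normed for v in h]

# After normalization the values have zero mean and unit deviation.
assert abs(mean(flat)) < 1e-9
assert abs(pstdev(flat) - 1.0) < 1e-9
```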
constructing a prediction model based on an LSTM neural network, and training it through the training data;
predicting the abrupt change state of the video frame at the t moment through the trained LSTM neural network so as to complete the prediction of the next abrupt change frame;
wherein the LSTM neural network comprises an input layer, an LSTM cell layer and an output layer; the LSTM cell layer contains a plurality of gates, including a forget gate f(t), an input gate i(t) and an output gate o(t); and the forward propagation of the LSTM neural network at each sequence index position proceeds as follows:
updating the forget gate output:
f(t)=σ(W(fx)x(t)+W(fh)h(t-1)+b(f));
update input gate two part output:
i(t)=σ(W(ix)x(t)+W(ih)h(t-1)+b(i)),g(t)=tanh(W(gx)x(t)+W(gh)h(t-1)+b(g));
updating the cell state:
c(t)=f(t)⊙c(t-1)+i(t)⊙g(t);
updating output gate output:
o(t)=σ(W(ox)x(t)+W(oh)h(t-1)+b(o)), h(t)=o(t)⊙tanh(c(t));
a temporal attention mechanism is then introduced:
the loss function for the LSTM neural network is defined as follows:
where σ denotes the sigmoid function, ⊙ denotes the Hadamard product, W(fx), W(fh), W(ix), W(ih), W(gx), W(gh), W(ox), W(oh), W(cc) and W(ch) represent weights, b(f), b(i), b(g) and b(o) represent offsets, c(t) is the cell state at time t, h(t) is the hidden state at time t, N is the number of training samples, yt is the real mutation information at time t, and ŷt is the mutation information predicted for time t, which is calculated through a formula with weight W(s) and offset b(s); T(n) is the number of positions selected for the nth mutation prediction training sample.
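The forward propagation above is the standard LSTM gate structure; it can be sketched for a single scalar unit as follows. The weights are illustrative only, and the temporal attention mechanism and loss function are omitted.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One forward step of the gate equations f, i, g, o, followed by the
    standard cell-state and hidden-state updates (the Hadamard product
    reduces to plain multiplication in this scalar sketch)."""
    f = sigmoid(W["fx"] * x + W["fh"] * h_prev + W["bf"])  # forget gate
    i = sigmoid(W["ix"] * x + W["ih"] * h_prev + W["bi"])  # input gate
    g = math.tanh(W["gx"] * x + W["gh"] * h_prev + W["bg"])
    o = sigmoid(W["ox"] * x + W["oh"] * h_prev + W["bo"])  # output gate
    c = f * c_prev + i * g        # cell state update
    h = o * math.tanh(c)          # hidden state
    return h, c

W = {k: 0.5 for k in ("fx", "fh", "ix", "ih", "gx", "gh", "ox", "oh")}
W.update(bf=0.0, bi=0.0, bg=0.0, bo=0.0)
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, W=W)

# The gates are sigmoids, so the hidden state stays bounded in (-1, 1).
assert -1.0 < h < 1.0
assert c != 0.0
```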
The above-described aspects and any possible implementation further provide an implementation in which the loss function is augmented for continuous learning by the following formula:
L(θ)=LB(θ)+Σi(λ/2)·Fi·(θi−θA,i)²;
where i indexes the neural network parameters, θi is the ith neural network parameter, θA,i is the corresponding weight learned on the previous task, LB(θ) is the loss function of the subsequent task, λ is the discount factor, and Fi is the Fisher information matrix.
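The symbols here (Fisher information Fi, previous-task weights θA,i, discount factor λ) match elastic weight consolidation (EWC); the sketch below assumes the standard EWC penalty L(θ)=LB(θ)+Σi(λ/2)·Fi·(θi−θA,i)², which is an assumption rather than a formula reproduced from the text.

```python
def ewc_loss(task_b_loss, theta, theta_a, fisher, lam):
    """New-task loss plus a quadratic penalty anchoring each parameter
    theta_i to its previous-task value theta_A,i, weighted by the Fisher
    information F_i and the discount factor lambda."""
    penalty = sum(lam / 2.0 * f * (t - ta) ** 2
                  for t, ta, f in zip(theta, theta_a, fisher))
    return task_b_loss + penalty

theta   = [1.0, 2.0]   # parameters after training on the new task
theta_a = [1.0, 0.0]   # parameters learned on the previous task
fisher  = [1.0, 0.5]   # importance of each parameter to the old task
loss = ewc_loss(task_b_loss=0.25, theta=theta, theta_a=theta_a,
                fisher=fisher, lam=2.0)

# Only the second parameter moved: penalty = (2/2) * 0.5 * (2-0)^2 = 2.0
assert loss == 0.25 + 2.0
```

Parameters that matter to the old task (large Fi) are thereby discouraged from drifting, which is how the penalty mitigates catastrophic forgetting.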
In a second aspect, an embodiment of the present invention provides a data processing system based on a block chain, where the system includes:
a server, the server comprising:
the receiving unit is used for acquiring advertisement video data to be launched;
the processing unit is used for preprocessing the advertisement video data, acquiring the live video data, classifying the users through the preset classification rules according to the barrage data, the user identifications and the user characteristic data, and generating advertisement barrage output strategies for different types of users; the processing unit is further used for calculating the live broadcast activity score H according to the video data and the barrage data, predicting the next abrupt change frame through the prediction model if the live broadcast activity score H is larger than the threshold value, and outputting advertisement barrages to different users according to the advertisement barrage insertion strategy if the prediction time interval is larger than the barrage display time;
the publishing unit is used for publishing the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement characteristic extraction task, a voice advertisement characteristic extraction task and an image advertisement characteristic extraction task;
the anchor end is used for live video, sending and receiving the barrage and performing data interaction with the server;
the client is used for watching the live video, sending and receiving the barrage and performing data interaction with the server;
a plurality of nodes, the nodes comprising:
the acquisition module is used for acquiring a subtask from the block chain according to a preset constraint rule;
the processing module is used for processing the text advertisement feature extraction task to generate a text feature barrage, processing the voice advertisement feature extraction task to generate a voice feature barrage, and processing the image advertisement feature extraction task to generate an image feature barrage;
the sending module is used for sending the character characteristic barrage, the voice characteristic barrage and the image characteristic barrage to the server;
a blockchain, the blockchain comprising:
the storage layer is used for recording the node data and the server data;
the interaction layer is used for carrying out data interaction with the nodes and the server;
the processing layer is used for the nodes to reach consensus, and for generating, transacting and recording reward blocks based on the constraint layer;
and the constraint layer is used for establishing a block chain constraint rule.
One of the above technical solutions has the following beneficial effects:
the method comprises the steps of firstly obtaining advertisement video data to be launched, generating an advertisement characteristic extraction task to be issued into a block chain, processing and generating a character characteristic barrage, a voice characteristic barrage and an image characteristic barrage, then generating advertisement barrage output strategies aiming at different users, finally predicting the next mutation frame of a direct broadcast video through a prediction model when a direct broadcast active score H is larger than a threshold value, and outputting the advertisement barrage of different users according to an advertisement barrage inter-cut strategy if a prediction time interval is larger than barrage display time. According to the embodiment of the invention, the advertisement video is inserted into the live video in the form of the bullet screen, so that the normal watching of the live video by a user is not interfered, and the advertisement can be more easily watched by the user through the bullet screen, so that the advertisement putting effect for the live video is better.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail and completely with reference to the following embodiments and accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Please refer to fig. 1, which is a flowchart illustrating a block chain-based data processing method according to an embodiment of the present invention, wherein the method includes the following steps:
s1, the server acquires advertisement video data to be launched, preprocesses the advertisement video data to generate an advertisement feature extraction task and releases the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement characteristic extraction task, a voice advertisement characteristic extraction task and an image advertisement characteristic extraction task;
s2, the node acquires a subtask from the block chain according to a preset constraint rule, the node generates a text feature bullet screen by processing the text advertisement feature extraction task, generates a voice feature bullet screen by processing the voice advertisement feature extraction task, and generates an image feature bullet screen by processing the image advertisement feature extraction task;
s3, the server acquires live video data, wherein the live video data comprise video data, barrage data, user identification and user characteristic data, and the server classifies the users through preset classification rules to generate advertisement barrage output strategies for different types of users;
s4, the server calculates live broadcast activity score H, if the live broadcast activity score H is larger than a threshold value, the next abrupt change frame of the live broadcast video is predicted through a prediction model, and if the prediction time interval is larger than the bullet screen display time, advertisement bullet screen output is conducted on different users according to an advertisement bullet screen insertion strategy.
In the method, advertisement video data to be launched is first acquired, an advertisement feature extraction task is generated and released to the block chain, and a character feature barrage, a voice feature barrage and an image feature barrage are generated through processing; advertisement barrage output strategies are then generated for different users; finally, when the live broadcast activity score H is larger than the threshold value, the next abrupt change frame of the live video is predicted through the prediction model, and if the prediction time interval is larger than the barrage display time, advertisement barrages are output to different users according to the advertisement barrage insertion strategy. According to the embodiment of the invention, the advertisement video is inserted into the live video in the form of a bullet screen, so that the user's normal watching of the live video is not disturbed, and the advertisement content is more easily noticed by the user through the bullet screen, so that the advertisement putting effect for live video is better. Multi-task synchronous processing through the block chain improves task processing efficiency, reduces the redundancy generated by the server, and lightens the server burden. Classifying users through preset classification rules to generate advertisement barrage output strategies for different types of users achieves an optimal advertisement putting effect. Meanwhile, predicting the next abrupt change frame of the live video through the prediction model makes it possible to anticipate whether a scene mutation occurs in the next segment of the live video, thereby avoiding interference with exciting live content due to advertisement delivery.
Referring to fig. 2, the preprocessing of the advertisement video data in S1 specifically includes:
s11, the server obtains the advertisement video data, and the audio separation is carried out on the advertisement video data to obtain the advertisement video and the advertisement audio;
and S12, performing frame extraction processing on the advertisement video to obtain an advertisement video frame image.
Further, fig. 3 is a schematic flow diagram of generating a text feature bullet screen according to an embodiment of the present invention, and referring to fig. 3, the generating of the text feature bullet screen by processing the text advertisement feature extraction task specifically includes:
s21, extracting subtitles of the advertisement video frame image;
s22, judging the repetition degree of the extracted subtitles, and generating a character characteristic bullet screen for the selected subtitles with the repetition times larger than a times threshold;
s23, if the repetition times are not more than the times threshold, respectively carrying out characteristic value Q judgment on the extracted subtitles, and generating a character characteristic bullet screen for the selected subtitles with the characteristic values more than the preset threshold, wherein the characteristic value Q is calculated through the following formula:
wherein f1 is the frequency of occurrence of the selected caption in the extracted captions, and f0 is the frequency of occurrence of the selected caption in the standard corpus.
According to the embodiment of the invention, the repetition degree of the extracted subtitles is judged; subtitles whose repetition times are larger than the times threshold are key advertisement words or slogans, so the selected subtitles with repetition times larger than the times threshold are generated into character characteristic bullet screens. If the repetition times are not larger than the times threshold, the characteristic value Q of each extracted subtitle is judged, and advertisement words or slogans are screened through the characteristic value Q and converted into character characteristic bullet screens. By this method, the accuracy of extracting advertisement words or slogans from the subtitles is improved.
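The formula for the characteristic value Q is not reproduced in the text; a simple form consistent with the definitions of f1 and f0 is the ratio Q=f1/f0 (corpus-relative salience). The sketch below assumes that form, with illustrative tokens:

```python
from collections import Counter

def q_value(phrase, extracted_tokens, corpus_tokens):
    """Assumed characteristic value Q = f1 / f0: the phrase's frequency
    among the extracted subtitles divided by its frequency in a
    reference corpus, so corpus-rare but ad-frequent phrases score high."""
    f1 = Counter(extracted_tokens)[phrase] / len(extracted_tokens)
    f0 = Counter(corpus_tokens)[phrase] / len(corpus_tokens)
    return f1 / f0 if f0 > 0 else float("inf")

extracted = ["buy", "now", "brand", "brand"]
corpus = ["the", "brand", "of", "a", "now", "now", "now", "now"]

# "brand" is rare in the corpus but frequent in the ad subtitles,
# so it scores higher than the corpus-common word "now".
assert q_value("brand", extracted, corpus) > q_value("now", extracted, corpus)
```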
Fig. 4 is a schematic flow chart of S21 according to an embodiment of the present invention, and referring to fig. 4, the S21 specifically includes:
s211, preprocessing each advertisement video frame image to extract a caption delivery area, wherein the preprocessing comprises the following steps:
performing brightness conversion on each advertisement video frame image by the following formula:
Pm(x,y)=0.3·CR(x,y)+0.59·CG(x,y)+0.11·CB(x,y);
wherein CR, CG and CB are the red, green and blue components, Pm(x,y) is the luminance image, and x and y are the pixel coordinates in each image;
then, noise processing is carried out through the following formula to obtain a denoised image Pn(x,y):
Then, an extracted image is created by the following formula:
P1(x,y)=Pn(x,y)-Pn(x-1,y+1)
P2(x,y)=Pn(x,y)-Pn(x,y+1)
P3(x,y)=Pn(x,y)-Pn(x+1,y+1)
P4(x,y)=Pn(x,y)-Pn(x-1,y)
P5(x,y)=Pn(x,y)-Pn(x+1,y)
P6(x,y)=Pn(x,y)-Pn(x-1,y-1)
P7(x,y)=Pn(x,y)-Pn(x,y-1)
P8(x,y)=Pn(x,y)-Pn(x+1,y-1)
that is, the extracted images, namely directional edge images, are created through a Roberts edge extraction method;
then, vertically projecting the extracted image to a caption string to obtain a caption putting area;
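The steps of S211 up to this point can be sketched for a single pixel as follows; the denoising step is omitted and the helper names are illustrative only.

```python
def luminance(r, g, b):
    """Pm = 0.3*CR + 0.59*CG + 0.11*CB for one pixel."""
    return 0.3 * r + 0.59 * g + 0.11 * b

def direction_edges(img, x, y):
    """The eight Roberts-style difference images P1..P8 at pixel (x, y):
    the pixel value minus each of its eight neighbours, in the order
    given by the formulas above (P1 uses (x-1, y+1), ... P8 uses (x+1, y-1))."""
    offsets = [(-1, 1), (0, 1), (1, 1), (-1, 0),
               (1, 0), (-1, -1), (0, -1), (1, -1)]
    return [img[y][x] - img[y + dy][x + dx] for dx, dy in offsets]

# A 3x3 grey image with a bright centre: every difference is positive,
# so the centre pixel responds as an edge in all eight directions.
img = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
diffs = direction_edges(img, 1, 1)
assert len(diffs) == 8
assert all(d == 190 for d in diffs)
assert abs(luminance(255, 255, 255) - 255.0) < 1e-9
```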
s212, judging frame by frame whether the subtitle release area is the same as that of the previous advertisement video frame image, and if the subtitle release area is the same and the absolute value of the pixel value difference between the current advertisement video frame image and the previous advertisement video frame image is smaller than a specific value, classifying the current advertisement video frame image and the previous advertisement video frame image into the same subtitle frame group;
s213, calculating the simplification score of each advertisement video frame image in the same caption frame group by the following formula:
wherein K is the simplification score, fi is the pixel value of the ith sampling point in the advertisement video frame image, and m is the total number of sampling points of the advertisement video frame image;
s214, extracting the image of the subtitle release area of the advertisement video frame image with the lowest simplification score, removing the background, and then performing character recognition to obtain a subtitle text.
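The grouping criterion of S212 can be sketched as follows. The simplification-score formula of S213 is not reproduced in the text, so each frame here is reduced to its subtitle region and a mean pixel value, both of which are illustrative stand-ins.

```python
def group_frames(frames, pixel_diff_limit):
    """Group consecutive frames that share the same subtitle region and
    whose mean-pixel difference stays below the limit (criterion of S212).
    Each frame is a (subtitle_region, mean_pixel_value) pair."""
    groups = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        same_region = cur[0] == prev[0]
        small_diff = abs(cur[1] - prev[1]) < pixel_diff_limit
        if same_region and small_diff:
            groups[-1].append(cur)   # same subtitle frame group
        else:
            groups.append([cur])     # start a new group
    return groups

frames = [("bottom", 100), ("bottom", 101), ("bottom", 150), ("top", 151)]
groups = group_frames(frames, pixel_diff_limit=10)

# Frame 3 differs too much in pixel value; frame 4 changes region.
assert [len(g) for g in groups] == [2, 1, 1]
```

Within each resulting group, S213–S214 would then pick the frame with the lowest simplification score for character recognition.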
According to the embodiment of the invention, the subtitles of the advertisement video frame image are extracted by the method to obtain the subtitle release area, the advertisement video frame image is grouped based on the subtitle release area, and the image of the subtitle release area of the advertisement video frame image with the lowest simplified score in the same subtitle frame group is extracted, so that a large amount of operations are avoided, and therefore, the subtitle extraction is efficient and the accuracy is high.
Further, fig. 5 is a schematic flow diagram of generating a speech feature barrage according to an embodiment of the present invention, and referring to fig. 5, the generating a speech feature barrage by processing a speech advertisement feature extraction task specifically includes:
s24, converting the advertisement audio into characters;
s25, judging the repetition degree of the characters, and generating a voice characteristic bullet screen from the voice corresponding to the selected characters with the repetition times larger than the times threshold;
s26, if the repetition times is not more than the times threshold, judging a characteristic value Q of the characters, and generating a voice characteristic bullet screen by the voice corresponding to the selected characters with the characteristic value more than the preset threshold, wherein the characteristic value Q is calculated by the following formula:
wherein f1 is the frequency of occurrence of the selected text in the converted text, and f0 is the frequency of occurrence of the selected text in the standard corpus.
Further, fig. 6 is a schematic flow chart of generating an image feature bullet screen according to an embodiment of the present invention, and referring to fig. 6, the generating an image feature bullet screen by processing an image advertisement feature extraction task specifically includes:
s27, acquiring an advertisement video frame image set;
s28, acquiring a gray image of the advertisement video frame image, and performing edge detection on the gray image by using a Prewitt edge detection operator to generate an object contour image of the advertisement video frame image;
s29, carrying out binarization processing on the object outline image to generate a binarized image, and carrying out morphological closed operation processing to generate a closed operation processing image of the advertisement video frame image;
s30, acquiring a plurality of initial curves of the closed operation processing image;
s31, substituting the information of the initial curve into a total energy functional E, wherein the formula of the total energy functional E is as follows:
AIl(b)=∫a∈Ωχ(a,b)dA, AEl(b)=∫a∈Ω̄χ(a,b)dA,
where f denotes the image intensity, a and b denote spatial variables, Ω denotes the interior of the initial curve, Ω̄ denotes the exterior of the initial curve, and χ represents a local circle centered at b;
s32, solving the minimum value of the total energy functional E through a steepest descent method to obtain an evolution equation of a level set function;
s33, continuously iterating the evolution equation by a finite difference method until the level set function reaches a stable state, and selecting points on a zero level set to form an object contour;
s34, extracting the object outline in the advertisement video frame image set, judging whether the objects are the same according to the object outline, selecting the same objects with the quantity larger than the quantity threshold value to segment the object image, and generating an image characteristic bullet screen based on the object image.
The method extracts the object contours in the advertisement video frame image set efficiently and accurately, judges whether objects are the same according to their contours, and selects identical objects whose count is greater than the count threshold for object image segmentation, so that the advertisement objects are accurately extracted for generating the image characteristic bullet screen.
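The edge-detection stage of steps S27-S29 can be illustrated with a minimal, dependency-free sketch. It applies the Prewitt operator to a grayscale image and binarizes the gradient magnitude; the morphological closing of S29 and the level-set steps S30-S33 are omitted for brevity, and the test image and threshold are illustrative:

```python
def prewitt_edges(img, thresh):
    """Prewitt edge detection followed by binarization (steps S28-S29).
    img is a 2D list of grayscale values; returns a binary edge map.
    A minimal sketch: real implementations would also smooth the input
    and apply a morphological closing to seal contour gaps, as in S29."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]   # vertical gradient kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if abs(gx) + abs(gy) >= thresh else 0
    return out

# A 6x6 image with a bright square: edges appear along the square's border,
# while the uniform interior and background stay zero.
img = [[0, 0, 0, 0, 0, 0],
       [0, 9, 9, 9, 9, 0],
       [0, 9, 9, 9, 9, 0],
       [0, 9, 9, 9, 9, 0],
       [0, 9, 9, 9, 9, 0],
       [0, 0, 0, 0, 0, 0]]
edges = prewitt_edges(img, thresh=18)
```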
The constraint rule of the invention specifically comprises:
according to the number of nodes M0, setting a distribution count threshold M1 for the text advertisement characteristic extraction task, a distribution count threshold M2 for the voice advertisement characteristic extraction task, and a distribution count threshold M3 for the image advertisement characteristic extraction task, wherein M1 < M2 < M3 and M1 + M2 + M3 = M0;
The node randomly acquires one subtask; if the number of times the corresponding subtask has been distributed has reached its threshold, the node selects another subtask to acquire;
after a node finishes any subtask, it broadcasts in the block chain, the distribution count thresholds of the other subtasks are updated according to their threshold proportions, and the node that finished the task randomly acquires one of the other subtasks; if the distribution count of that subtask has reached its threshold, the node acquires the remaining subtask;
and after a node finishes two subtasks, it broadcasts in the block chain, the distribution count threshold of the remaining subtask is updated, and the node acquires the remaining subtask, until the remaining subtask is finished.
In the embodiment of the invention, the block chain nodes perform synchronous multi-task processing by the above method, and different distribution thresholds are set for different tasks, so that numbers of nodes matched to each task's difficulty are allocated for processing and the tasks are completed as nearly simultaneously as possible; after any subtask is completed, the remaining unfinished tasks are redistributed, so that the computing power of each node is fully utilized and task completion efficiency is improved.
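The constraint rule above can be sketched as a small simulation. The subtask names and threshold values below are illustrative, chosen so that M1 < M2 < M3 and M1 + M2 + M3 = M0; the broadcast and proportional re-thresholding steps are omitted:

```python
import random

def assign_subtask(node_counts, thresholds, exclude=()):
    """Pick a random subtask whose distribution count is still below its
    threshold, per the constraint rule; increment its count and return it.
    Returns None when every eligible subtask is saturated. The task names
    and threshold values used below are illustrative, not the patent's."""
    open_tasks = [t for t in thresholds
                  if t not in exclude and node_counts[t] < thresholds[t]]
    if not open_tasks:
        return None
    task = random.choice(open_tasks)
    node_counts[task] += 1
    return task

# M0 = 12 nodes; harder tasks get more nodes: M1 < M2 < M3, M1+M2+M3 = M0.
thresholds = {"text": 2, "voice": 4, "image": 6}
counts = {"text": 0, "voice": 0, "image": 0}
assigned = [assign_subtask(counts, thresholds) for _ in range(12)]
```

After twelve assignments every subtask is distributed exactly up to its threshold, and a thirteenth request returns None until a finished subtask frees capacity for redistribution.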
Fig. 8 is a schematic flow chart of user classification and barrage output provided in an embodiment of the present invention, and referring to fig. 8, the server classifies users according to a preset classification rule, which specifically includes:
the user characteristic data comprises user registration information, user bullet screen historical data and user payment data, and the server performs primary classification on the users according to the user registration information and divides the users into new users and old users;
performing secondary classification on old users according to the user payment data, and dividing the old users into member users and non-member users;
calculating a user activity score G according to the bullet screen data and the user bullet screen historical data, performing three-level classification on the non-member users by comparing with an activity score threshold value, and dividing the non-member users into non-member high-activity users and non-member low-activity users, wherein the calculation formula of the user activity score G is as follows:
wherein x1 is the number of text bullet screens in the bullet screen data, y1 is the number of voice bullet screens in the bullet screen data, z1 is the number of picture bullet screens in the bullet screen data, x2 is the number of text bullet screens in the user bullet screen history data, y2 is the number of voice bullet screens in the user bullet screen history data, z2 is the number of picture bullet screens in the user bullet screen history data, a1 is the number of repetitions of text bullet screens in the bullet screen data, b1 is the number of repetitions of voice bullet screens in the bullet screen data, c1 is the number of repetitions of picture bullet screens in the bullet screen data, a2 is the number of repetitions of text bullet screens in the user bullet screen history data, b2 is the number of repetitions of voice bullet screens in the user bullet screen history data, c2 is the number of repetitions of picture bullet screens in the user bullet screen history data, A is the weight of the text bullet screen, B is the weight of the voice bullet screen, C is the weight of the picture bullet screen, W1 is a first correction parameter, W2 is a second correction parameter, W3 is a third correction parameter, and W4 is a fourth correction parameter;
the advertisement bullet screen output strategy specifically comprises the following steps:
outputting a character characteristic bullet screen, a voice characteristic bullet screen and an image characteristic bullet screen for a new user;
for member users, advertisement bullet screen output is not performed;
outputting a text characteristic bullet screen and a voice characteristic bullet screen aiming at the non-member high-activity users;
and outputting a text characteristic bullet screen and an image characteristic bullet screen for non-member low-activity users.
According to the embodiment of the invention, users are classified through the preset classification rules, and advertisement bullet screen output strategies for different users are generated according to the classification results, so as to maximize the advertising effect. For a new user, the text characteristic bullet screen, the voice characteristic bullet screen and the image characteristic bullet screen are all output, so that maximum advertisement exposure is achieved whether the user reads an unobtrusive text bullet screen, notices a conspicuous image bullet screen, or clicks a voice bullet screen out of curiosity. For member users, no advertisement bullet screen is output, giving members a better experience. For non-member low-activity users, the text characteristic bullet screen and the image characteristic bullet screen are output, so that the advertisement is exposed more directly. For non-member high-activity users, the text characteristic bullet screen and the voice characteristic bullet screen are output: such users actively browse text and voice bullet screens at high frequency while exchanging with the anchor and with other viewers, so full advertisement exposure can be achieved through these two bullet screen types while avoiding the impact of image bullet screens on bullet screen communication.
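The classification tiers and their output strategies can be summarized in a short dispatch function. The record field names are assumptions, and the activity score G is taken as a precomputed input rather than recomputed from the patent's formula:

```python
def barrage_strategy(user, activity_threshold=50.0):
    """Map a user record to an advertisement bullet screen output strategy
    following the classification above: new users get all three bullet
    screen types, members get none, and non-members split by activity
    score G. Field names and the threshold value are illustrative."""
    if user["is_new"]:
        return {"text", "voice", "image"}   # maximum exposure for new users
    if user["is_member"]:
        return set()                        # members see no ad bullet screens
    if user["activity_score"] > activity_threshold:
        return {"text", "voice"}            # non-member, high activity
    return {"text", "image"}                # non-member, low activity

strategy = barrage_strategy({"is_new": False, "is_member": False,
                             "activity_score": 72.5})
```

For the sample record above, a high-activity non-member, the function returns the text and voice bullet screen types.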
It should be noted that, the calculation formula of the live activity score H is as follows:
wherein α(t) is the number of users who have sent bullet screens at time t, γ(t) is the total number of users at time t, Gγ is the activity score of the γ-th user, T0(t) is the total live broadcast duration at time t, and T1(t) is the total duration of anchor silence at time t.
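The patent's exact formula for H is not reproduced in the source text; the sketch below is a hypothetical instantiation that combines the named quantities multiplicatively: the fraction of users sending bullet screens, the mean user activity score, and the fraction of non-silent anchor time. The combination is an assumption, not the patent's formula:

```python
def live_activity_score(alpha_t, gamma_t, user_scores, t0, t1):
    """Hypothetical instantiation of the live activity score H from the
    quantities the patent names: alpha_t barrage-sending users, gamma_t
    total users, user_scores the per-user activity scores G, t0 the total
    live duration and t1 the anchor silence duration. The multiplicative
    combination below is an assumption, not the patent's formula."""
    if gamma_t == 0 or t0 == 0:
        return 0.0
    barrage_ratio = alpha_t / gamma_t                # share of active senders
    mean_activity = (sum(user_scores) / len(user_scores)
                     if user_scores else 0.0)        # mean of G over users
    talk_ratio = (t0 - t1) / t0                      # non-silent fraction
    return barrage_ratio * mean_activity * talk_ratio

H = live_activity_score(alpha_t=40, gamma_t=100,
                        user_scores=[60, 80, 100], t0=3600, t1=360)
```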
Fig. 9 is a schematic flow chart of abrupt frame prediction provided in an embodiment of the present invention, and referring to fig. 9, the predicting a next abrupt frame of a live video by using a prediction model specifically includes:
acquiring histograms of the DC images of the video frames of the video data, normalizing them according to the formula x̄i = (xi − u)/σ, and dividing them into training data and test data in proportion,
where x̄i is the normalized histogram of the DC image of the real video frame corresponding to the i-th time sequence, xi is the histogram of the DC image of the real video frame corresponding to the i-th time sequence, u is the mean value of the histograms of the DC images of the real video frames, and σ is the standard deviation;
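This normalization is the standard z-score transform. The sketch below applies it per histogram bin over all frames, which is an assumption (the patent states only the scalar form with mean u and standard deviation σ):

```python
import math

def normalize_histograms(hists):
    """Z-score normalization x̄_i = (x_i − u) / σ applied per histogram bin
    over all real video frames; u is the mean and σ the standard deviation
    of that bin. A minimal sketch of the LSTM data-preparation step;
    bin-wise statistics are an assumption beyond the patent's scalar form."""
    n, bins = len(hists), len(hists[0])
    out = [[0.0] * bins for _ in range(n)]
    for b in range(bins):
        col = [h[b] for h in hists]
        u = sum(col) / n
        sigma = math.sqrt(sum((v - u) ** 2 for v in col) / n) or 1.0
        for i in range(n):
            out[i][b] = (hists[i][b] - u) / sigma
    return out

# Three frames, two bins; bin 0 varies, bin 1 is constant (σ guarded to 1).
hists = [[10, 0], [20, 0], [30, 0]]
normed = normalize_histograms(hists)
```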
constructing a prediction model based on an LSTM neural network, and training it with the training data;
predicting the mutation state of the video frame at the time t through the trained LSTM neural network, and determining the mutation state as a mutation frame when the mutation state meets the requirement, so as to complete the prediction of the next mutation frame;
wherein the LSTM neural network comprises an input layer, an LSTM cell layer and an output layer; the LSTM cell layer contains a plurality of gates, including a forget gate f(t), an input gate i(t) and an output gate o(t); and the forward propagation process of the LSTM neural network at each sequence index position is:
updating the forget gate output:
f(t)=σ(W(fx)x(t)+W(fh)h(t-1)+b(f));
update input gate two part output:
i(t)=σ(W(ix)x(t)+W(ih)h(t-1)+b(i)),g(t)=tanh(W(gx)x(t)+W(gh)h(t-1)+b(g));
updating the cell state:
c(t) = c(t-1)⊙f(t) + i(t)⊙g(t);
updating the output gate output:
o(t)=σ(W(ox)x(t)+W(oh)h(t-1)+b(o)), h(t)=o(t)⊙tanh(c(t));
a temporal attention mechanism is then introduced:
the loss function for the LSTM neural network is defined as follows:
where σ denotes the sigmoid function, ⊙ denotes the Hadamard product, W(fx), W(fh), W(ix), W(ih), W(gx), W(gh), W(ox), W(oh), W(cc) and W(ch) represent weights, b(f), b(i), b(g) and b(o) represent offsets, c(t) is the cell state at time t, h(t) is the hidden state at time t, N is the number of training samples, yt is the real mutation information at time t, and ŷt is the mutation information predicted for time t, obtained through a calculation in which W(s) represents a weight and b(s) represents an offset, and T(n) is the number of prediction positions selected for the n-th mutation-prediction training sample.
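The forward propagation above can be illustrated with a single scalar LSTM cell step. Vector weights are collapsed to scalars for brevity, the attention terms W(cc)/W(ch) and the output-layer W(s)/b(s) are omitted, and the weight values are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One scalar LSTM cell step implementing the gate equations above:
    forget gate f(t), input gate i(t), candidate g(t), cell-state update,
    output gate o(t) and hidden state h(t). W and b hold the weights
    W(fx)...W(oh) and offsets b(f)...b(o); scalars keep the sketch minimal
    (real cells are vector-valued and use the Hadamard product)."""
    f = sigmoid(W["fx"] * x + W["fh"] * h_prev + b["f"])    # forget gate
    i = sigmoid(W["ix"] * x + W["ih"] * h_prev + b["i"])    # input gate
    g = math.tanh(W["gx"] * x + W["gh"] * h_prev + b["g"])  # candidate state
    c = f * c_prev + i * g                                  # cell state
    o = sigmoid(W["ox"] * x + W["oh"] * h_prev + b["o"])    # output gate
    h = o * math.tanh(c)                                    # hidden state
    return h, c

# Illustrative weights; all 0.5 with zero offsets and zero initial state.
W = {k: 0.5 for k in ("fx", "fh", "ix", "ih", "gx", "gh", "ox", "oh")}
b = {k: 0.0 for k in ("f", "i", "g", "o")}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, W=W, b=b)
```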
In addition, the present invention extends the loss function for continual learning through the following formula:
L(θ) = LB(θ) + Σi (λ/2) Fi (θi − θA,i)²,
where i indexes the neural network parameters, θ is the neural network parameter set, θA,i is the weight of the i-th parameter on the previous task, LB(θ) is the loss function of the subsequent task, λ is the discount factor weighting the previous task, and Fi is the Fisher information matrix.
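This extension matches the form of an elastic-weight-consolidation-style penalty over the variables the patent names; the sketch below assumes the quadratic form L(θ) = LB(θ) + Σi (λ/2) Fi (θi − θA,i)², with illustrative values and the Fisher matrix reduced to its diagonal:

```python
def ewc_loss(loss_b, theta, theta_a, fisher, lam):
    """Continual-learning loss L(θ) = LB(θ) + Σi (λ/2) Fi (θi − θA,i)²:
    the later task's loss plus a quadratic penalty that anchors each
    parameter θi to its previous-task value θA,i, scaled by the diagonal
    Fisher information Fi. The quadratic (EWC-style) form is assumed."""
    penalty = sum(lam / 2.0 * f * (t - ta) ** 2
                  for f, t, ta in zip(fisher, theta, theta_a))
    return loss_b + penalty

# Only the first parameter has drifted from its previous-task value.
total = ewc_loss(loss_b=1.0, theta=[0.5, 2.0], theta_a=[0.0, 2.0],
                 fisher=[4.0, 10.0], lam=1.0)
```

Parameters with large Fi are penalized more for drifting, which preserves what the prediction model learned on earlier mutation-frame data.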
According to the embodiment of the invention, the next mutation frame of the live video is predicted by the prediction model based on the time sequence, the previous residual information can be fully utilized to improve the prediction performance, the prediction is accurate, and the model robustness is good.
The embodiment of the invention further provides an embodiment of a device for realizing the steps and the method in the embodiment of the method.
Please refer to fig. 10, which is a block chain-based data processing system according to an embodiment of the present invention, the system comprising: a server, an anchor end, a user end, a plurality of nodes and a block chain.
Fig. 11 is a schematic block diagram of a server according to an embodiment of the present invention, and referring to fig. 11, the server includes:
the receiving unit is used for acquiring advertisement video data to be launched;
the processing unit is used for preprocessing the advertisement video data, acquiring live video data, classifying users through a deep learning model according to the bullet screen data, the user identifications and the user feature data, and generating advertisement bullet screen output strategies for different types of users; the processing unit is further used for calculating the live activity score H according to the video data and the bullet screen data, predicting the next abrupt frame through the prediction model if the live activity score H is greater than the threshold, and outputting advertisement bullet screens to different users according to the advertisement bullet screen insertion strategy if the prediction time interval is greater than the bullet screen display time;
the publishing unit is used for publishing the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement characteristic extraction task, a voice advertisement characteristic extraction task and an image advertisement characteristic extraction task;
the anchor end is used for live video, sending and receiving the barrage and performing data interaction with the server;
the client is used for watching the live video, sending and receiving the barrage and performing data interaction with the server;
fig. 12 is a schematic block diagram of a node according to an embodiment of the present invention, and referring to fig. 12, the node includes:
the acquisition module is used for acquiring a subtask from the block chain according to a preset constraint rule;
the processing module is used for processing the text advertisement feature extraction task to generate a text feature barrage, processing the voice advertisement feature extraction task to generate a voice feature barrage, and processing the image advertisement feature extraction task to generate an image feature barrage;
and the sending module is used for sending the character characteristic bullet screen, the voice characteristic bullet screen and the image characteristic bullet screen to the server.
Fig. 13 is a system architecture diagram of a blockchain according to an embodiment of the present invention, and referring to fig. 13, the blockchain includes:
the storage layer is used for recording the node data and the server data;
the interaction layer is used for carrying out data interaction with the nodes and the server;
the processing layer is used for the nodes to reach consensus and generating, trading and recording the reward blocks based on the constraint layer;
and the constraint layer is used for establishing a block chain constraint rule.
Since each unit module in the embodiment can execute the method shown in fig. 1, reference may be made to the related description of fig. 1 for the parts of the embodiment that are not described in detail. FIG. 14 is a hardware schematic of a system according to an embodiment of the invention. Referring to fig. 14, at the hardware level, the system includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the system may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (peripheral component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 14, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
In a possible implementation manner, the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs the computer program, and the corresponding computer program can also be acquired from other equipment so as to form the corresponding apparatus on a logic level. And the processor executes the program stored in the memory so as to realize the advertisement insertion method provided by any embodiment of the invention through the executed program.
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a system comprising a plurality of application programs, enable the system to perform the advertisement insertion method provided in any of the embodiments of the present invention.
The method performed by the system according to the embodiment of the present invention may be implemented in or by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by a system including a plurality of application programs, enable the system to perform the system operation method provided in any of the embodiments of the present invention.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer.
For convenience of description, the above devices are described as being divided into various units or modules by function, respectively. Of course, the functionality of the units or modules may be implemented in the same one or more software and/or hardware when implementing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.