CN111754267A - Data processing method and system based on blockchain - Google Patents

Data processing method and system based on blockchain

Info

Publication number
CN111754267A
CN111754267A (application CN202010603381.8A)
Authority
CN
China
Prior art keywords
advertisement
bullet screen
image
data
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010603381.8A
Other languages
Chinese (zh)
Other versions
CN111754267B (en)
Inventor
邢文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dtct Data Technology Co., Ltd.
Original Assignee
Bengbu Keruida Machinery Design Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bengbu Keruida Machinery Design Co., Ltd.
Priority to CN202010603381.8A
Publication of CN111754267A
Application granted
Publication of CN111754267B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06Q 30/0251: Targeted advertisements
    • G06Q 30/0255: Targeted advertisements based on user history
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06Q 30/0251: Targeted advertisements
    • G06Q 30/0269: Targeted advertisements based on user profile or attribute
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06Q 30/0277: Online advertisement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/635: Overlay text, e.g. embedded captions in a TV program
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/2355: Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25866: Management of end-user data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/262: Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/488: Data services, e.g. news ticker
    • H04N 21/4884: Data services, e.g. news ticker, for displaying subtitles
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/812: Monomedia components thereof involving advertisement data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a blockchain-based data processing method and system. The method comprises: acquiring the advertisement video data to be delivered, generating an advertisement feature extraction task, publishing the task to the blockchain, and processing it to generate a text feature bullet screen, a voice feature bullet screen and an image feature bullet screen; generating advertisement bullet screen output strategies for different types of users; and calculating a live activity score H. If the live activity score H is greater than a threshold, the next abrupt-change frame of the live video is predicted by a prediction model, and if the predicted time interval is greater than the bullet screen display time, advertisement bullet screens are output to the different users according to an advertisement bullet screen insertion strategy. Because the advertisement video is inserted into the live video in the form of bullet screens (danmaku), the user's normal viewing of the live broadcast is not disturbed, and the advertisement content is placed where users are more likely to notice and browse it, so the advertisement delivery effect for live video is better.

Description

Data processing method and system based on blockchain
[ technical field ]
The present invention relates to the field of blockchain technology, and in particular, to a blockchain-based data processing method and system.
[ background of the invention ]
At present, the live video streaming industry is booming. Live video creates a strong sense of presence through the real, real-time, interactive communication between the anchor and the audience; it attracts viewers' attention and achieves a deep and lasting propagation effect. The live streaming industry therefore presents huge business opportunities, and advertisement delivery is an important problem facing it.
In current live video, the form of advertisement delivery is limited: advertisement videos are played in pop-up windows, which interferes with the user's normal viewing of the live broadcast and easily provokes user resentment; moreover, users rarely watch the advertisement content, so the effect of the advertisement video is poor and the advertisement delivery effect is relatively poor.
[ summary of the invention ]
In view of this, embodiments of the present invention provide a blockchain-based data processing method and system.
In a first aspect, an embodiment of the present invention provides a blockchain-based data processing method, where the method includes:
S1, the server acquires advertisement video data to be delivered, preprocesses the advertisement video data, generates an advertisement feature extraction task and publishes it to the blockchain, where the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;
S2, a node acquires a subtask from the blockchain according to a preset constraint rule; the node generates a text feature bullet screen by processing the text advertisement feature extraction task, generates a voice feature bullet screen by processing the voice advertisement feature extraction task, and generates an image feature bullet screen by processing the image advertisement feature extraction task;
S3, the server acquires live video data, where the live video data comprises video data, bullet screen data, user identifiers and user feature data; the server classifies users according to preset classification rules and generates advertisement bullet screen output strategies for different types of users;
S4, the server calculates a live activity score H; if the live activity score H is greater than a threshold, the next abrupt-change frame of the live video is predicted by a prediction model, and if the predicted time interval is greater than the bullet screen display time, advertisement bullet screens are output to different users according to an advertisement bullet screen insertion strategy.
As for the foregoing aspect and any possible implementation manner, there is further provided an implementation manner, where the preprocessing of the advertisement video data in S1 specifically includes:
S11, the server acquires the advertisement video data and performs audio separation on it to obtain the advertisement video and the advertisement audio;
S12, frame extraction is performed on the advertisement video to obtain advertisement video frame images.
As to the above-mentioned aspect and any possible implementation manner, an implementation manner is further provided, where generating the text feature bullet screen by processing the text advertisement feature extraction task specifically includes:
S21, extracting subtitles from the advertisement video frame images;
S22, judging the repetition degree of the extracted subtitles, and generating a text feature bullet screen from the selected subtitles whose repetition count is greater than a count threshold;
S23, if the repetition count is not greater than the count threshold, computing the characteristic value Q of each extracted subtitle, and generating a text feature bullet screen from the selected subtitles whose characteristic value is greater than a preset threshold, where the characteristic value Q is calculated by the following formula:
[formula shown as an image in the original]
where f_1 is the frequency of occurrence of the selected subtitle among the extracted subtitles, and f_0 is the frequency of occurrence of the selected subtitle in a standard corpus;
generating the voice feature bullet screen by processing the voice advertisement feature extraction task specifically includes:
S24, converting the advertisement audio into text;
S25, judging the repetition degree of the text, and generating a voice feature bullet screen from the speech corresponding to the selected text whose repetition count is greater than the count threshold;
S26, if the repetition count is not greater than the count threshold, computing the characteristic value Q of the text, and generating a voice feature bullet screen from the speech corresponding to the selected text whose characteristic value is greater than the preset threshold, where the characteristic value Q is calculated by the following formula:
[formula shown as an image in the original]
where f_1 is the frequency of occurrence of the selected text in the converted text, and f_0 is the frequency of occurrence of the selected text in the standard corpus;
generating the image feature bullet screen by processing the image advertisement feature extraction task specifically includes:
S27, acquiring the advertisement video frame image set;
S28, acquiring a grayscale image of the advertisement video frame image, and performing edge detection on the grayscale image with the Prewitt edge detection operator to generate an object contour image of the advertisement video frame image;
S29, binarizing the object contour image to generate a binarized image, and applying a morphological closing operation to generate a closed image of the advertisement video frame image;
S30, acquiring a plurality of initial curves of the closed image;
S31, substituting the information of the initial curves into a total energy functional E, where the total energy functional E has the form:
[formula shown as an image in the original]
A_Il(b) = ∫_{a∈Ω} χ(a,b) dA,  A_El(b) = ∫_{a∈Ω̄} χ(a,b) dA,
where f denotes the image intensity, a and b denote spatial variables, Ω denotes the interior of the initial curve, Ω̄ denotes the exterior of the initial curve, and χ denotes a local circle centered at b;
S32, solving for the minimum of the total energy functional E by the steepest descent method to obtain the evolution equation of the level set function;
S33, iterating the evolution equation with the finite difference method until the level set function reaches a stable state, and selecting points on the zero level set to form the object contour;
S34, extracting the object contours in the advertisement video frame image set, judging whether objects are the same according to their contours, selecting same objects whose count is greater than a count threshold, segmenting the object images, and generating an image feature bullet screen based on the object images.
As to the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, where S21 specifically includes:
S211, preprocessing each advertisement video frame image to extract the subtitle area, where the preprocessing includes:
performing brightness conversion on each advertisement video frame image by the following formula:
P_m(x,y) = 0.3·C_R(x,y) + 0.59·C_G(x,y) + 0.11·C_B(x,y);
where C_R, C_G and C_B are the red, green and blue components, P_m(x,y) is the luminance image, and x and y are pixel coordinates in each image;
then, noise processing is performed by the following formula to obtain the denoised image P_n(x,y):
[formula shown as an image in the original]
then, the extracted images are created by the following formulas:
P_1(x,y) = P_n(x,y) - P_n(x-1, y+1)
P_2(x,y) = P_n(x,y) - P_n(x, y+1)
P_3(x,y) = P_n(x,y) - P_n(x+1, y+1)
P_4(x,y) = P_n(x,y) - P_n(x-1, y)
P_5(x,y) = P_n(x,y) - P_n(x+1, y)
P_6(x,y) = P_n(x,y) - P_n(x-1, y-1)
P_7(x,y) = P_n(x,y) - P_n(x, y-1)
P_8(x,y) = P_n(x,y) - P_n(x+1, y-1)
then, vertical projection is applied to the extracted images to locate the subtitle string and obtain the subtitle area;
S212, judging frame by frame whether the subtitle area is the same as that of the previous advertisement video frame image; if the subtitle areas are the same and the absolute value of the pixel-value difference between the current and previous advertisement video frame images is smaller than a specific value, the current and previous advertisement video frame images are classified into the same subtitle frame group;
S213, calculating the simplification score of each advertisement video frame image in the same subtitle frame group by the following formula:
[formula shown as an image in the original]
where K is the simplification score, f_i is the pixel value of the i-th sampling point in the advertisement video frame image, and m is the total number of sampling points of the advertisement video frame image;
S214, extracting the subtitle-area image of the advertisement video frame image with the lowest simplification score, removing the background, and performing character recognition to obtain the subtitle text.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the constraint rule specifically includes:
according to the number of nodes M_0, setting a distribution count threshold M_1 for the text advertisement feature extraction task, a distribution count threshold M_2 for the voice advertisement feature extraction task, and a distribution count threshold M_3 for the image advertisement feature extraction task, where M_1 < M_2 < M_3 and M_1 + M_2 + M_3 = M_0;
a node randomly acquires one subtask; if the distribution count of the corresponding subtask has reached its threshold, the node acquires one of the other subtasks instead;
after a node finishes any subtask, it broadcasts this in the blockchain, the distribution count thresholds of the other subtasks are updated proportionally, and the finished node randomly acquires one of the other subtasks; if the distribution count of that subtask has reached its threshold, the node acquires the remaining subtask;
after a node finishes two subtasks, it broadcasts this in the blockchain, the distribution count threshold of the remaining subtask is updated, and the finished node acquires the remaining subtask until it is finished.
The above-mentioned aspects and any possible implementation manners further provide an implementation manner, where the server classifies users according to preset classification rules, specifically including:
the user feature data comprises user registration information, user bullet screen history data and user payment data; the server performs a primary classification of users according to the user registration information, dividing them into new users and old users;
performing a secondary classification of old users according to the user payment data, dividing them into member users and non-member users;
calculating a user activity score G from the bullet screen data and the user bullet screen history data, performing a tertiary classification of non-member users by comparison with an activity score threshold, and dividing them into non-member high-activity users and non-member low-activity users, where the user activity score G is calculated as follows:
[formula shown as an image in the original]
where x_1 is the number of text bullet screens in the bullet screen data, y_1 the number of voice bullet screens in the bullet screen data, z_1 the number of picture bullet screens in the bullet screen data, x_2 the number of text bullet screens in the user bullet screen history data, y_2 the number of voice bullet screens in the user bullet screen history data, z_2 the number of picture bullet screens in the user bullet screen history data, a_1 the repetition count of text bullet screens in the bullet screen data, b_1 the repetition count of voice bullet screens in the bullet screen data, c_1 the repetition count of picture bullet screens in the bullet screen data, a_2 the repetition count of text bullet screens in the user bullet screen history data, b_2 the repetition count of voice bullet screens in the user bullet screen history data, c_2 the repetition count of picture bullet screens in the user bullet screen history data, A the weight of text bullet screens, B the weight of voice bullet screens, C the weight of picture bullet screens, and W_1, W_2, W_3 and W_4 the first to fourth correction parameters;
the advertisement bullet screen output strategy specifically includes:
outputting the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen to new users;
outputting no advertisement bullet screens to member users;
outputting the text feature bullet screen and the voice feature bullet screen to non-member high-activity users;
outputting the text feature bullet screen and the image feature bullet screen to non-member low-activity users.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the live activity score H is calculated as follows:
[formula shown as an image in the original]
where α_(t) is the number of users who have sent bullet screens at time t, γ_(t) is the total number of users at time t, G_γ is the activity score of the γ-th user, T_0(t) is the total live broadcast duration at time t, and T_1(t) is the total duration of anchor silence at time t.
The foregoing aspects and any possible implementations further provide an implementation, where predicting the next abrupt-change frame of the live video through the prediction model specifically includes:
acquiring the histograms of the DC images of the video frames of the video data, normalizing them according to the following formula, and dividing them proportionally into training data and test data:
x̃_i = (x_i - u) / σ
where x̃_i is the normalized histogram of the DC image of the real video frame corresponding to the i-th time step, x_i is the histogram of the DC image of the real video frame corresponding to the i-th time step, u is the mean of the histograms of the DC images of the real video frames, and σ is the standard deviation;
constructing the prediction model based on an LSTM neural network and training it with the training data;
predicting the abrupt-change state of the video frame at time t through the trained LSTM neural network, thereby completing the prediction of the next abrupt-change frame;
where the LSTM neural network comprises an input layer, an LSTM cell layer and an output layer; the LSTM cell layer contains a plurality of gates, including a forget gate f^(t), an input gate i^(t) and an output gate o^(t); and the forward propagation of the LSTM neural network at each sequence index position is:
updating the forget gate output:
f^(t) = σ(W^(fx)·x^(t) + W^(fh)·h^(t-1) + b^(f));
updating the two parts of the input gate output:
i^(t) = σ(W^(ix)·x^(t) + W^(ih)·h^(t-1) + b^(i)), g^(t) = tanh(W^(gx)·x^(t) + W^(gh)·h^(t-1) + b^(g));
updating the cell state:
c^(t) = f^(t) ⊙ c^(t-1) + i^(t) ⊙ g^(t);
updating the output gate output:
o^(t) = σ(W^(ox)·x^(t) + W^(oh)·h^(t-1) + b^(o)),
h^(t) = o^(t) ⊙ tanh(c^(t));
introducing a time attention mechanism:
[formula shown as an image in the original]
the loss function of the LSTM neural network is defined as follows:
[formula shown as an image in the original]
where σ denotes the sigmoid function, ⊙ denotes the Hadamard product, W^(fx), W^(fh), W^(ix), W^(ih), W^(gx), W^(gh), W^(ox), W^(oh), W^(cc) and W^(ch) denote weights, b^(f), b^(i), b^(g), b^(o) and the remaining bias term denote biases, c^(t) is the cell state at time t, h^(t) is the hidden state at time t, N is the number of training samples, y_t is the real abrupt-change information at time t, and ŷ_t is the predicted abrupt-change information at time t, computed through a mapping (shown as an image in the original) in which W^(s) denotes a weight, b^(s) denotes a bias, and T^(n) is the number of positions selected from the training sample for the n-th abrupt-change prediction.
The above-described aspects and any possible implementation further provide an implementation in which the loss function is extended for continual learning by the following formula:
L(θ) = L_B(θ) + Σ_i (λ/2)·F_i·(θ_i - θ_{A,i})²
where i indexes the neural network parameters, θ_i is the neural network parameter set, θ_{A,i} is the corresponding weight from the previous task, L_B(θ) is the loss function of the later task, λ is a discount factor, and F_i is the Fisher information matrix.
In a second aspect, an embodiment of the present invention provides a blockchain-based data processing system, where the system includes:
a server, the server comprising:
the receiving unit, used for acquiring the advertisement video data to be delivered;
the processing unit, used for preprocessing the advertisement video data; for acquiring the live video data and classifying users through a deep learning model according to the bullet screen data, user identifiers and user feature data, generating advertisement bullet screen output strategies for different types of users; and for calculating the live activity score H according to the video data and the bullet screen data, predicting the next abrupt-change frame through the prediction model if the live activity score H is greater than the threshold, and outputting advertisement bullet screens to different users according to the advertisement bullet screen insertion strategy if the predicted time interval is greater than the bullet screen display time;
the publishing unit, used for publishing the advertisement feature extraction task to the blockchain, where the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;
the anchor end, used for live video streaming, sending and receiving bullet screens, and data interaction with the server;
the client, used for watching the live video, sending and receiving bullet screens, and data interaction with the server;
a plurality of nodes, the nodes comprising:
the acquisition module, used for acquiring a subtask from the blockchain according to the preset constraint rule;
the processing module, used for processing the text advertisement feature extraction task to generate the text feature bullet screen, processing the voice advertisement feature extraction task to generate the voice feature bullet screen, and processing the image advertisement feature extraction task to generate the image feature bullet screen;
the sending module, used for sending the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen to the server;
a blockchain, the blockchain comprising:
the storage layer, used for recording node data and server data;
the interaction layer, used for data interaction with the nodes and the server;
the processing layer, used for the nodes to reach consensus, and for generating, trading and recording reward blocks based on the constraint layer;
the constraint layer, used for establishing the blockchain constraint rules.
One of the above technical solutions has the following beneficial effects:
the method comprises the steps of firstly obtaining advertisement video data to be launched, generating an advertisement characteristic extraction task to be issued into a block chain, processing and generating a character characteristic barrage, a voice characteristic barrage and an image characteristic barrage, then generating advertisement barrage output strategies aiming at different users, finally predicting the next mutation frame of a direct broadcast video through a prediction model when a direct broadcast active score H is larger than a threshold value, and outputting the advertisement barrage of different users according to an advertisement barrage inter-cut strategy if a prediction time interval is larger than barrage display time. According to the embodiment of the invention, the advertisement video is inserted into the live video in the form of the bullet screen, so that the normal watching of the live video by a user is not interfered, and the advertisement can be more easily watched by the user through the bullet screen, so that the advertisement putting effect for the live video is better.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flowchart of a data processing method based on a block chain according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the preprocessing according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of generating a text characteristic bullet screen according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of S21 according to the embodiment of the present invention;
fig. 5 is a schematic flow chart of generating a speech feature bullet screen according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of generating an image characteristic bullet screen according to an embodiment of the present invention;
FIG. 7 is a flow chart illustrating constraint rules provided by an embodiment of the present invention;
fig. 8 is a schematic flowchart of user classification and bullet screen output according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a method for abrupt frame prediction according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a system according to an embodiment of the present invention;
FIG. 11 is a block diagram of a server according to an embodiment of the present invention;
fig. 12 is a block diagram of a node according to an embodiment of the present invention;
FIG. 13 is a system architecture diagram of a blockchain according to an embodiment of the present invention;
fig. 14 is a hardware diagram of a system according to an embodiment of the present invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail and completely with reference to the following embodiments and accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Please refer to fig. 1, which is a flowchart illustrating a blockchain-based data processing method according to an embodiment of the present invention, where the method includes the following steps:
S1, the server acquires advertisement video data to be delivered, preprocesses the advertisement video data, generates an advertisement feature extraction task and publishes it to the blockchain, where the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;
S2, a node acquires a subtask from the blockchain according to a preset constraint rule; the node generates a text feature bullet screen by processing the text advertisement feature extraction task, generates a voice feature bullet screen by processing the voice advertisement feature extraction task, and generates an image feature bullet screen by processing the image advertisement feature extraction task;
S3, the server acquires live video data, where the live video data comprises video data, bullet screen data, user identifiers and user feature data; the server classifies users according to preset classification rules and generates advertisement bullet screen output strategies for different types of users;
S4, the server calculates a live activity score H; if the live activity score H is greater than a threshold, the next abrupt-change frame of the live video is predicted by a prediction model, and if the predicted time interval is greater than the bullet screen display time, advertisement bullet screens are output to different users according to an advertisement bullet screen insertion strategy.
The method first acquires the advertisement video data to be delivered and generates an advertisement feature extraction task that is published to the blockchain, where it is processed to generate the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen; it then generates advertisement bullet screen output strategies for different users; finally, when the live activity score H is greater than the threshold, the next abrupt-change frame of the live video is predicted by the prediction model, and if the predicted time interval is greater than the bullet screen display time, advertisement bullet screens are output to different users according to the advertisement bullet screen insertion strategy. In the embodiment of the invention, the advertisement video is converted into bullet screens inserted into the live video, so the user's normal viewing of the live broadcast is not disturbed, and the advertisement content is placed where users are more likely to notice and browse it, so the advertisement delivery effect for live video is better. Multi-task synchronous processing through the blockchain improves task processing efficiency, reduces the redundancy generated by the server, and lightens the server load. Classifying users through preset classification rules to generate advertisement bullet screen output strategies for different types of users achieves the best advertisement delivery effect. At the same time, predicting the next abrupt-change frame of the live video through the prediction model makes it possible to foresee whether the next segment of the live video contains an abrupt scene change, so that advertisement delivery does not interfere with exciting live content.
Referring to fig. 2, the preprocessing of the advertisement video data in S1 specifically includes:
S11, the server acquires the advertisement video data and performs audio separation on it to obtain the advertisement video and the advertisement audio;
S12, frame extraction is performed on the advertisement video to obtain advertisement video frame images.
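As an illustration of the preprocessing in S11 and S12, the following minimal Python sketch separates the audio track and extracts frame images with ffmpeg. The file names, the PCM codec and the one-frame-per-second sampling rate are assumptions for the example, not values given by the patent.

```python
# Sketch of S11-S12: audio separation and frame extraction via ffmpeg.
# File names, codec and sampling rate are illustrative assumptions.
import subprocess

def preprocess_ad_video(video_path: str) -> None:
    # S11: drop the video stream (-vn) to obtain the advertisement audio
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-acodec", "pcm_s16le", "ad_audio.wav"],
        check=True,
    )
    # S12: sample the advertisement video at one frame per second
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vf", "fps=1", "ad_frame_%04d.png"],
        check=True,
    )

preprocess_ad_video("ad.mp4")
```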
Further, fig. 3 is a schematic flow diagram of generating the text feature bullet screen according to an embodiment of the present invention; referring to fig. 3, generating the text feature bullet screen by processing the text advertisement feature extraction task specifically includes:
S21, extracting subtitles from the advertisement video frame images;
S22, judging the repetition degree of the extracted subtitles, and generating a text feature bullet screen from the selected subtitles whose repetition count is greater than a count threshold;
S23, if the repetition count is not greater than the count threshold, computing the characteristic value Q of each extracted subtitle, and generating a text feature bullet screen from the selected subtitles whose characteristic value is greater than a preset threshold, where the characteristic value Q is calculated by the following formula:
[formula shown as an image in the original]
where f_1 is the frequency of occurrence of the selected subtitle among the extracted subtitles, and f_0 is the frequency of occurrence of the selected subtitle in a standard corpus.
In the embodiment of the invention, the repetition degree of the extracted subtitles is judged; if there are subtitles whose repetition count is greater than the count threshold, those prominent subtitles are key advertising words or slogans, so text feature bullet screens are generated from the selected subtitles whose repetition count is greater than the count threshold. If no repetition count exceeds the threshold, the characteristic value Q of each extracted subtitle is computed instead, and advertising words or slogans are screened by the characteristic value Q and converted into text feature bullet screens. This method improves the accuracy of extracting advertising words or slogans from the subtitles.
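A minimal sketch of this screening step follows. It assumes the characteristic value Q is the ratio f_1/f_0 (the patent shows the Q formula only as an image), and both threshold values are illustrative.

```python
# Sketch of S22-S23: select subtitles for text feature bullet screens.
# The form Q = f1 / f0 and both thresholds are assumptions.
from collections import Counter

def select_ad_subtitles(subtitles, corpus_freq, repeat_threshold=3, q_threshold=2.0):
    counts = Counter(subtitles)
    total = sum(counts.values())
    selected = []
    for subtitle, n in counts.items():
        if n > repeat_threshold:               # S22: repeated -> key ad phrase
            selected.append(subtitle)
            continue
        f1 = n / total                         # frequency among extracted subtitles
        f0 = corpus_freq.get(subtitle, 1e-6)   # frequency in the standard corpus
        if f1 / f0 > q_threshold:              # S23: assumed form of Q
            selected.append(subtitle)
    return selected
```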
Fig. 4 is a schematic flow chart of S21 according to an embodiment of the present invention; referring to fig. 4, S21 specifically includes:
S211, preprocessing each advertisement video frame image to extract the subtitle area, where the preprocessing includes:
performing brightness conversion on each advertisement video frame image by the following formula:
P_m(x,y) = 0.3·C_R(x,y) + 0.59·C_G(x,y) + 0.11·C_B(x,y);
where C_R, C_G and C_B are the red, green and blue components, P_m(x,y) is the luminance image, and x and y are pixel coordinates in each image;
then, noise processing is performed by the following formula to obtain the denoised image P_n(x,y):
[formula shown as an image in the original]
then, the extracted images are created by the following formulas:
P_1(x,y) = P_n(x,y) - P_n(x-1, y+1)
P_2(x,y) = P_n(x,y) - P_n(x, y+1)
P_3(x,y) = P_n(x,y) - P_n(x+1, y+1)
P_4(x,y) = P_n(x,y) - P_n(x-1, y)
P_5(x,y) = P_n(x,y) - P_n(x+1, y)
P_6(x,y) = P_n(x,y) - P_n(x-1, y-1)
P_7(x,y) = P_n(x,y) - P_n(x, y-1)
P_8(x,y) = P_n(x,y) - P_n(x+1, y-1)
in other words, the extracted images are directional edge images created by a Roberts-style edge extraction method;
then, vertical projection is applied to the extracted images to locate the subtitle string and obtain the subtitle area;
S212, judging frame by frame whether the subtitle area is the same as that of the previous advertisement video frame image; if the subtitle areas are the same and the absolute value of the pixel-value difference between the current and previous advertisement video frame images is smaller than a specific value, the current and previous advertisement video frame images are classified into the same subtitle frame group;
S213, calculating the simplification score of each advertisement video frame image in the same subtitle frame group by the following formula:
[formula shown as an image in the original]
where K is the simplification score, f_i is the pixel value of the i-th sampling point in the advertisement video frame image, and m is the total number of sampling points of the advertisement video frame image;
S214, extracting the subtitle-area image of the advertisement video frame image with the lowest simplification score, removing the background, and performing character recognition to obtain the subtitle text.
In the embodiment of the invention, the subtitles of the advertisement video frame images are extracted by the above method to obtain the subtitle area; the advertisement video frame images are grouped based on the subtitle area, and only the subtitle-area image of the frame with the lowest simplification score in each subtitle frame group is processed, which avoids a large amount of computation, so the subtitle extraction is efficient and highly accurate.
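The brightness conversion and the eight directional difference images P_1 to P_8 can be sketched in NumPy as below; handling the image borders by wrap-around with np.roll and the RGB channel order are simplifications of the example, not part of the patent.

```python
# Sketch of the S211 preprocessing: luminance image and eight directional
# difference (edge) images. Border wrap-around is a simplification.
import numpy as np

def luminance(rgb: np.ndarray) -> np.ndarray:
    # Pm = 0.3*C_R + 0.59*C_G + 0.11*C_B (assumes RGB channel order)
    return 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

def directional_edge_images(pn: np.ndarray) -> list:
    # P_k(x, y) = Pn(x, y) - Pn(x + dx, y + dy) for the eight neighbour offsets
    offsets = [(-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0), (-1, -1), (0, -1), (1, -1)]
    pn = pn.astype(np.float32)
    edges = []
    for dx, dy in offsets:
        # rows are y (axis 0), columns are x (axis 1); shift the neighbour
        # value Pn(x+dx, y+dy) onto position (x, y)
        shifted = np.roll(np.roll(pn, -dy, axis=0), -dx, axis=1)
        edges.append(pn - shifted)
    return edges
```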
Further, fig. 5 is a schematic flow diagram of generating the voice feature bullet screen according to an embodiment of the present invention; referring to fig. 5, generating the voice feature bullet screen by processing the voice advertisement feature extraction task specifically includes:
S24, converting the advertisement audio into text;
S25, judging the repetition degree of the text, and generating a voice feature bullet screen from the speech corresponding to the selected text whose repetition count is greater than the count threshold;
S26, if the repetition count is not greater than the count threshold, computing the characteristic value Q of the text, and generating a voice feature bullet screen from the speech corresponding to the selected text whose characteristic value is greater than the preset threshold, where the characteristic value Q is calculated by the following formula:
[formula shown as an image in the original]
where f_1 is the frequency of occurrence of the selected text in the converted text, and f_0 is the frequency of occurrence of the selected text in the standard corpus.
Further, fig. 6 is a schematic flow chart of generating the image feature bullet screen according to an embodiment of the present invention; referring to fig. 6, generating the image feature bullet screen by processing the image advertisement feature extraction task specifically includes:
S27, acquiring the advertisement video frame image set;
S28, acquiring a grayscale image of the advertisement video frame image, and performing edge detection on the grayscale image with the Prewitt edge detection operator to generate an object contour image of the advertisement video frame image;
S29, binarizing the object contour image to generate a binarized image, and applying a morphological closing operation to generate a closed image of the advertisement video frame image;
S30, acquiring a plurality of initial curves of the closed image;
S31, substituting the information of the initial curves into the total energy functional E:
[formula shown as an image in the original]
A_Il(b) = ∫_{a∈Ω} χ(a,b) dA,  A_El(b) = ∫_{a∈Ω̄} χ(a,b) dA,
where f denotes the image intensity, a and b denote spatial variables, Ω denotes the interior of the initial curve, Ω̄ denotes the exterior of the initial curve, and χ denotes a local circle centered at b;
S32, solving for the minimum of the total energy functional E by the steepest descent method to obtain the evolution equation of the level set function;
S33, iterating the evolution equation with the finite difference method until the level set function reaches a stable state, and selecting points on the zero level set to form the object contour;
S34, extracting the object contours in the advertisement video frame image set, judging whether objects are the same according to their contours, selecting same objects whose count is greater than the count threshold, segmenting the object images, and generating an image feature bullet screen based on the object images.
The above method extracts the object contours in the advertisement video frame image set efficiently and accurately, judges whether objects are the same according to their contours, and selects same objects whose count is greater than the count threshold for segmentation, so that the advertised object is accurately extracted and used to generate the image feature bullet screen.
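A sketch of S28 and S29 with OpenCV follows. Otsu binarisation and the 5x5 closing kernel are assumptions (the patent does not fix these parameters), and the level-set evolution of S30 to S33 is not reproduced here.

```python
# Sketch of S28-S29: Prewitt edge detection, binarisation, morphological closing.
import cv2
import numpy as np

def closed_contour_image(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)  # Prewitt x
    ky = kx.T                                                              # Prewitt y
    gx = cv2.filter2D(gray, -1, kx)
    gy = cv2.filter2D(gray, -1, ky)
    magnitude = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))            # 8-bit edge map
    _, binary = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)              # closed image
```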
The constraint rule of the invention specifically includes:
according to the number of nodes M_0, setting a distribution count threshold M_1 for the text advertisement feature extraction task, a distribution count threshold M_2 for the voice advertisement feature extraction task, and a distribution count threshold M_3 for the image advertisement feature extraction task, where M_1 < M_2 < M_3 and M_1 + M_2 + M_3 = M_0;
a node randomly acquires one subtask; if the distribution count of the corresponding subtask has reached its threshold, the node acquires one of the other subtasks instead;
after a node finishes any subtask, it broadcasts this in the blockchain, the distribution count thresholds of the other subtasks are updated proportionally, and the finished node randomly acquires one of the other subtasks; if the distribution count of that subtask has reached its threshold, the node acquires the remaining subtask;
after a node finishes two subtasks, it broadcasts this in the blockchain, the distribution count threshold of the remaining subtask is updated, and the finished node acquires the remaining subtask until it is finished.
In the embodiment of the invention, the blockchain nodes perform multi-task synchronous processing by the above method, and different distribution thresholds are set for different tasks, so that numbers of nodes matched to the difficulty of each task are assigned to process it and the tasks finish as close to simultaneously as possible; after any subtask is completed, the remaining unfinished tasks are redistributed, so the computing power of every node is fully used and task completion efficiency is improved.
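A sketch of the distribution-count bookkeeping under this constraint rule follows; the concrete 1:2:3 split of M_0 is an assumption, since the patent fixes only the ordering M_1 < M_2 < M_3 and the sum M_1 + M_2 + M_3 = M_0.

```python
# Sketch of the constraint rule: nodes draw subtasks at random while each
# subtask's distribution count stays under its threshold. The split is assumed.
import random

def make_thresholds(m0: int) -> dict:
    m1 = m0 // 6                       # text task (fewest nodes)
    m2 = 2 * m0 // 6                   # voice task
    return {"text": m1, "voice": m2, "image": m0 - m1 - m2}

def assign_subtask(thresholds: dict, assigned: dict):
    open_tasks = [t for t, cap in thresholds.items() if assigned.get(t, 0) < cap]
    if not open_tasks:
        return None                    # every subtask is fully distributed
    task = random.choice(open_tasks)   # the node picks one open subtask at random
    assigned[task] = assigned.get(task, 0) + 1
    return task

assigned = {}
thresholds = make_thresholds(12)       # e.g. 12 nodes -> {'text': 2, 'voice': 4, 'image': 6}
print([assign_subtask(thresholds, assigned) for _ in range(12)])
```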
Fig. 8 is a schematic flow chart of user classification and bullet screen output provided in an embodiment of the present invention; referring to fig. 8, the server classifies users according to preset classification rules, specifically including:
the user feature data comprises user registration information, user bullet screen history data and user payment data; the server performs a primary classification of users according to the user registration information, dividing them into new users and old users;
performing a secondary classification of old users according to the user payment data, dividing them into member users and non-member users;
calculating a user activity score G from the bullet screen data and the user bullet screen history data, performing a tertiary classification of non-member users by comparison with an activity score threshold, and dividing them into non-member high-activity users and non-member low-activity users, where the user activity score G is calculated as follows:
[formula shown as an image in the original]
where x_1 is the number of text bullet screens in the bullet screen data, y_1 the number of voice bullet screens in the bullet screen data, z_1 the number of picture bullet screens in the bullet screen data, x_2 the number of text bullet screens in the user bullet screen history data, y_2 the number of voice bullet screens in the user bullet screen history data, z_2 the number of picture bullet screens in the user bullet screen history data, a_1 the repetition count of text bullet screens in the bullet screen data, b_1 the repetition count of voice bullet screens in the bullet screen data, c_1 the repetition count of picture bullet screens in the bullet screen data, a_2 the repetition count of text bullet screens in the user bullet screen history data, b_2 the repetition count of voice bullet screens in the user bullet screen history data, c_2 the repetition count of picture bullet screens in the user bullet screen history data, A the weight of text bullet screens, B the weight of voice bullet screens, C the weight of picture bullet screens, and W_1, W_2, W_3 and W_4 the first to fourth correction parameters;
the advertisement bullet screen output strategy specifically includes:
outputting the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen to new users;
outputting no advertisement bullet screens to member users;
outputting the text feature bullet screen and the voice feature bullet screen to non-member high-activity users;
outputting the text feature bullet screen and the image feature bullet screen to non-member low-activity users.
In the embodiment of the invention, users are classified through the preset classification rules, and advertisement bullet screen output strategies for the different user classes are generated according to the classification results, so that the greatest advertisement delivery effect is achieved. For a new user, the text feature, voice feature and image feature bullet screens are all output: whether the user reads an unobtrusive text bullet screen, notices a conspicuous image bullet screen, or clicks the voice bullet screen out of curiosity, the maximum advertisement exposure is achieved. For member users, no advertisement bullet screens are output, giving members a better experience. For non-member low-activity users, the text feature and image feature bullet screens are output, so that the advertisement is exposed more directly. For non-member high-activity users, the text feature and voice feature bullet screens are output: because these users actively take part in the exchanges between the anchor and the audience and among the audience, they will actively browse text and voice bullet screens, and at a higher frequency; full advertisement exposure can therefore be achieved through the text feature and voice feature bullet screens, while avoiding the impact that image bullet screens would have on the bullet screen conversation.
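The output strategy reduces to a small decision table; a sketch, with field and class names chosen for the example:

```python
# Sketch of the advertisement bullet screen output strategy per user class.
def bullet_screen_policy(is_new: bool, is_member: bool,
                         activity_score: float, threshold: float) -> list:
    if is_new:
        return ["text", "voice", "image"]   # new user: maximum exposure
    if is_member:
        return []                           # member: no advertisement bullet screens
    if activity_score > threshold:
        return ["text", "voice"]            # non-member, high activity
    return ["text", "image"]                # non-member, low activity
```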
It should be noted that the calculation formula of the live activity score H is given in the source only as a formula image (Figure BDA0002559947640000181); wherein α(t) is the number of users who have issued bullet screens at time t, γ(t) is the total number of users at time t, Gγ is the activity score of the γ-th user, T0(t) is the total live broadcast duration at time t, and T1(t) is the total anchor silence duration at time t.
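Here too the formula survives only as an image. The sketch below is one plausible reading, assuming H combines the bullet screen participation ratio, the mean per-user activity score, and the fraction of time the anchor is not silent; every detail of the combination is conjecture.

```python
def live_activity_score(alpha_t, gamma_t, user_scores, T0, T1):
    """Hypothetical reading of the live activity score H.

    alpha_t / gamma_t is the fraction of users sending bullet screens,
    user_scores holds the per-user activity scores G, and 1 - T1 / T0 is
    the fraction of time the anchor is not silent. The actual formula is
    an image in the patent; this is only one plausible combination.
    """
    if gamma_t == 0 or T0 == 0:
        return 0.0
    participation = alpha_t / gamma_t
    mean_activity = sum(user_scores) / len(user_scores) if user_scores else 0.0
    talk_ratio = 1.0 - T1 / T0
    return participation * mean_activity * talk_ratio
```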
Fig. 9 is a schematic flow chart of abrupt frame prediction provided in an embodiment of the present invention. Referring to fig. 9, predicting the next abrupt frame of the live video through the prediction model specifically includes:
acquiring the histograms of the DC images of the video frames of the video data, normalizing them according to the following formula, and dividing them proportionally into training data and test data:

x̃i = (xi - u) / σ;

wherein x̃i is the normalized histogram of the DC image of the real video frame corresponding to the i-th time sequence, xi is the histogram of the DC image of the real video frame corresponding to the i-th time sequence, u is the mean of the histograms of the DC images of the real video frames, and σ is the standard deviation;
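A minimal sketch of this normalization and split, assuming the per-frame histograms are stacked in a NumPy array (the names and the 80/20 split are illustrative):

```python
import numpy as np

def normalize_histograms(hists: np.ndarray, train_ratio: float = 0.8):
    """Z-score normalize DC-image histograms and split train/test.

    hists has shape (num_frames, num_bins); u and sigma are taken over
    all frames, matching the patent's single mean and standard deviation.
    """
    u, sigma = hists.mean(), hists.std()
    normed = (hists - u) / sigma
    split = int(len(normed) * train_ratio)
    return normed[:split], normed[split:]
```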
constructing the LSTM neural network on which the prediction model is based, and training it on the training data;

predicting the abrupt-change state of the video frame at time t through the trained LSTM neural network, and determining the frame to be an abrupt frame when the state meets the requirement, thereby completing the prediction of the next abrupt frame;
wherein the LSTM neural network comprises an input layer, an LSTM cell layer and an output layer; the LSTM cell layer contains a plurality of gates, including a forget gate f(t), an input gate i(t) and an output gate o(t); and the forward propagation of the LSTM neural network at each sequence index position is:

updating the forget gate output:

f(t) = σ(W(fx)x(t) + W(fh)h(t-1) + b(f));

updating the two parts of the input gate output:

i(t) = σ(W(ix)x(t) + W(ih)h(t-1) + b(i)), g(t) = tanh(W(gx)x(t) + W(gh)h(t-1) + b(g));

updating the cell state (reconstructed here as the standard LSTM update consistent with the surrounding definitions; the source gives it only as a formula image):

C(t) = f(t) ⊙ C(t-1) + i(t) ⊙ g(t);

updating the output gate output (the hidden-state formula is reconstructed in the same way):

o(t) = σ(W(ox)x(t) + W(oh)h(t-1) + b(o)), h(t) = o(t) ⊙ tanh(C(t));
introducing a time attention mechanism (the attention formula is given in the source only as a formula image and is not reproducible here);
the loss function of the LSTM neural network is defined over the N training samples from the true and predicted abrupt-change information yt and ŷt; its exact form is given in the source only as a formula image;
where σ denotes the sigmoid function, ⊙ denotes the Hadamard product, W(fx), W(fh), W(ix), W(ih), W(gx), W(gh), W(ox), W(oh), W(cc) and W(ch) denote weights, b(f), b(i), b(g) and b(o) denote biases, C(t) is the cell state at time t, h(t) is the hidden state at time t, N is the number of training samples, yt is the true abrupt-change information at time t, and ŷt is the predicted abrupt-change information at time t, computed from h(t) through an output formula given in the source only as an image, in which W(s) denotes a weight, b(s) denotes a bias, and T(n) is the number of positions selected from the training sample for the n-th abrupt-change prediction.
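Since the gate equations above are the standard LSTM forward pass, a single step can be sketched directly. This is a minimal NumPy illustration of the equations as reconstructed, not the patent's own code; the attention mechanism and loss, given only as images, are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One forward step of the LSTM cell described above.

    W is a dict of weight matrices keyed like the patent's superscripts
    ('fx', 'fh', 'ix', 'ih', 'gx', 'gh', 'ox', 'oh'); b holds the biases
    ('f', 'i', 'g', 'o').
    """
    f_t = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])   # forget gate
    i_t = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])   # input gate
    g_t = np.tanh(W['gx'] @ x_t + W['gh'] @ h_prev + b['g'])   # candidate state
    c_t = f_t * c_prev + i_t * g_t                             # cell state (Hadamard products)
    o_t = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])   # output gate
    h_t = o_t * np.tanh(c_t)                                   # hidden state
    return h_t, c_t
```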
In addition, the present invention extends the loss function for continual learning by the following formula (reconstructed from the variable definitions; the source gives it only as a formula image):

L(θ) = LB(θ) + Σi (λ/2) · Fi · (θi - θ*A,i)²;

where i indexes the neural network parameters, θi is the i-th parameter of the network parameter set θ, θ*A,i is the corresponding weight learned on the previous task A, LB(θ) is the loss function of the subsequent task B, λ is a discount factor, and Fi is the corresponding element of the Fisher information matrix.
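The reconstructed penalty has the form of elastic weight consolidation (EWC); a minimal sketch under that assumption, using a diagonal Fisher estimate (names illustrative):

```python
import numpy as np

def ewc_loss(task_b_loss, params, prev_params, fisher_diag, lam):
    """Loss for task B plus the EWC-style quadratic penalty above.

    params, prev_params and fisher_diag are aligned 1-D arrays holding
    the current parameters, the parameters learned on the previous task
    A, and the diagonal Fisher information estimated on task A.
    """
    penalty = 0.5 * lam * np.sum(fisher_diag * (params - prev_params) ** 2)
    return task_b_loss + penalty
```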
According to the embodiment of the invention, the next abrupt frame of the live video is predicted by a prediction model based on the time sequence, which makes full use of earlier residual information to improve prediction performance; the prediction is accurate and the model is robust.
The embodiment of the invention further provides an apparatus embodiment for implementing the steps and methods of the foregoing method embodiment.
Please refer to fig. 10, which shows a block chain-based data processing system according to an embodiment of the present invention. The system includes a server, an anchor end, a user end, a plurality of nodes, and a block chain.
Fig. 11 is a schematic block diagram of a server according to an embodiment of the present invention, and referring to fig. 11, the server includes:
the receiving unit is used for acquiring the advertisement video data to be delivered;

the processing unit is used for preprocessing the advertisement video data; acquiring the live video data; classifying users through the deep learning model according to the bullet screen data, the user identifications and the user feature data, and generating advertisement bullet screen output strategies for the different types of users; and further for calculating the live activity score H according to the video data and the bullet screen data, predicting the next abrupt frame through the prediction model if the live activity score H is greater than the threshold, and outputting advertisement bullet screens to different users according to the advertisement bullet screen insertion strategy if the prediction time interval is greater than the bullet screen display time (a minimal sketch of this gating logic follows the list below);

the publishing unit is used for publishing the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;
the anchor end is used for broadcasting live video, sending and receiving bullet screens, and performing data interaction with the server;

the user end is used for watching the live video, sending and receiving bullet screens, and performing data interaction with the server.
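The gating condition the processing unit applies before outputting an advertisement bullet screen can be sketched as follows; the function name, signature, and the reading of "prediction time interval" as the estimated time until the next abrupt frame are assumptions:

```python
def should_output_ad_barrage(H, h_threshold, next_abrupt_eta, barrage_display_time):
    """Gate for advertisement bullet screen output.

    Only when the live activity score H exceeds its threshold is the next
    abrupt frame predicted, and the ad bullet screen is output only if the
    predicted interval exceeds the bullet screen display time.
    """
    if H <= h_threshold:
        return False
    return next_abrupt_eta > barrage_display_time
```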
fig. 12 is a schematic block diagram of a node according to an embodiment of the present invention, and referring to fig. 12, the node includes:
the acquisition module is used for acquiring a subtask from the block chain according to a preset constraint rule;
the processing module is used for processing the text advertisement feature extraction task to generate the text feature bullet screen, processing the voice advertisement feature extraction task to generate the voice feature bullet screen, and processing the image advertisement feature extraction task to generate the image feature bullet screen;

and the sending module is used for sending the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen to the server.
Fig. 13 is a system architecture diagram of a blockchain according to an embodiment of the present invention, and referring to fig. 13, the blockchain includes:
the storage layer is used for recording the node data and the server data;
the interaction layer is used for carrying out data interaction with the nodes and the server;
the processing layer is used for the nodes to reach consensus, and for generating, trading and recording reward blocks based on the constraint layer;
and the constraint layer is used for establishing a block chain constraint rule.
Since each unit module in this embodiment can execute the method shown in fig. 1, reference may be made to the related description of fig. 1 for the parts not described in detail here. FIG. 14 is a hardware schematic of a system according to an embodiment of the invention. Referring to fig. 14, at the hardware level the system includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include an internal memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the system may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 14, but that does not indicate only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code comprising computer operating instructions. The memory may include both internal memory and non-volatile storage, and provides instructions and data to the processor.
In a possible implementation, the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it; the corresponding computer program may also be acquired from other equipment, so as to form the corresponding apparatus at the logical level. The processor executes the program stored in the memory, so as to implement, through the executed program, the advertisement insertion method provided by any embodiment of the invention.
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a system comprising a plurality of application programs, enable the system to perform the advertisement insertion method provided in any of the embodiments of the present invention.
The method performed by the system according to the embodiment of the present invention may be implemented in, or by, a processor. The processor may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The processor may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
Embodiments of the present invention also provide a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by a system including a plurality of application programs, enable the system to perform the system operation method provided in any of the embodiments of the present invention.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer.
For convenience of description, the above devices are described as being divided into various units or modules by function, respectively. Of course, the functionality of the units or modules may be implemented in the same one or more software and/or hardware when implementing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments of the present invention are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. A data processing method based on a block chain is characterized by comprising the following steps:
S1, the server acquires the advertisement video data to be delivered, preprocesses the advertisement video data to generate an advertisement feature extraction task, and publishes the task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;

S2, a node acquires a subtask from the block chain according to a preset constraint rule; the node generates a text feature bullet screen by processing the text advertisement feature extraction task, generates a voice feature bullet screen by processing the voice advertisement feature extraction task, and generates an image feature bullet screen by processing the image advertisement feature extraction task;

S3, the server acquires live video data, wherein the live video data comprise video data, bullet screen data, user identifications and user feature data, and the server classifies the users through preset classification rules to generate advertisement bullet screen output strategies for different types of users;

S4, the server calculates the live activity score H; if the live activity score H is greater than a threshold, the next abrupt frame of the live video is predicted through a prediction model, and if the prediction time interval is greater than the bullet screen display time, advertisement bullet screens are output to different users according to an advertisement bullet screen insertion strategy.
2. The method according to claim 1, wherein the preprocessing of the advertisement video data in step S1 specifically comprises:

S11, the server acquires the advertisement video data and performs audio separation on it to obtain the advertisement video and the advertisement audio;

and S12, performing frame extraction on the advertisement video to obtain the advertisement video frame images.
3. The method according to claim 2, wherein generating the text feature bullet screen by processing the text advertisement feature extraction task specifically comprises:

S21, extracting the subtitles of the advertisement video frame images;

S22, judging the repetition degree of the extracted subtitles, and generating a text feature bullet screen from the selected subtitles whose repetition times are greater than a times threshold;

S23, if the repetition times are not greater than the times threshold, performing characteristic value Q judgment on the extracted subtitles respectively, and generating a text feature bullet screen from the selected subtitles whose characteristic value is greater than a preset threshold, wherein the calculation formula of the characteristic value Q is given in the source only as a formula image; f1 is the frequency of occurrence of the selected caption among the extracted captions, and f0 is the frequency of occurrence of the selected caption in a standard corpus;
generating the voice feature bullet screen by processing the voice advertisement feature extraction task specifically comprises:

S24, converting the advertisement audio into text;

S25, judging the repetition degree of the text, and generating a voice feature bullet screen from the voice corresponding to the selected text whose repetition times are greater than the times threshold;

S26, if the repetition times are not greater than the times threshold, performing characteristic value Q judgment on the text, and generating a voice feature bullet screen from the voice corresponding to the selected text whose characteristic value is greater than the preset threshold, wherein the characteristic value Q is calculated in the same way, with f1 being the frequency of occurrence of the selected text in the converted text and f0 the frequency of occurrence of the selected text in a standard corpus;
generating the image feature bullet screen by processing the image advertisement feature extraction task specifically comprises:

S27, acquiring the advertisement video frame image set;

S28, acquiring a gray image of each advertisement video frame image, and performing edge detection on the gray image with the Prewitt edge detection operator to generate an object contour image of the advertisement video frame image;

S29, performing binarization processing on the object contour image to generate a binarized image, and performing a morphological closing operation to generate a closing-operation image of the advertisement video frame image;

S30, acquiring a plurality of initial curves of the closing-operation image;
S31, substituting the information of the initial curves into a total energy functional E, whose formula is given in the source only as formula images; wherein f denotes the image intensity, a and b denote spatial variables, Ω denotes the interior of the initial curve, the complement of Ω denotes the exterior of the initial curve, and χ denotes a local circle centered at b;
S32, solving the minimum of the total energy functional E by the steepest descent method to obtain the evolution equation of the level set function;

S33, iterating the evolution equation by the finite difference method until the level set function reaches a stable state, and selecting the points on the zero level set to form the object contour;

S34, extracting the object contours in the advertisement video frame image set, judging whether objects are the same according to their contours, selecting the same objects whose number is greater than a number threshold, segmenting the object images, and generating an image feature bullet screen based on the object images.
4. The method according to claim 3, wherein S21 specifically comprises:

S211, preprocessing each advertisement video frame image to extract the caption delivery area, wherein the preprocessing comprises:
performing brightness conversion on each advertisement video frame image by the following formula:

Pm(x,y) = 0.3·CR(x,y) + 0.59·CG(x,y) + 0.11·CB(x,y);

wherein CR(x,y), CG(x,y) and CB(x,y) are the red, green and blue components, Pm(x,y) is the luminance image, and x and y are the pixel coordinates in each image;
then, noise processing is carried out to obtain a denoised image Pn(x,y) (the denoising formula is given in the source only as a formula image);
then, extracted images are created by the following formulas:
P1(x,y)=Pn(x,y)-Pn(x-1,y+1)
P2(x,y)=Pn(x,y)-Pn(x,y+1)
P3(x,y)=Pn(x,y)-Pn(x+1,y+1)
P4(x,y)=Pn(x,y)-Pn(x-1,y)
P5(x,y)=Pn(x,y)-Pn(x+1,y)
P6(x,y)=Pn(x,y)-Pn(x-1,y-1)
P7(x,y)=Pn(x,y)-Pn(x,y-1)
P8(x,y)=Pn(x,y)-Pn(x+1,y-1)
then, vertically projecting the extracted images onto the caption string to obtain the caption delivery area;

S212, judging frame by frame whether the caption delivery area is the same as that of the previous advertisement video frame image; if it is the same and the absolute value of the pixel-value difference between the current advertisement video frame image and the previous advertisement video frame image is smaller than a specific value, classifying the current advertisement video frame image and the previous advertisement video frame image into the same caption frame group;
S213, calculating the simplification score of each advertisement video frame image in the same caption frame group, wherein the calculation formula is given in the source only as a formula image; K is the simplification score, fi is the pixel value of the i-th sampling point in the advertisement video frame image, and m is the total number of sampling points of the advertisement video frame image;
S214, extracting the image of the caption delivery area of the advertisement video frame image with the lowest simplification score, removing the background, and then performing character recognition to obtain the subtitle text.
5. The method according to claim 1, wherein the constraint rules specifically include:
according to the number of nodes M0, setting a distribution-times threshold M1 for the text advertisement feature extraction task, a distribution-times threshold M2 for the voice advertisement feature extraction task, and a distribution-times threshold M3 for the image advertisement feature extraction task, wherein M1 < M2 < M3 and M1 + M2 + M3 = M0;
a node randomly acquires one subtask; if the number of times the corresponding subtask has been distributed has reached its threshold, the node selects one of the other subtasks;

after a node finishes any subtask, it broadcasts this in the block chain, and the distribution-times thresholds of the other subtasks are updated proportionally; the node that finished the task then randomly acquires one of the other subtasks, and if that subtask's distribution count has reached its threshold, the node selects the remaining subtask;

and after a node finishes two subtasks, it broadcasts this in the block chain, the distribution-times threshold of the remaining subtask is updated, and the node acquires the remaining subtask until it is finished.
6. The method according to claim 1, wherein the server classifies the users according to a preset classification rule, specifically comprising:
the user characteristic data comprises user registration information, user bullet screen historical data and user payment data, and the server performs primary classification on the users according to the user registration information and divides the users into new users and old users;
performing secondary classification on old users according to the user payment data, and dividing the old users into member users and non-member users;
calculating a user activity score G from the bullet screen data and the user bullet screen history data, and performing a tertiary classification of the non-member users against an activity score threshold, dividing them into non-member high-activity users and non-member low-activity users, wherein the calculation formula of the user activity score G is given in the source only as a formula image (Figure FDA0002559947630000051);

wherein x1 is the number of text bullet screens in the bullet screen data, y1 the number of voice bullet screens in the bullet screen data, and z1 the number of picture bullet screens in the bullet screen data; x2, y2 and z2 are the numbers of text, voice and picture bullet screens in the user bullet screen history data; a1, b1 and c1 are the numbers of repeated text, voice and picture bullet screens in the bullet screen data; a2, b2 and c2 are the numbers of repeated text, voice and picture bullet screens in the user bullet screen history data; A, B and C are the weights of the text, voice and picture bullet screens respectively; and W1, W2, W3 and W4 are the first, second, third and fourth correction parameters;
the advertisement bullet screen output strategy is specifically as follows:

for a new user, outputting the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen;

for a member user, outputting no advertisement bullet screen;

for a non-member high-activity user, outputting the text feature bullet screen and the voice feature bullet screen;

and for a non-member low-activity user, outputting the text feature bullet screen and the image feature bullet screen.
7. The method of claim 6, wherein the calculation formula of the live activity score H is given in the source only as a formula image (Figure FDA0002559947630000061); wherein α(t) is the number of users who have issued bullet screens at time t, γ(t) is the total number of users at time t, Gγ is the activity score of the γ-th user, T0(t) is the total live broadcast duration at time t, and T1(t) is the total anchor silence duration at time t.
8. The method according to claim 1, wherein the predicting the next abrupt change frame of the live video by the prediction model comprises:
acquiring the histograms of the DC images of the video frames of the video data, normalizing them according to the following formula, and dividing them proportionally into training data and test data:

x̃i = (xi - u) / σ;

wherein x̃i is the normalized histogram of the DC image of the real video frame corresponding to the i-th time sequence, xi is the histogram of the DC image of the real video frame corresponding to the i-th time sequence, u is the mean of the histograms of the DC images of the real video frames, and σ is the standard deviation;
constructing the LSTM neural network on which the prediction model is based, and training it on the training data;

predicting the abrupt-change state of the video frame at time t through the trained LSTM neural network, so as to complete the prediction of the next abrupt frame;
wherein the LSTM neural network comprises an input layer, an LSTM cell layer and an output layer; the LSTM cell layer contains a plurality of gates, including a forget gate f(t), an input gate i(t) and an output gate o(t); and the forward propagation of the LSTM neural network at each sequence index position is:

updating the forget gate output:

f(t) = σ(W(fx)x(t) + W(fh)h(t-1) + b(f));

updating the two parts of the input gate output:

i(t) = σ(W(ix)x(t) + W(ih)h(t-1) + b(i)), g(t) = tanh(W(gx)x(t) + W(gh)h(t-1) + b(g));

updating the cell state (reconstructed here as the standard LSTM update consistent with the surrounding definitions; the source gives it only as a formula image):

C(t) = f(t) ⊙ C(t-1) + i(t) ⊙ g(t);

updating the output gate output (the hidden-state formula is reconstructed in the same way):

o(t) = σ(W(ox)x(t) + W(oh)h(t-1) + b(o)), h(t) = o(t) ⊙ tanh(C(t));
introducing a time attention mechanism (the attention formula is given in the source only as a formula image and is not reproducible here);
the loss function of the LSTM neural network is defined over the N training samples from the true and predicted abrupt-change information yt and ŷt; its exact form is given in the source only as a formula image;
where σ denotes the sigmoid function, ⊙ denotes the Hadamard product, W(fx), W(fh), W(ix), W(ih), W(gx), W(gh), W(ox), W(oh), W(cc) and W(ch) denote weights, b(f), b(i), b(g) and b(o) denote biases, C(t) is the cell state at time t, h(t) is the hidden state at time t, N is the number of training samples, yt is the true abrupt-change information at time t, and ŷt is the predicted abrupt-change information at time t, computed from h(t) through an output formula given in the source only as an image, in which W(s) denotes a weight, b(s) denotes a bias, and T(n) is the number of positions selected from the training sample for the n-th abrupt-change prediction.
9. The method of claim 8, wherein the loss function is extended for continual learning by the following formula (reconstructed from the variable definitions; the source gives it only as a formula image):

L(θ) = LB(θ) + Σi (λ/2) · Fi · (θi - θ*A,i)²;

where i indexes the neural network parameters, θi is the i-th parameter of the network parameter set θ, θ*A,i is the corresponding weight learned on the previous task A, LB(θ) is the loss function of the subsequent task B, λ is a discount factor, and Fi is the corresponding element of the Fisher information matrix.
10. A blockchain-based data processing system, the system comprising:
a server, the server comprising:
the receiving unit is used for acquiring the advertisement video data to be delivered;

the processing unit is used for preprocessing the advertisement video data; acquiring the live video data; classifying users through the deep learning model according to the bullet screen data, the user identifications and the user feature data, and generating advertisement bullet screen output strategies for the different types of users; and further for calculating the live activity score H according to the video data and the bullet screen data, predicting the next abrupt frame through the prediction model if the live activity score H is greater than the threshold, and outputting advertisement bullet screens to different users according to the advertisement bullet screen insertion strategy if the prediction time interval is greater than the bullet screen display time;

the publishing unit is used for publishing the advertisement feature extraction task to the block chain, wherein the advertisement feature extraction task comprises three subtasks: a text advertisement feature extraction task, a voice advertisement feature extraction task and an image advertisement feature extraction task;
the anchor end is used for broadcasting live video, sending and receiving bullet screens, and performing data interaction with the server;

the user end is used for watching the live video, sending and receiving bullet screens, and performing data interaction with the server;
a plurality of nodes, the nodes comprising:
the acquisition module is used for acquiring a subtask from the block chain according to a preset constraint rule;
the processing module is used for processing the text advertisement feature extraction task to generate the text feature bullet screen, processing the voice advertisement feature extraction task to generate the voice feature bullet screen, and processing the image advertisement feature extraction task to generate the image feature bullet screen;

the sending module is used for sending the text feature bullet screen, the voice feature bullet screen and the image feature bullet screen to the server;
a blockchain, the blockchain comprising:
the storage layer is used for recording the node data and the server data;
the interaction layer is used for carrying out data interaction with the nodes and the server;
the processing layer is used for the nodes to reach consensus, and for generating, trading and recording reward blocks based on the constraint layer;
and the constraint layer is used for establishing a block chain constraint rule.
CN202010603381.8A 2020-06-29 2020-06-29 Data processing method and system based on block chain Active CN111754267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010603381.8A CN111754267B (en) 2020-06-29 2020-06-29 Data processing method and system based on block chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010603381.8A CN111754267B (en) 2020-06-29 2020-06-29 Data processing method and system based on block chain

Publications (2)

Publication Number Publication Date
CN111754267A true CN111754267A (en) 2020-10-09
CN111754267B CN111754267B (en) 2021-04-20

Family

ID=72677903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010603381.8A Active CN111754267B (en) 2020-06-29 2020-06-29 Data processing method and system based on block chain

Country Status (1)

Country Link
CN (1) CN111754267B (en)



Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016004859A1 (en) * 2014-07-07 2016-01-14 乐视网信息技术(北京)股份有限公司 Method and device for video barrage display
WO2017097149A1 (en) * 2015-12-10 2017-06-15 中兴通讯股份有限公司 Live comment implementation method for broadcast television terminal and broadcast television system server
CN105916051A (en) * 2016-05-12 2016-08-31 乐视控股(北京)有限公司 Content recommendation method and device
CN106792003A (en) * 2016-12-27 2017-05-31 西安石油大学 A kind of intelligent advertisement inserting method, device and server
CN106911936A (en) * 2017-03-01 2017-06-30 北京牡丹电子集团有限责任公司数字电视技术中心 Dynamic video flowing picture covering method
CN107332871A (en) * 2017-05-18 2017-11-07 百度在线网络技术(北京)有限公司 Advertisement sending method and device
CN109391848A (en) * 2017-08-03 2019-02-26 掌游天下(北京)信息技术股份有限公司 A kind of interactive advertisement system
CN109729436A (en) * 2017-10-31 2019-05-07 腾讯科技(深圳)有限公司 Advertisement barrage treating method and apparatus
CN108109019A (en) * 2018-01-16 2018-06-01 深圳市瑞致达科技有限公司 Barrage advertisement placement method, device, system and readable storage medium storing program for executing
CN108235103A (en) * 2018-01-16 2018-06-29 深圳市瑞致达科技有限公司 Advertisement intelligent playback method, device, system and readable storage medium storing program for executing
CN108174309A (en) * 2018-01-16 2018-06-15 深圳市瑞致达科技有限公司 Barrage advertisement broadcast method, barrage advertisement play back device and readable storage medium storing program for executing
CN108322788A (en) * 2018-02-09 2018-07-24 武汉斗鱼网络科技有限公司 Advertisement demonstration method and device in a kind of net cast
CN110401855A (en) * 2018-04-25 2019-11-01 腾讯科技(深圳)有限公司 Information displaying method, processing platform, calculates equipment and storage medium at device
CN109874024A (en) * 2019-02-02 2019-06-11 天脉聚源(北京)科技有限公司 A kind of barrage processing method, system and storage medium based on dynamic video poster
CN110837615A (en) * 2019-11-05 2020-02-25 福建省趋普物联科技有限公司 Artificial intelligent checking system for advertisement content information filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI FEIYU: "Analysis of the profit model of niche video websites, taking Bilibili as an example", Times Finance (《时代金融》) *
LU YALIN: "An analysis of the usage forms and functions of bullet-screen video", Journal of Liuzhou Vocational & Technical College (《柳州职业技术学院学报》) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930844B (en) * 2020-08-11 2021-09-24 肖岩 Financial prediction system based on block chain and artificial intelligence
CN111930844A (en) * 2020-08-11 2020-11-13 罗忠明 Financial prediction system based on block chain and artificial intelligence
CN112699787B (en) * 2020-12-30 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Advertisement insertion time point detection method and device
CN112699787A (en) * 2020-12-30 2021-04-23 湖南快乐阳光互动娱乐传媒有限公司 Method and device for detecting advertisement insertion time point
CN113191293A (en) * 2021-05-11 2021-07-30 创新奇智(重庆)科技有限公司 Advertisement detection method, device, electronic equipment, system and readable storage medium
CN113256342A (en) * 2021-06-09 2021-08-13 广州虎牙科技有限公司 Live broadcast target user estimation method and device, electronic equipment and computer readable storage medium
CN113256342B (en) * 2021-06-09 2024-04-30 广州虎牙科技有限公司 Live target user estimation method, device, electronic equipment and computer readable storage medium
CN113487201B (en) * 2021-07-14 2022-11-11 海南马良师傅网络科技有限公司 Instrument relocation task distribution system
CN113487201A (en) * 2021-07-14 2021-10-08 海南马良师傅网络科技有限公司 Instrument relocation task distribution system
CN115701071A (en) * 2021-07-16 2023-02-07 中移物联网有限公司 Model training method and device, electronic equipment and storage medium
CN116228320A (en) * 2023-03-01 2023-06-06 深圳市快美妆科技有限公司 Live advertisement putting effect analysis system and method
CN116228320B (en) * 2023-03-01 2024-02-06 广州网优优数据技术股份有限公司 Live advertisement putting effect analysis system and method
CN117743813A (en) * 2024-02-05 2024-03-22 蓝色火焰科技成都有限公司 Advertisement classification evaluation method, device and storage medium
CN117743813B (en) * 2024-02-05 2024-04-23 蓝色火焰科技成都有限公司 Advertisement classification evaluation method, device and storage medium

Also Published As

Publication number Publication date
CN111754267B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN111754267B (en) Data processing method and system based on block chain
CN110012302B (en) Live network monitoring method and device and data processing method
CN109145784B (en) Method and apparatus for processing video
CN110582025B (en) Method and apparatus for processing video
CN109165573B (en) Method and device for extracting video feature vector
US20220172476A1 (en) Video similarity detection method, apparatus, and device
CN109618236B (en) Video comment processing method and device
EP4086786A1 (en) Video processing method, video searching method, terminal device, and computer-readable storage medium
CN107172482B (en) Method and device for generating image with interchangeable format
CN111985419B (en) Video processing method and related equipment
CN112818737A (en) Video identification method and device, storage medium and terminal
CN110414335A (en) Video frequency identifying method, device and computer readable storage medium
CN113315979A (en) Data processing method and device, electronic equipment and storage medium
WO2022022075A1 (en) Video processing method, living streaming processing method, live streaming system, electronic device, terminal, and medium
CN110019951B (en) Method and equipment for generating video thumbnail
CN116567351B (en) Video processing method, device, equipment and medium
CN116403599B (en) Efficient voice separation method and model building method thereof
CN111046232B (en) Video classification method, device and system
CN112860941A (en) Cover recommendation method, device, equipment and medium
CN112183946A (en) Multimedia content evaluation method, device and training method thereof
CN113377972A (en) Multimedia content recommendation method and device, computing equipment and storage medium
CN111353330A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116261009A (en) Video detection method, device, equipment and medium for intelligently converting video audience
CN111601116B (en) Live video advertisement insertion method and system based on big data
CN114398517A (en) Video data acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Tan Xiaogan

Inventor after: Xing Wenchao

Inventor before: Xing Wenchao

CB03 Change of inventor or designer information
TA01 Transfer of patent application right

Effective date of registration: 20210401

Address after: Building 4, 5 and 6, phase II, Jinghua chuangmeng space, 350 Jinghua Road, high tech Zone, Ningbo, Zhejiang 315000

Applicant after: ZHEJIANG DTCT DATA TECHNOLOGY Co.,Ltd.

Address before: 233000 room 1019, 10th floor, unit 0, building B, Wanda Plaza Apartment, Bengbu City, Anhui Province

Applicant before: BENGBU KERUIDA MACHINERY DESIGN Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant