CN113220941B - Video type obtaining method and device based on multiple models and electronic equipment - Google Patents


Info

Publication number
CN113220941B
CN113220941B (application CN202110607865.4A)
Authority
CN
China
Prior art keywords
behavior
feature
type
video
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110607865.4A
Other languages
Chinese (zh)
Other versions
CN113220941A (en)
Inventor
韦嘉楠
郑权
周超勇
刘玉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110607865.4A priority Critical patent/CN113220941B/en
Publication of CN113220941A publication Critical patent/CN113220941A/en
Application granted granted Critical
Publication of CN113220941B publication Critical patent/CN113220941B/en
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles

Abstract

The invention relates to the field of classification models and discloses a multi-model-based video type acquisition method, which comprises the following steps: acquiring a behavior feature set, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set; inputting the first behavior feature set and the second behavior feature set into an improved classification model, selecting a first feature subset from the two sets by using a feature distribution layer, sending the first feature subset to a first classification submodel, and performing click classification on the first video playing set corresponding to the first feature subset by using the first classification submodel to obtain a first type quantity set, a second type quantity set being obtained likewise by a second classification submodel; and performing weight calculation on the first type quantity set and the second type quantity set, and selecting the video types in which the target user is interested from the weight calculation result. The method and the device can rapidly judge the video types in which a user is interested, and thereby improve the accuracy of pushing those video types.

Description

Video type obtaining method and device based on multiple models and electronic equipment
Technical Field
The invention relates to the field of classification models, in particular to a video type obtaining method and device based on multiple models, electronic equipment and a computer readable storage medium.
Background
With the continuous development of communication technology, the threshold for video playing has gradually fallen, short videos have become a mainstream form of everyday entertainment, and pushing videos of interest to target users has become a primary way for major mainstream media platforms to attract users.
At present, the video types a user prefers are mainly judged from the user's likes. However, users sometimes forget to like a video or like videos carelessly, so an increasingly large and accurate data volume cannot be obtained, the prediction results of the push system's prediction model are not accurate enough, the videos a user is interested in cannot be accurately identified, and video pushing is consequently inaccurate.
Disclosure of Invention
The invention provides a video type obtaining method and device based on multiple models, electronic equipment and a computer readable storage medium, and mainly aims to quickly judge video types interested by users and further improve the accuracy of pushing the video types interested by the users.
In order to achieve the above object, the present invention provides a method for acquiring a video type based on multiple models, which includes:
the method comprises the steps of obtaining a behavior feature set of a target user playing a plurality of videos, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the method comprises the steps of obtaining a pre-constructed improved classification model, wherein the improved classification model is a combined model constructed based on a first classification sub-model, a second classification sub-model and a feature distribution layer, the first classification sub-model is a model for classifying according to click-type behavior features, and the second classification sub-model is a model for classifying according to play conversion-type behavior features;
inputting the first behavior feature set and the second behavior feature set into the improved classification model, selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending a first video data set corresponding to the first feature subset to the first classification submodel, and performing click classification on the first video data set by using the first classification submodel to obtain a first type number set; and
selecting a second characteristic subset from the first behavior characteristic set and the second behavior characteristic set by using the characteristic distribution layer, sending a second video data set corresponding to the second characteristic subset to the second classification submodel, and performing play conversion classification on the second video data set by using the second classification submodel to obtain a second type quantity set;
and performing weight calculation on the number of each video type in the first type number set and the second type number set, and selecting, from the weight calculation result, the video types whose proportion is greater than a preset value as the video types in which the target user is interested.
Optionally, the performing weight calculation on the number of each video type in the first type number set and the second type number set includes:
according to a preset weight configuration rule, the following weight calculation is carried out:
Y(x) = λ1 * Y(x1) + λ2 * Y(x2)
wherein Y(x) is the weighted number for video type x, Y(x1) is the number of videos of type x in the first type number set, Y(x2) is the number of videos of type x in the second type number set, λ1 is the first weight configuration parameter, and λ2 is the second weight configuration parameter.
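The weight calculation above can be sketched as follows; the concrete values of the two weight configuration parameters are left open by the description, so the defaults here are illustrative assumptions only.

```python
def weighted_type_counts(first_counts, second_counts, lam1=0.5, lam2=0.5):
    """Merge the two per-type count sets: Y(x) = lam1*Y(x1) + lam2*Y(x2).

    lam1/lam2 are the first and second weight configuration parameters;
    the 0.5 defaults are assumed for illustration.
    """
    types = set(first_counts) | set(second_counts)
    return {
        t: lam1 * first_counts.get(t, 0) + lam2 * second_counts.get(t, 0)
        for t in types
    }

merged = weighted_type_counts({"movie": 50, "anime": 30}, {"movie": 26, "anime": 29})
```

A type missing from one count set simply contributes zero from that set, which keeps the merge total over all video types the user encountered.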
Optionally, the inputting the first behavior feature set and the second behavior feature set into the improved classification model, and selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer includes:
and selecting all behavior characteristics from the first behavior characteristic set through the characteristic distribution layer, and selecting partial behavior characteristics from the second behavior characteristic set as the first characteristic subset.
Optionally, the selecting, by using the feature distribution layer, a second feature subset from the first behavior feature set and the second behavior feature set includes:
and selecting partial behavior characteristics from the first behavior characteristic set and selecting partial behavior characteristics from the second behavior characteristic set as a second characteristic subset through the characteristic distribution layer.
Optionally, the selecting, by the feature allocation layer, a part of behavior features from the first behavior feature set and a part of behavior features from the second behavior feature set as a second feature subset includes:
and selecting, through the feature distribution layer, behavior features meeting a preset click condition from the click-type behavior features in the first behavior feature set, and behavior features meeting a preset play conversion condition from the play-conversion-type behavior features in the second behavior feature set, as the second feature subset.
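The second-subset selection rule can be sketched as below. The concrete click and play-conversion thresholds (at least two clicks, watched fraction above 70%) are assumptions for illustration; the description only requires that some preset conditions exist.

```python
def select_second_subset(click_features, play_features,
                         min_clicks=2, min_ratio=0.7):
    """Keep click features meeting the click condition and play features
    meeting the play-conversion condition (thresholds are assumed)."""
    clicks = [f for f in click_features if f["clicks"] >= min_clicks]
    plays = [f for f in play_features if f["watched_ratio"] > min_ratio]
    return clicks + plays

subset = select_second_subset(
    [{"video": "v1", "clicks": 3}, {"video": "v2", "clicks": 1}],
    [{"video": "v3", "watched_ratio": 0.95}, {"video": "v4", "watched_ratio": 0.5}],
)
```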
Optionally, after selecting a video type with a duty ratio greater than a preset value from the weight calculation result as the video type in which the target user is interested, the method further includes:
obtaining the historical playing quantity of videos corresponding to the video type interested by the target user;
dynamically counting each historical playing quantity by using a pre-constructed data visualization template to obtain an interest statistical graph of the target user;
and judging the change trend of the video type interested by the user according to the interest statistical chart, and timely adjusting the category proportion of the video pushed to the target user according to the change trend.
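A minimal sketch of the interest statistics behind the trend judgment: tally historical plays of the preferred types per day, so a rising or falling count per type can be read off and the push category proportion adjusted. The (date, type) record layout is an assumed representation, not the patent's data visualization template.

```python
from collections import defaultdict

def interest_stats(play_log):
    """play_log: iterable of (date, video_type) historical play records.
    Returns per-day counts of each video type."""
    stats = defaultdict(lambda: defaultdict(int))
    for day, vtype in play_log:
        stats[day][vtype] += 1
    return stats

stats = interest_stats([("2021-05-01", "movie"), ("2021-05-01", "movie"),
                        ("2021-05-02", "anime")])
```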
Optionally, before obtaining the pre-constructed improved classification model, the method further includes:
a, acquiring a training sample set containing click behavior characteristics and conversion behavior characteristics;
b, extracting the features of the training sample set by using the improved classification model to be trained to obtain a feature sequence set;
step C, carrying out feature recognition on the feature sequence set by using a full-connection layer of the improved classification model to be trained to obtain a prediction result set;
step D, comparing the prediction result set with preset marks corresponding to the feature sequence set to obtain the accuracy of the prediction result;
and E, judging the size relation between the accuracy and a preset standard threshold, updating the model parameters of the improved classification model to be trained by using a loss function in the improved classification model to be trained when the accuracy is smaller than the standard threshold, returning to the process of the step B, and stopping the training process until the accuracy is larger than or equal to the standard threshold to obtain the improved classification model.
In order to solve the above problem, the present invention further provides a video type obtaining apparatus based on multiple models, the apparatus comprising:
the behavior feature acquisition module is used for acquiring a behavior feature set of a plurality of videos played by a target user, classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the improved classification model obtaining module is used for obtaining a pre-constructed improved classification model, the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for classifying according to click-type behavior features, and the second classification submodel is a model for classifying according to play conversion type behavior features;
a model execution module, configured to input the first behavior feature set and the second behavior feature set into the improved classification model, select a first feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer, send a first video data set corresponding to the first feature set to the first classification submodel, perform click-type classification on the first video data set by using the first classification submodel to obtain a first type number set, select a second feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer, send a second video data set corresponding to the second feature subset to the second classification submodel, and perform play conversion type classification on the second video data set by using the second classification submodel, obtaining a second type quantity set;
and the category result acquisition module is used for carrying out weight calculation on the number of each video type in the first type number set and the second type number set, and selecting the video type with the occupation ratio larger than a preset value from the weight calculation result as the video type interested by the target user.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to implement the multi-model based video type acquisition method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the multi-model based video type acquisition method described above.
In the embodiment of the invention, the user behavior feature set comprises click-type behavior features and play-conversion-type behavior features, and the improved classification model can transmit click-type behavior features and play-conversion-type behavior features to the first classification submodel, and likewise transmit click-type behavior features and play-conversion-type behavior features to the second classification submodel. Transmitting part of the click-type behavior features to the second classification submodel increases the data volume available to the second classification submodel during classification, which can speed up the prediction of the target user's preferred video categories. In addition, introducing play-conversion-type behavior features into the processing of the first classification submodel increases, during that processing, the number of videos the target user is genuinely interested in, thereby correcting the accuracy of the first classification submodel's output. The method and the device can therefore quickly judge the video types a user is interested in, and further improve the accuracy of pushing those video types.
Drawings
Fig. 1 is a schematic flowchart of a method for acquiring a video type based on multiple models according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a multi-model-based video type obtaining apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device implementing a multi-model-based video type obtaining method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a video type obtaining method based on multiple models. The executing body of the multi-model-based video type obtaining method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiment of the present application. In other words, the multi-model-based video type acquisition method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a schematic flow chart of a method for acquiring a video type based on multiple models according to an embodiment of the present invention is shown. In an embodiment of the present invention, the method for acquiring a video type based on multiple models includes:
s1, acquiring a behavior feature set of a plurality of videos played by a target user, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features.
In the embodiment of the invention, behavior features refer to the actions of a target user when watching videos. The action of clicking a video is defined as a click-type behavior feature, and the behavior of watching a video for at least 20 seconds is defined as a play-conversion-type behavior feature. A series of behavior features is generated while watching videos within a preset time period, for example: [click video one, watch for 2 seconds, duration ratio 5%; click video two, watch for 2 seconds, duration ratio 10%; click video three, watch for 20 seconds, duration ratio 95% ...], where the duration ratio is the ratio of the watching time to the total length of the watched video. In the following, the duration ratio is used in place of the specific watching time as the play-conversion-type behavior feature.
In the embodiment of the present invention, the click behavior characteristics include characteristics related to clicks such as click times and click frequency, and the play conversion behavior characteristics include characteristics related to play, such as play duration.
In the embodiment of the invention, [click video one, click video two, click video three ...] can be classified as click-type behavior features, and [duration ratio 5%, duration ratio 10%, duration ratio 95% ...] can be classified as play-conversion-type behavior features.
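The classification of step S1 can be sketched as splitting each viewing record into a click-type feature and a play-conversion-type feature; the record fields used here are an assumed representation for illustration.

```python
def split_behavior_set(behaviors):
    """Split raw viewing records into the first (click-type) and second
    (play-conversion-type, duration ratio) behavior feature sets."""
    first_set = []   # click-type behavior features
    second_set = []  # play-conversion-type behavior features
    for b in behaviors:
        first_set.append(("click", b["video"]))
        second_set.append(("ratio", b["video"], b["duration_ratio"]))
    return first_set, second_set

first_set, second_set = split_behavior_set([
    {"video": "video one", "duration_ratio": 0.05},
    {"video": "video two", "duration_ratio": 0.10},
    {"video": "video three", "duration_ratio": 0.95},
])
```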
S2, obtaining a pre-constructed improved classification model, wherein the improved classification model is a combined model constructed based on a first classification sub-model, a second classification sub-model and a feature distribution layer, the first classification sub-model is a classification model for classifying according to click-type behavior features, and the second classification sub-model is a classification model for classifying according to play-conversion-type behavior features.
In the embodiment of the invention, the behavior characteristics of the target user are classified and identified by using the improved classification model combining the first classification submodel and the second classification submodel.
Specifically, for the click behavior feature and the conversion behavior feature, the classification model for performing video type identification according to the click behavior feature and the classification model for performing video type identification according to the play conversion behavior feature are obtained in advance in the embodiment of the present invention and are respectively a first classification sub-model and a second classification sub-model, where the first classification sub-model and the second classification sub-model in the embodiment of the present invention are respectively a pre-trained click rate classification model and a pre-trained play conversion rate classification model. The first classification submodel extracts videos with click behavior characteristics, analyzes the video types of the extracted videos by using a full connection layer in the first classification submodel, counts the number of each video type, and can judge the favorite video types of a target user according to the counting result; the operation flow of the second classification submodel is the same as that of the first classification submodel, but the video acquired by the second classification submodel is the video containing the playing conversion behavior characteristics.
Specifically, the first classification submodel and the second classification submodel each include a mixed layer and a fully connected layer. The mixed layer distinguishes the behavior features in order to extract the corresponding videos, and the fully connected layer performs classification statistics on the categories of the extracted videos to obtain the number in each category; the video category with the largest number, or the top-ranked categories, are taken as the video categories preferred by the target user.
In the embodiment of the invention, after the target user has watched a preset number of videos, the behavior feature set of the target user is obtained and combined into mixed data; the mixed layer of the first classification submodel extracts only the videos corresponding to the click-type behavior features, and the mixed layer of the second classification submodel extracts only the videos corresponding to the play-conversion-type behavior features, so that the two groups of videos are separated. Taking the first classification submodel as an example, its fully connected layer may perform video type judgment on the videos [video one, video two, video three ...] corresponding to the click-type behavior features [click video one, click video two, click video three ...] to obtain [video one: movie; video two: Japanese anime; video three: Chinese anime ...], and count the different video types to obtain [movie: 30; Japanese anime: 58; Chinese anime: 20; ...].
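The fully connected layer's tally step can be sketched as a type lookup followed by a count; the per-video type lookup table here is an assumed stand-in for the model's actual classification.

```python
from collections import Counter

def count_video_types(videos, type_of):
    """videos: ids extracted by the mixed layer; type_of: id -> type label.
    Returns the number of extracted videos per type."""
    return Counter(type_of[v] for v in videos)

counts = count_video_types(
    ["video one", "video two", "video three"],
    {"video one": "movie", "video two": "japanese_anime",
     "video three": "chinese_anime"},
)
```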
Further, in this embodiment, the first classification submodel and the second classification submodel are connected through a pre-constructed feature distribution layer to generate an improved classification model.
In an optional embodiment of the present invention, a process of constructing the improved classification model by using the first classification sub-model and the second classification sub-model is as follows:
constructing a feature distribution layer according to a preset feature distribution rule;
inserting the characteristic distribution layer between a mixing layer and a full connection layer of a pre-constructed first classification submodel, and inserting the characteristic distribution layer between a mixing layer and a full connection layer of a pre-constructed second classification submodel to obtain an improved classification model to be trained, which comprises the first classification submodel and the second classification submodel;
and training the improved classification model to be trained by utilizing a pre-constructed training sample set to obtain the improved classification model.
In detail, in the embodiment of the present invention, the constructing a feature allocation layer according to a preset feature allocation rule includes:
constructing a first data transmission channel for transmitting the first behavior characteristic but not the second behavior characteristic;
constructing a second data transmission channel which does not transmit the first behavior characteristic but transmits the second behavior characteristic;
constructing a third data transmission channel for transmitting the first behavior characteristic and the second behavior;
constructing a fourth data transmission channel which does not transmit the first behavior characteristic and the second behavior characteristic;
and according to a preset feature distribution rule, carrying out hierarchical arrangement and combination on the first data transmission channel, the second data transmission channel, the third data transmission channel and the fourth data transmission channel to obtain a feature distribution layer.
Specifically, according to the feature allocation rule, the embodiment of the present invention constructs a first layer to a third layer from bottom to top to complete the construction of the feature allocation layer, where the first layer includes a first data transmission channel, a second data transmission channel, and a fourth data transmission channel, the second layer includes a third data transmission channel and a second data transmission channel, and the third layer also includes a third data transmission channel and a second data transmission channel.
In the embodiment of the present invention, the first data transmission channel of the first layer is connected to the mixed layer of the first classification submodel, the second data transmission channel of the first layer is connected to the mixed layer of the second classification submodel, the third data transmission channel of the third layer is connected to the fully connected layer of the first classification submodel, and the third data transmission channel of the third layer is connected to the fully connected layer of the second classification submodel.
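The four channel types can be sketched as pass/block flag pairs applied to a mixed feature stream; the `kind` tagging of features is an assumed representation, while the flag semantics follow the channel definitions above.

```python
# (passes first behavior features, passes second behavior features)
CHANNELS = {
    "first":  (True,  False),  # transmits first (click-type) features only
    "second": (False, True),   # transmits second (play-conversion) features only
    "third":  (True,  True),   # transmits both kinds
    "fourth": (False, False),  # transmits neither
}

def apply_channel(channel, features):
    """Filter a mixed feature stream through one channel type."""
    pass_first, pass_second = CHANNELS[channel]
    return [f for f in features
            if (f["kind"] == "click" and pass_first)
            or (f["kind"] == "play" and pass_second)]

mixed = [{"kind": "click", "video": "v1"}, {"kind": "play", "video": "v2"}]
```

Arranging such channels into the three layers described above then determines which features reach each submodel's fully connected layer.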
After the improved classification model is constructed, model training is required, so that the improved classification model has higher accuracy.
In detail, in the embodiment of the present invention, before the obtaining of the pre-constructed improved classification model, the method further includes:
and A, acquiring a training sample set containing click behavior characteristics and conversion behavior characteristics.
The training sample set is a mixed sample set of click-type behavior features and play-conversion-type behavior features, such as [click video one, click video two, click video three, duration ratio 78%, click video four ...].
And B, performing feature extraction on the training sample set by using the improved classification model to be trained to obtain a feature sequence set.
A sample is extracted from the training sample set; for example, for [click video one], the features are extracted as [video one: adventure class, cartoon class ...].
Step C, carrying out feature recognition on the feature sequence set by using a full-connection layer of the improved classification model to be trained to obtain a prediction result set;
step D, comparing the prediction result set with a preset label corresponding to the characteristic sequence set to obtain the accuracy of the prediction result;
and E, judging the size relation between the accuracy and a preset standard threshold, updating the model parameters of the improved classification model to be trained by using a loss function in the improved classification model to be trained when the accuracy is smaller than the standard threshold, returning to the process of the step B, and stopping the training process until the accuracy is larger than or equal to the standard threshold to obtain the improved classification model.
The loss function is used to estimate the difference, i.e., the loss value, between the predicted value produced by the constructed model (such as the improved classification model to be trained) and the true value. Using the loss value, the improved classification model to be trained can update its model parameters so that the activation function becomes increasingly accurate. The model parameters of the improved classification model to be trained are the variable parameters of the activation function in the fully connected layer.
The loss value and the accuracy have a corresponding relation, the accuracy of the improved classification model to be trained can be gradually improved through the operation steps from the step A to the step E, the loss value is smaller and smaller, when the accuracy reaches the standard threshold value, the loss value also reaches the minimum state, so that the change of the variable parameter is not large, the training process is completed, and the improved classification model is obtained.
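Training steps A through E reduce to the control flow below. The model interface (extract, predict, loss-driven update) is a minimal assumed stand-in; the description fixes only the loop itself: keep updating parameters from the loss until the accuracy reaches the preset standard threshold.

```python
def train_until_accurate(model, samples, labels, threshold=0.9, max_epochs=100):
    """Steps B-E: extract features, predict, measure accuracy, and update
    parameters from the loss until accuracy >= threshold."""
    accuracy = 0.0
    for _ in range(max_epochs):
        features = model.extract(samples)        # step B: feature extraction
        predictions = model.predict(features)    # step C: fully connected layer
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)  # step D
        if accuracy >= threshold:                # step E: stop condition
            break
        model.update_from_loss(predictions, labels)  # step E: parameter update
    return model, accuracy

class EchoModel:
    """Trivial stand-in model used only to exercise the loop."""
    def extract(self, samples):
        return samples
    def predict(self, features):
        return features
    def update_from_loss(self, predictions, labels):
        pass  # a real model would adjust the activation-function parameters

trained, acc = train_until_accurate(EchoModel(), [1, 2, 3], [1, 2, 3])
```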
S3, inputting the first behavior feature set and the second behavior feature set into the improved classification model, selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending a first video data set corresponding to the first feature set to the first classification sub-model, and performing click classification on the first video data set by using the first classification sub-model to obtain a first type number set.
Further, in this embodiment of the present invention, the inputting the first behavior feature set and the second behavior feature set into the improved classification model, and selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer includes:
and selecting all behavior characteristics from the first behavior characteristic set through the characteristic distribution layer, and selecting partial behavior characteristics from the second behavior characteristic set as a first characteristic subset.
Further, the selecting, by the feature allocation layer, all behavior features from the first behavior feature set, and selecting some behavior features from the second behavior feature set as a first feature subset includes:
and selecting all behavior characteristics from the first behavior characteristic set through the characteristic distribution layer, and selecting behavior characteristics of which the play conversion type behavior characteristics meet preset play conversion conditions from the second behavior characteristic set as the first characteristic subset.
S4, selecting a second feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending the second video data set corresponding to the second feature subset to the second classification submodel, and performing play conversion classification on the second video data set by using the second classification submodel to obtain a second type number set.
Further, in this embodiment of the present invention, the selecting, by using the feature distribution layer, a second feature subset from the first behavior feature set and the second behavior feature set includes:
and selecting partial behavior characteristics from the first behavior characteristic set and selecting partial behavior characteristics from the second behavior characteristic set as a second characteristic subset through the characteristic distribution layer.
Further, the selecting, by the feature allocation layer, a part of behavior features from the first behavior feature set and a part of behavior features from the second behavior feature set as a second feature subset, and sending the second feature subset to the second classification submodel includes:
and selecting behavior characteristics of which the click behavior characteristics accord with click conditions from the first behavior characteristic set through the characteristic distribution layer, and selecting behavior characteristics of which the play conversion behavior characteristics accord with preset play conversion conditions from the second behavior characteristic set as a second characteristic subset.
For example, suppose there is a behavior feature set of 100 videos, where in the first behavior feature set the click behavior feature of 90 videos is 1 click and that of 10 videos is 3 clicks, and in the second behavior feature set the play duration proportion of 40 videos is more than 70% and that of 60 videos is less than 70%. Then the click behavior features of all 100 videos together with the play conversion behavior features of the 40 videos are selected as the first feature subset, and the behavior features of the 10 videos clicked 3 times together with the behavior features of the 40 videos are selected as the second feature subset.
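The allocation rule above can be sketched as follows. This is a minimal illustration, not the patented implementation: the field names (`clicks`, `play_ratio`) and the thresholds (3 clicks, 70% play duration) are assumptions introduced for the example.

```python
def allocate_features(features, click_threshold=3, play_threshold=0.7):
    """Split behavior features into the first and second feature subsets."""
    first_subset, second_subset = [], []
    for f in features:
        if "clicks" in f:  # click-type behavior feature
            first_subset.append(f)              # all click features go to subset 1
            if f["clicks"] >= click_threshold:  # strongly clicked ones also go to subset 2
                second_subset.append(f)
        elif f.get("play_ratio", 0) >= play_threshold:  # qualifying play-conversion feature
            first_subset.append(f)              # high-conversion plays go to both subsets
            second_subset.append(f)
    return first_subset, second_subset

# The 100-video example from the text: 90 videos with 1 click, 10 with 3 clicks,
# 40 with play duration above 70%, 60 below.
features = (
    [{"clicks": 1}] * 90
    + [{"clicks": 3}] * 10
    + [{"play_ratio": 0.8}] * 40
    + [{"play_ratio": 0.5}] * 60
)
first, second = allocate_features(features)
# first holds 100 + 40 = 140 features, second holds 10 + 40 = 50
```

Videos whose play duration falls below the threshold are routed to neither submodel, matching the rule that only qualifying play conversion features are selected.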
In this embodiment, the obtained first type number set and second type number set are statistics of each type of video. For example, a first type number set of [ movie 50, Japanese anime 30, Chinese anime 20 ] is converted, according to the number of each type of video, into [ movie 50%, Japanese anime 30%, Chinese anime 20% ]. Similarly, a second type number set of [ movie 26, Japanese anime 29, Chinese anime 10 ] may be converted into [ movie 40%, Japanese anime 45%, Chinese anime 15% ].
S5, performing weight calculation on the first type number set and the second type number set, and selecting, from the weight calculation result, the video types whose proportion is greater than a preset value as the video types the target user is interested in.
Further, in this embodiment of the present invention, the performing weight calculation on the number of each video type in the first type number set and the second type number set includes:
according to a preset weight configuration rule, the following weight calculation is carried out:
Y(x) = λ1 * Y(x1) + λ2 * Y(x2)

wherein Y(x) is the weighted number of video type x, Y(x1) is the number of video type x in the first type number set, Y(x2) is the number of video type x in the second type number set, λ1 is the first weight configuration parameter, and λ2 is the second weight configuration parameter.
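A minimal sketch of this weight calculation and the subsequent threshold selection of step S5, using the type counts from the earlier example. The weight values λ1 = λ2 = 0.5 and the 30% preset value are illustrative assumptions, not values fixed by the method.

```python
def select_interesting_types(first_counts, second_counts,
                             lam1=0.5, lam2=0.5, preset=0.3):
    """Combine the two type-number sets with Y(x) = lam1*Y(x1) + lam2*Y(x2)
    and keep the types whose share of the weighted total exceeds the preset value."""
    weighted = {
        x: lam1 * first_counts.get(x, 0) + lam2 * second_counts.get(x, 0)
        for x in set(first_counts) | set(second_counts)
    }
    total = sum(weighted.values())
    return {x: y / total for x, y in weighted.items() if y / total > preset}

first_counts = {"movie": 50, "japanese_anime": 30, "chinese_anime": 20}
second_counts = {"movie": 26, "japanese_anime": 29, "chinese_anime": 10}
interesting = select_interesting_types(first_counts, second_counts)
# movie: 38/82.5 ≈ 46% and japanese_anime: 29.5/82.5 ≈ 36% exceed the 30% preset;
# chinese_anime: 15/82.5 ≈ 18% does not
```

Raising λ2 relative to λ1 lets play conversion behavior, which signals genuine interest more strongly than a click, dominate the final selection.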
Further, in an embodiment of the present invention, after selecting, from the weight calculation result, the video types whose proportion is greater than a preset value as the video types the target user is interested in, the method further includes:
obtaining the historical playing quantity of videos corresponding to the video types interested by the target user;
dynamically counting each historical playing quantity by using a pre-constructed data visualization template to obtain an interest statistical graph of the target user;
and judging the change trend of the video type interested by the user according to the interest statistical chart, and timely adjusting the category proportion of the video pushed to the target user according to the change trend.
In this embodiment, the historical playing number is the playing number of videos of different types of interest.
For example, if most of the videos of interest the user played half a year ago were of the history type, while most of the videos of interest played within the last month are of the cartoon type, a higher proportion of cartoon videos will be pushed to the user.
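The trend-based adjustment can be sketched as follows. This is a minimal illustration assuming the push proportions are simply the re-normalized play counts of the most recent window; the actual windowing and smoothing scheme is not specified by this embodiment.

```python
def push_proportions(recent_play_counts):
    """Turn the play counts of the most recent window into push proportions."""
    total = sum(recent_play_counts.values())
    return {t: c / total for t, c in recent_play_counts.items()}

# Half a year ago the user mostly played history videos; within the last
# month mostly cartoons, so the cartoon share of pushed videos rises.
half_year_ago = {"history": 80, "cartoon": 20}
last_month = {"history": 10, "cartoon": 90}
old_props = push_proportions(half_year_ago)
new_props = push_proportions(last_month)
```

In practice one would blend several windows rather than use only the latest one, so a single atypical month does not overturn the whole push mix.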
The data visualization template is an automatically executable program: the playing quantity of each video of interest is imported into the template, and the code in the template automatically renders the interest statistical graph in graphic format.
In the embodiment, after the interest statistical graph which is interested by the user is obtained, the type trend which is interested by the user can be predicted, and the accuracy of future video push is further improved.
In the embodiment of the invention, the user behavior feature set includes click behavior features and play conversion behavior features. With the improved classification model, both kinds of features can be transmitted to the first classification submodel, and both can likewise be transmitted to the second classification submodel. Because only part of the click behavior features are transmitted to the second classification submodel, the data volume the second classification submodel must process during classification is reduced, which speeds up the prediction of the target user's favorite video categories. In addition, the embodiment of the invention also introduces the play conversion behavior features into the processing of the first classification submodel, so that the number of videos the target user is genuinely interested in is increased during that processing, which corrects the accuracy of the result generated by the first classification submodel. Therefore, the method and the device can quickly judge the video types the user is interested in, and further improve the accuracy of pushing those video types to the user.
Fig. 2 is a functional block diagram of the video type acquiring apparatus based on multiple models according to the present invention.
The multi-model-based video type acquisition apparatus 100 of the present invention may be installed in an electronic device. According to the implemented functions, the multi-model-based video type acquisition apparatus may include a behavior feature acquisition module 101, an improved classification model obtaining module 102, a model execution module 103, and a category result obtaining module 104. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the behavior feature acquisition module 101 is configured to acquire a behavior feature set of multiple videos played by a target user, and classify the behavior feature set to obtain a first behavior feature set and a second behavior feature set, where the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the improved classification model obtaining module 102 is configured to obtain a pre-constructed improved classification model, where the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for performing classification according to click-type behavior features, and the second classification submodel is a model for performing classification according to play-conversion-type behavior features;
the model execution module 103 is configured to input the first behavior feature set and the second behavior feature set into the improved classification model, select a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, send a first video data set corresponding to the first feature subset to the first classification submodel, perform click classification on the first video data set by using the first classification submodel to obtain a first type number set, select a second feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, send a second video data set corresponding to the second feature subset to the second classification submodel, and perform play conversion classification on the second video data set by using the second classification submodel to obtain a second type number set;
the category result obtaining module 104 is configured to perform weight calculation on the number of each video type in the first type number set and the second type number set, and select, from the result of the weight calculation, the video types whose proportion is greater than a preset value as the video types the target user is interested in.
In detail, when the modules in the apparatus 100 for acquiring a video type based on multiple models according to the embodiment of the present invention are used, the same technical means as the method for acquiring a video type based on multiple models shown in fig. 1 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 3 is a schematic structural diagram of an electronic device implementing the multi-model-based video type acquisition method according to the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a multi-model based video type acquisition program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (for example, executing a multi-model-based video type acquisition program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a multi-model-based video type acquisition program, etc., but also to temporarily store data that has been output or is to be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 3 shows only an electronic device with components, and those skilled in the art will appreciate that the structure shown in fig. 3 is not limiting to the electronic device, and may include fewer or more components than shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The multi-model based video type acquisition program stored in the memory 11 of the electronic device is a combination of a plurality of computer programs, which when executed in the processor 10, can implement:
the method comprises the steps of obtaining a behavior feature set of a target user playing a plurality of videos, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the method comprises the steps of obtaining a pre-constructed improved classification model, wherein the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for classifying according to click-type behavior features, and the second classification submodel is a model for classifying according to play conversion type behavior features;
inputting the first behavior feature set and the second behavior feature set into the improved classification model, selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending a first video data set corresponding to the first feature subset to the first classification submodel, and performing click classification on the first video data set by using the first classification submodel to obtain a first type number set; and
selecting a second characteristic subset from the first behavior characteristic set and the second behavior characteristic set by using the characteristic distribution layer, sending a second video data set corresponding to the second characteristic subset to the second classification submodel, and performing play conversion classification on the second video data set by using the second classification submodel to obtain a second type quantity set;
and performing weight calculation on the number of each video type in the first type number set and the second type number set, and selecting, from the weight calculation result, the video types whose proportion is greater than a preset value as the video types the target user is interested in.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the module/unit integrated in the electronic device, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
the method comprises the steps of obtaining a behavior feature set of a target user playing a plurality of videos, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the method comprises the steps of obtaining a pre-constructed improved classification model, wherein the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for classifying according to click-type behavior features, and the second classification submodel is a model for classifying according to play conversion type behavior features;
inputting the first behavior feature set and the second behavior feature set into the improved classification model, selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending a first video data set corresponding to the first feature subset to the first classification submodel, and performing click classification on the first video data set by using the first classification submodel to obtain a first type number set; and
selecting a second characteristic subset from the first behavior characteristic set and the second behavior characteristic set by using the characteristic distribution layer, sending a second video data set corresponding to the second characteristic subset to the second classification submodel, and performing play conversion classification on the second video data set by using the second classification submodel to obtain a second type quantity set;
and performing weight calculation on the number of each video type in the first type number set and the second type number set, and selecting, from the weight calculation result, the video types whose proportion is greater than a preset value as the video types the target user is interested in.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A video type obtaining method based on multiple models is characterized by comprising the following steps:
the method comprises the steps of obtaining a behavior feature set of a target user playing a plurality of videos, and classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the method comprises the steps of obtaining a pre-constructed improved classification model, wherein the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for classifying according to click-type behavior features, and the second classification submodel is a model for classifying according to play conversion type behavior features;
inputting the first behavior feature set and the second behavior feature set into the improved classification model, selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature distribution layer, sending a first video data set corresponding to the first feature subset to the first classification submodel, and performing click classification on the first video data set by using the first classification submodel to obtain a first type number set; and
selecting a second characteristic subset from the first behavior characteristic set and the second behavior characteristic set by using the characteristic distribution layer, sending a second video data set corresponding to the second characteristic subset to the second classification submodel, and performing play conversion classification on the second video data set by using the second classification submodel to obtain a second type quantity set;
and performing weight calculation on the number of each video type in the first type number set and the second type number set, and selecting the video type with the occupation ratio larger than a preset value from the weight calculation result as the video type interested by the target user.
2. The method for acquiring multiple model-based video types according to claim 1, wherein the weighting the number of each video type in the first type number set and the second type number set comprises:
according to a preset weight configuration rule, the following weight calculation is carried out:
Y(x) = λ1 * Y(x1) + λ2 * Y(x2)

wherein Y(x) is the weighted number of video type x, Y(x1) is the number of video type x in the first type number set, Y(x2) is the number of video type x in the second type number set, λ1 is the first weight configuration parameter, and λ2 is the second weight configuration parameter.
3. The method according to claim 1, wherein the inputting the first behavior feature set and the second behavior feature set into the improved classification model, and the selecting a first feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer comprises:
and selecting all behavior characteristics from the first behavior characteristic set through the characteristic distribution layer, and selecting partial behavior characteristics from the second behavior characteristic set as the first characteristic subset.
4. The method as claimed in claim 1, wherein said selecting a second feature subset from said first behavior feature set and said second behavior feature set by said feature distribution layer comprises:
and selecting part of behavior characteristics from the first behavior characteristic set and selecting part of behavior characteristics from the second behavior characteristic set as a second characteristic subset through the characteristic distribution layer.
5. The method as claimed in claim 4, wherein said selecting, by said feature allocation layer, a portion of behavior features from said first behavior feature set and a portion of behavior features from said second behavior feature set as a second feature subset comprises:
and selecting behavior characteristics which accord with click conditions from the click behavior characteristics from the first behavior characteristic set through the characteristic distribution layer, and selecting behavior characteristics which accord with preset play conversion conditions from the play conversion behavior characteristics from the second behavior characteristic set as a second characteristic subset.
6. The method for acquiring multi-model-based video types according to claim 1, wherein after selecting a video type with a duty ratio greater than a preset value from the weight calculation results as the video type of interest to the target user, the method further comprises:
obtaining the historical playing quantity of videos corresponding to the video types interested by the target user;
dynamically counting each historical playing quantity by using a pre-constructed data visualization template to obtain an interest statistical graph of the target user;
and judging the change trend of the video type interested by the user according to the interest statistical chart, and timely adjusting the category proportion of the video pushed to the target user according to the change trend.
7. The multi-model based video type acquisition method according to any one of claims 1 to 6, wherein before the acquiring a pre-constructed improved classification model, the method further comprises:
a, acquiring a training sample set containing click behavior characteristics and conversion behavior characteristics;
b, extracting the features of the training sample set by using an improved classification model to be trained to obtain a feature sequence set;
step C, carrying out feature recognition on the feature sequence set by using a full-connection layer of the improved classification model to be trained to obtain a prediction result set;
step D, comparing the prediction result set with a preset mark corresponding to the feature sequence set to obtain the accuracy of the prediction result;
and E, judging the size relation between the accuracy and a preset standard threshold, updating the model parameters of the improved classification model to be trained by using a loss function in the improved classification model to be trained when the accuracy is smaller than the standard threshold, returning to the process of the step B, and stopping the training process until the accuracy is larger than or equal to the standard threshold to obtain the improved classification model.
8. An apparatus for acquiring a video type based on multiple models, the apparatus comprising:
the behavior feature acquisition module is used for acquiring a behavior feature set of a plurality of videos played by a target user, classifying the behavior feature set to obtain a first behavior feature set and a second behavior feature set, wherein the first behavior feature set is a set of click behavior features, and the second behavior feature set is a set of play conversion behavior features;
the improved classification model obtaining module is used for obtaining a pre-constructed improved classification model, the improved classification model is a combined model constructed based on a first classification submodel, a second classification submodel and a feature distribution layer, the first classification submodel is a model for classifying according to click-type behavior features, and the second classification submodel is a model for classifying according to play conversion type behavior features;
a model execution module, configured to input the first behavior feature set and the second behavior feature set into the improved classification model, select a first feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer, send a first video data set corresponding to the first feature subset to the first classification submodel, perform click classification on the first video data set by using the first classification submodel to obtain a first type number set, select a second feature subset from the first behavior feature set and the second behavior feature set by using the feature allocation layer, send a second video data set corresponding to the second feature subset to the second classification submodel, and perform play conversion classification on the second video data set by using the second classification submodel, obtaining a second type quantity set;
and the category result acquisition module is used for carrying out a weight calculation on the number of each video type in the first type number set and the second type number set, and selecting, from the weight calculation result, each video type whose ratio is greater than a preset value as a video type of interest to the target user.
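The category result step above (weight-combining the two per-type count sets and keeping types whose share exceeds a preset value) might be sketched as follows; the 0.6/0.4 weights and the 0.25 preset ratio are illustrative assumptions, not values taken from the claims.

```python
# Illustrative sketch of the category-result module: combine the click-type
# counts and the play-conversion counts per video type with assumed weights,
# then keep each type whose weighted share exceeds an assumed preset value.

def interested_video_types(first_type_counts, second_type_counts,
                           click_weight=0.6, play_weight=0.4,
                           preset_ratio=0.25):
    all_types = set(first_type_counts) | set(second_type_counts)
    weighted = {
        t: click_weight * first_type_counts.get(t, 0)
           + play_weight * second_type_counts.get(t, 0)
        for t in all_types
    }
    total = sum(weighted.values()) or 1.0  # avoid division by zero
    return {t for t, score in weighted.items() if score / total > preset_ratio}


# Toy usage with hypothetical per-type counts from the two submodels.
result = interested_video_types(
    {"sports": 10, "news": 2},   # first type number set (click classification)
    {"sports": 5, "drama": 5},   # second type number set (play conversion)
)
```

With these assumed weights, "sports" scores 0.6·10 + 0.4·5 = 8 out of a total of 11.2, a share of about 0.71, so it alone clears the 0.25 cutoff.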
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the multi-model based video type acquisition method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the multi-model based video type acquisition method according to any one of claims 1 to 7.
CN202110607865.4A 2021-06-01 2021-06-01 Video type obtaining method and device based on multiple models and electronic equipment Active CN113220941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110607865.4A CN113220941B (en) 2021-06-01 2021-06-01 Video type obtaining method and device based on multiple models and electronic equipment


Publications (2)

Publication Number Publication Date
CN113220941A CN113220941A (en) 2021-08-06
CN113220941B true CN113220941B (en) 2022-08-02

Family

ID=77082119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110607865.4A Active CN113220941B (en) 2021-06-01 2021-06-01 Video type obtaining method and device based on multiple models and electronic equipment

Country Status (1)

Country Link
CN (1) CN113220941B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095431A (en) * 2015-07-22 2015-11-25 百度在线网络技术(北京)有限公司 Method and device for pushing videos based on behavior information of user
CN109376603A (en) * 2018-09-25 2019-02-22 北京周同科技有限公司 A kind of video frequency identifying method, device, computer equipment and storage medium
CN109729392A (en) * 2019-01-15 2019-05-07 深圳市云歌人工智能技术有限公司 The method, apparatus and storage medium of video push
CN111460293A (en) * 2020-03-30 2020-07-28 招商局金融科技有限公司 Information pushing method and device and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11074495B2 (en) * 2013-02-28 2021-07-27 Z Advanced Computing, Inc. (Zac) System and method for extremely efficient image and pattern recognition and artificial intelligence platform
US10685236B2 (en) * 2018-07-05 2020-06-16 Adobe Inc. Multi-model techniques to generate video metadata
KR102126561B1 (en) * 2018-07-23 2020-06-24 주식회사 쓰리아이 Method and system for adaptive 3D space generating
US20210133213A1 (en) * 2019-10-31 2021-05-06 Vettd, Inc. Method and system for performing hierarchical classification of data


Also Published As

Publication number Publication date
CN113220941A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN110727868B (en) Object recommendation method, device and computer-readable storage medium
CN112380859A (en) Public opinion information recommendation method and device, electronic equipment and computer storage medium
CN112541745A (en) User behavior data analysis method and device, electronic equipment and readable storage medium
CN114398560B (en) Marketing interface setting method, device, equipment and medium based on WEB platform
CN110035302A (en) Information recommendation and model training method and device calculate equipment, storage medium
CN113360768A (en) Product recommendation method, device and equipment based on user portrait and storage medium
CN113761253A (en) Video tag determination method, device, equipment and storage medium
CN115018588A (en) Product recommendation method and device, electronic equipment and readable storage medium
CN113868528A (en) Information recommendation method and device, electronic equipment and readable storage medium
CN113268665A (en) Information recommendation method, device and equipment based on random forest and storage medium
CN113220941B (en) Video type obtaining method and device based on multiple models and electronic equipment
CN112948526A (en) User portrait generation method and device, electronic equipment and storage medium
CN109697224B (en) Bill message processing method, device and storage medium
CN114238777B (en) Negative feedback flow distribution method, device, equipment and medium based on behavior analysis
CN115757952A (en) Content information recommendation method, device, equipment and storage medium
CN114461630A (en) Intelligent attribution analysis method, device, equipment and storage medium
CN111182354B (en) Video scoring recommendation method, device and equipment and computer readable storage medium
CN113591881A (en) Intention recognition method and device based on model fusion, electronic equipment and medium
CN115344774A (en) User account screening method and device and server
CN113688923A (en) Intelligent order abnormity detection method and device, electronic equipment and storage medium
CN108510071B (en) Data feature extraction method and device and computer readable storage medium
CN111860661A (en) Data analysis method and device based on user behavior, electronic equipment and medium
CN113297486B (en) Click rate prediction method and related device
CN114418155A (en) Processing method, device, equipment and medium for rating card training
CN113469519A (en) Attribution analysis method and device of business event, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant