CN117156184A - Intelligent video playing method, device, equipment and storage medium - Google Patents
Info
- Publication number
- CN117156184A (application CN202311013762.0A)
- Authority
- CN
- China
- Prior art keywords
- video
- model
- fusion
- preset
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
- H04N21/4666—Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms using neural networks, e.g. processing the feedback provided by the user
- H04N21/4668—Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
Abstract
The application discloses an intelligent video delivery and playing method, device, equipment, and storage medium, wherein the method comprises the following steps: acquiring multi-dimensional characteristic information of a user; performing multi-modal fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information; performing preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value; and determining a video delivery list based on the user preference evaluation value and playing the corresponding videos to the user according to the video delivery list. By performing multi-modal fusion on the user's multi-dimensional characteristic information, the method takes the user's multi-dimensional historical viewing information into account when determining the target fusion characteristic information, and performs preference evaluation on that information with a pre-trained preference prediction model to generate a corresponding video delivery list, thereby realizing video delivery and playing and improving delivery accuracy.
Description
Technical Field
The present application relates to the field of video delivery technologies, and in particular, to a method, an apparatus, a device, and a storage medium for intelligently delivering video.
Background
With the development of mobile terminals and video playback technologies, smartphones and video platforms have become ubiquitous. At present, smartphone users often watch videos through a video platform APP (application) to meet their learning or entertainment needs.
To further improve user stickiness, the prior art generally adopts a big data algorithm that pushes videos matching the user's profile according to the videos the user has watched historically, so that the user stays on the video platform longer. However, this approach considers only the single dimension of historical viewing type when delivering videos to the user, so delivery accuracy is low.
Disclosure of Invention
The main purpose of the application is to provide an intelligent video delivery and playing method, device, equipment, and storage medium, aiming to solve the technical problem of low video delivery accuracy in the prior art.
In order to achieve the above object, the present application provides an intelligent video playing method, which includes:
Acquiring multi-dimensional characteristic information of a user;
based on a preset fusion model, performing multi-mode fusion processing on the multi-dimensional characteristic information to obtain target fusion characteristic information;
performing preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
and determining a video delivery list based on the user preference evaluation value, and playing corresponding videos to the user according to the video delivery list.
Optionally, the multi-dimensional feature information includes a viewing frequency, a viewing duration, a video type, and a viewing time point of the historical video.
Optionally, the step of determining a video delivery list based on the user preference evaluation value includes:
acquiring a playable video set;
performing preference evaluation calculation on each video in the playable video set to obtain a video preference evaluation value for each video in the playable video set;
comparing each video preference evaluation value with the user preference evaluation value to obtain a comparison result;
and selecting, from the comparison result, target videos whose video preference evaluation value is greater than or equal to the user preference evaluation value, and forming a video delivery list based on the target videos.
Optionally, before the step of acquiring the multi-dimensional feature information of the user, the method includes:
acquiring a fusion characteristic sample and a user preference value label of the fusion characteristic sample;
and performing iterative training on a preset first model to be trained based on the fusion feature sample and the user preference evaluation value label of the fusion feature sample to obtain a preference prediction model meeting the precision condition.
Optionally, the step of iteratively training a preset first model to be trained based on the fused feature sample and the user preference value label of the fused feature sample to obtain a preference prediction model meeting the accuracy condition includes:
inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference evaluation value;
performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference value, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a preference prediction model meeting the precision condition.
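The iterative training loop described above (predict, compute the error, compare it to a preset threshold, and repeat until the error standard is met) can be sketched as follows. This is an illustrative outline only: the patent does not disclose a concrete model architecture, so the linear stand-in model, the learning rate, and the `error_threshold` value below are assumptions.

```python
import numpy as np

def train_preference_model(samples, labels, error_threshold=0.01,
                           lr=0.1, max_iters=10_000):
    """Iteratively train a stand-in preference prediction model.

    samples: (n, d) array of fused feature vectors
    labels:  (n,) array of user preference evaluation value labels
    Training repeats until the error meets the preset threshold,
    mirroring the loop described in the claims.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])  # "first model to be trained"
    mse = float("inf")
    for _ in range(max_iters):
        preds = samples @ w                # predicted user preference values
        errors = preds - labels            # difference calculation
        mse = float(np.mean(errors ** 2))  # error result
        if mse <= error_threshold:         # error standard satisfied:
            break                          # stop training
        # otherwise, update the model and return to the prediction step
        w -= lr * (samples.T @ errors) / len(labels)
    return w, mse
```

The stopping rule, not the particular model, is the point: training continues only while the error result fails the preset error standard.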
Optionally, before the step of acquiring the multi-dimensional feature information of the user, the method includes:
acquiring a multi-dimensional characteristic sample;
based on the multi-dimensional characteristic sample, carrying out combined training on a preset second model to be trained and a preset first model to be trained to obtain a fusion model and a preference prediction model which meet the precision condition;
the second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model.
Optionally, the step of performing joint training on the preset second model to be trained and the preset first model to be trained based on the multidimensional feature sample to obtain a fusion model and a preference prediction model which meet the precision condition includes:
inputting the multi-dimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information;
inputting the prediction fusion characteristic information into a preset first model to be trained to obtain a predicted user preference value;
performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
Based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the multidimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a fusion model and a preference prediction model which meet the precision condition.
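The joint training of the two chained models (fusion model feeding the preference model, both updated until the error standard is met) can be sketched as below. The linear stand-ins for both models, the fused-feature dimension `k`, and the hyperparameters are assumptions for illustration; the patent does not specify the architectures.

```python
import numpy as np

def joint_train(samples, labels, error_threshold=0.01, lr=0.05,
                max_iters=20_000):
    """Jointly train a stand-in fusion model (W) and preference model (v).

    samples: (n, d) multi-dimensional feature samples
    labels:  (n,) user preference evaluation value labels
    Fused features are samples @ W; predicted preference values are
    (samples @ W) @ v. Both parameter sets update until the error
    meets the preset threshold, as in the joint-training step.
    """
    n, d = samples.shape
    k = 2  # assumed dimension of the fused feature information
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(d, k))  # "second model to be trained"
    v = rng.normal(scale=0.5, size=k)       # "first model to be trained"
    mse = float("inf")
    for _ in range(max_iters):
        fused = samples @ W          # predicted fusion feature information
        preds = fused @ v            # predicted user preference value
        errors = preds - labels
        mse = float(np.mean(errors ** 2))
        if mse <= error_threshold:   # error standard satisfied: stop training
            break
        # gradients flow through both models (chain rule)
        v -= lr * (fused.T @ errors) / n
        W -= lr * np.outer(samples.T @ errors, v) / n
    return W, v, mse
```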
The application also provides an intelligent video playing device, which comprises:
the acquisition module is used for acquiring the multidimensional characteristic information of the user;
the fusion module is used for carrying out multi-mode fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information;
the evaluation module is used for carrying out preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
and the playing module is used for determining a video delivery list based on the user preference evaluation value and playing corresponding videos to the user according to the video delivery list.
The application also provides intelligent video playing equipment, comprising a memory, a processor, and a program stored on the memory for implementing the intelligent video delivery and playing method, wherein:
the memory is used for storing the program that realizes the intelligent video delivery and playing method;
the processor is used for executing that program so as to realize the steps of the intelligent video delivery and playing method.
The application also provides a storage medium storing a program for realizing the intelligent video delivery and playing method, which, when executed by a processor, realizes the steps of the intelligent video delivery and playing method.
Compared with the prior art, which considers only the single dimension of historical viewing type when delivering videos to a user and therefore has low delivery accuracy, the intelligent video delivery and playing method, device, equipment, and storage medium of the application acquire multi-dimensional characteristic information of a user; perform multi-modal fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information; perform preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value; and determine a video delivery list based on the user preference evaluation value, playing the corresponding videos to the user according to the video delivery list. By performing multi-modal fusion on the user's multi-dimensional characteristic information, the application takes the user's multi-dimensional historical viewing information into account when determining the target fusion characteristic information, and performs preference evaluation on that information with the pre-trained preference prediction model to generate a corresponding video delivery list, thereby realizing intelligent video delivery and playing and improving delivery accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a device architecture of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a flowchart of a first embodiment of the intelligent delivery video playing method of the present application;
fig. 3 is a schematic block diagram of the intelligent video playing device according to the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present application.
The terminal of the embodiment of the application may be a PC, or a mobile terminal device with a display function, such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, and the like.
As shown in fig. 1, the terminal may include a processor 1001 (e.g., a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a non-volatile memory such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The sensors include, for example, light sensors and motion sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display screen according to the brightness of the ambient light, and a proximity sensor, which turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when the mobile terminal is stationary; it can be used for applications that recognize the posture of the mobile terminal (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an intelligent video delivery and playing program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to invoke the smart delivery video playback program stored in the memory 1005.
Referring to fig. 2, an embodiment of the present application provides an intelligent delivery video playing method, which includes:
step S100, obtaining multi-dimensional characteristic information of a user;
step S200, performing multi-mode fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information;
step S300, carrying out preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
step S400, a video delivery list is determined based on the user preference evaluation value, and corresponding videos are played to the user according to the video delivery list.
In this embodiment, the application scenario aimed at is:
as an example, a scenario in which a smart delivery video plays may be that a user uses a smart terminal to perform long-time multi-video viewing on a video platform. In the related art, a big data algorithm is adopted to push video conforming to user images to users according to video watched by historical users, so that the users can watch corresponding video on a video platform for a long time. However, the method only considers the dimension of the historical viewing record, and video delivery is carried out on the user, so that the accuracy of video delivery is low. Aiming at the scene, the intelligent video playing method of the embodiment carries out multi-mode fusion on the multi-dimensional characteristic information of the user, considers the multi-dimensional historical video information of the user, determines target fusion characteristic information, carries out preference evaluation on the target fusion characteristic information according to the preference prediction model which is completed through pre-training, and generates a corresponding video playing list, so that intelligent video playing is realized, and the accuracy of video playing is improved.
As an example, the application scenario of intelligent video playing is not only long-time multi-video watching on the video platform by using the intelligent terminal for the above-mentioned user, but also various intelligent video playing scenarios, which are not limited herein.
The present embodiment aims at: and the accuracy of video delivery is improved.
In this embodiment, the intelligent delivery video playing method is applied to the intelligent delivery video playing device.
The method comprises the following specific steps:
step S100, obtaining multi-dimensional characteristic information of a user;
in this embodiment, the multi-dimensional feature information includes, but is not limited to, the viewing frequency, viewing duration, video type, and viewing time point of historical videos. The video type of a historical video refers to the type of video the user watches on the video platform; it may be determined according to the tag set of the platform, and a video tag may be a broad tag type, such as a game tag, a sports tag, a music tag, a food tag, or a cartoon tag, or a narrow tag type, such as a football tag, a basketball tag, or a tennis tag, which is not specifically limited herein. The viewing frequency of a historical video refers to how often, and how many times, the user has watched the video on the platform; for example, user A has watched historical video B 10 times. The viewing duration of a historical video refers to how long the user watched it on the platform; for example, video B is 10 minutes long in total, and user A watched 3 minutes of it. The viewing time point of a historical video refers to the time point at which the user watched each video on the platform, and is used to analyze the user profile of the content the user watches in different time periods.
In this embodiment, the device may obtain the user's multi-dimensional feature information by receiving it from the user, or by reading the relevant multi-dimensional feature information from the user history database stored on the video platform, which is not specifically limited herein.
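The four dimensions named in this embodiment can be represented as a simple record, and acquisition from a history store sketched as below. The field names and the plain-dict "database" are illustrative assumptions, not structures disclosed by the patent.

```python
from dataclasses import dataclass

@dataclass
class MultiDimFeatures:
    """Illustrative container for the four feature dimensions;
    the field names are assumptions, not taken from the patent."""
    viewing_frequency: int    # e.g. user A watched video B 10 times
    viewing_duration: float   # minutes watched, e.g. 3 of a 10-minute video
    video_type: str           # platform tag, e.g. "football" or "music"
    viewing_time_point: int   # hour of day the viewing occurred (0-23)

def load_user_features(history_db: dict, user_id: str) -> list[MultiDimFeatures]:
    """Read the user's multi-dimensional feature records from a history
    store (a plain dict stands in for the platform's user history database)."""
    return [MultiDimFeatures(**rec) for rec in history_db.get(user_id, [])]
```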
Step S200, performing multi-mode fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information;
in this embodiment, the device performs multi-modal fusion processing on the multi-dimensional feature information based on the preset fusion model to obtain the target fusion feature information. The fusion model may be trained with a reinforcement learning method, or with a supervised or unsupervised deep neural network learning method, which is not specifically limited herein. The fusion model may be trained independently, by reinforcement learning or in a supervised or unsupervised manner, on multi-dimensional feature samples (training samples); or it may be obtained by joint training with a decision model on the multi-dimensional feature samples. Specifically, the training material (multi-dimensional feature samples) of the fusion model is derived from the historical viewing record data of many users, which includes the viewing frequency, viewing duration, video type, and viewing time point of historical videos. Based on the source characteristics of the information, the multi-modal fusion may adopt a hybrid of feature-level fusion and decision-level fusion to process the multi-dimensional information and obtain the target fusion feature information.
In this embodiment, in performing the multi-modal fusion processing on the multi-dimensional feature information based on the fusion model, the device determines a weight (degree of influence) for each dimension of feature information. For example, the more often videos of type A are watched, the greater the influence of type-A videos on the user preference evaluation value; likewise, the longer the viewing duration of type A, the greater its influence. If the viewing time points of video type A are distributed between 7 and 10 in the evening, type-A videos have a greater influence on the user preference evaluation value during that period. The application adds the time-point dimension in consideration of the fact that many users watch different types of videos in different time periods; for example, many users watch short news or entertainment videos only in the daytime and watch long sports videos, such as football or basketball, that require extended playing time in the evening. The intelligent video playing device can therefore deliver videos of the corresponding type according to the time point at which the current user is watching. For example, if the viewing time points of video type A in a user's historical record are distributed between 7 and 10 in the evening, then when the user opens the video platform between 7 and 10 in the evening, the device pushes type-A videos to the user, thereby improving delivery accuracy and, in turn, user stickiness.
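The per-dimension weighting idea above can be sketched as a small scoring routine. The normalization scheme and the particular weight formula are illustrative assumptions; the patent describes only that frequency, duration, and closeness to the current viewing time point increase a type's influence.

```python
def fuse_features(records, current_hour):
    """Compute a per-video-type fusion score from (type, frequency,
    duration, hour) records, weighting types that are watched more
    often, for longer, and at hours near the current hour more heavily."""
    scores = {}
    for video_type, freq, duration, hour in records:
        # circular distance between the historical hour and the current hour
        hour_gap = min(abs(hour - current_hour), 24 - abs(hour - current_hour))
        time_weight = 1.0 - hour_gap / 12.0  # 1.0 at the same hour, 0.0 opposite
        scores[video_type] = scores.get(video_type, 0.0) \
            + freq * duration * time_weight
    return scores
```

For instance, football videos watched frequently around 8 p.m. would outscore daytime news clips when the user opens the platform in the evening, matching the behavior the embodiment describes.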
Step S300, carrying out preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
in this embodiment, the device performs preference evaluation on the target fusion feature information based on the preset preference prediction model to obtain the user preference evaluation value. The preference prediction model is obtained by iteratively training the preset first model to be trained on the fusion feature samples and their user preference evaluation value labels until the precision condition is met. The user preference evaluation value reflects the user's degree of preference for different types of videos at different time points.
Step S400, a video delivery list is determined based on the user preference evaluation value, and corresponding videos are played to the user according to the video delivery list.
In this embodiment, the device determines a video delivery list based on the user preference evaluation value and plays corresponding videos to the user according to the video delivery list. The video delivery list is a list, pushed to the user by the device, containing a plurality of videos the user is interested in. The user may select videos on a designated interface, or relevant videos may be delivered automatically according to the selection method set by the platform; for example, the platform may automatically play the videos in the delivery list as the user pulls them down.
Specifically, the step S400 includes the following steps S410 to S440:
step S410, obtaining a playable video set;
in this embodiment, the device obtains a playable video set, where the playable video set may be a set of all playable videos under the video platform, or may be obtained by further user preference screening of the set of all playable videos under the video platform.
In this embodiment, the manner in which the device obtains the playable video set may be that the device receives all the playable video sets sent by the video platform, or may be that after the device receives all the playable video sets sent by the video platform, the device performs user preference screening on all the playable video sets to obtain the playable video set. For example, the video records of the user include football event videos, basketball event videos and table tennis event videos, the device receives all playable video sets sent by the video platform and then performs preference screening, and football event videos, basketball event videos and table tennis event videos in all playable video sets are reserved as playable video sets.
Step S420, carrying out preference evaluation calculation on each video in the playable video set to obtain a video preference evaluation value of each video in the playable video set;
In this embodiment, the device performs preference evaluation calculation on each video in the playable video set to obtain a video preference evaluation value of each video in the playable video set. A larger video preference evaluation value indicates that the video is closer to the user's preference, and a smaller value indicates that it is further from the user's preference; that is, by analyzing the user characteristics, the device recommends videos similar to the user's preference, thereby improving the user experience.
Step S430, comparing the video preference evaluation value with the user preference evaluation value to obtain a comparison result;
in this embodiment, the device compares the video preference evaluation value with the user preference evaluation value to obtain a comparison result, where the comparison result includes a target video whose video preference evaluation value is greater than or equal to the user preference evaluation value, or a target video whose video preference evaluation value is less than the user preference evaluation value, where the target video whose video preference evaluation value is greater than or equal to the user preference evaluation value indicates that the video meets user preferences, and the higher the video preference evaluation value, the more the video meets user preferences.
Step S440, selecting a target video with the video preference evaluation value greater than or equal to the user preference evaluation value in the comparison result, and forming a video delivery list based on the target video.
In this embodiment, the device selects the target videos whose video preference evaluation value is greater than or equal to the user preference evaluation value in the comparison result, and forms a video delivery list based on these target videos. The video delivery list may be ranked by the video preference evaluation value: the higher a target video's preference evaluation value, the higher it ranks, and the lower the value, the later it appears in the list.
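Steps S420 to S440 can be sketched end to end as follows. The scoring function stands in for the trained preference prediction model, and all names and values are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of steps S420-S440: score every playable video,
# keep those scoring at or above the user preference evaluation value,
# and rank the delivery list by descending score. score_fn is a
# stand-in for the trained preference prediction model.

def build_delivery_list(playable_videos, score_fn, user_pref_value):
    scored = [(score_fn(v), v) for v in playable_videos]         # step S420
    kept = [(s, v) for s, v in scored if s >= user_pref_value]   # steps S430-S440
    kept.sort(key=lambda sv: sv[0], reverse=True)                # higher score ranks first
    return [v for _, v in kept]

videos = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}, {"id": 3, "score": 0.7}]
delivery = build_delivery_list(videos, lambda v: v["score"], user_pref_value=0.6)
# delivery keeps video 1 then video 3; video 2 falls below the threshold
```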
Compared with the prior art, which considers only the single dimension of historical viewing type when playing videos to the user and therefore delivers videos with low accuracy, the intelligent video playing method, device, equipment and storage medium of the present application: acquire multi-dimensional characteristic information of a user; perform multi-modal fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information; perform preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value; and determine a video delivery list based on the user preference evaluation value and play the corresponding videos to the user according to the video delivery list. In the present application, multi-modal fusion is performed on the multi-dimensional characteristic information of the user, the multi-dimensional historical video information of the user is taken into account to determine the target fusion characteristic information, and preference evaluation is performed on the target fusion characteristic information by the pre-trained preference prediction model, so that a corresponding video delivery list is generated, intelligent video delivery and playing are realized, and the accuracy of video delivery is improved.
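The four-step flow summarized above (acquire, fuse, evaluate, deliver) can be condensed into a minimal sketch. The fusion and preference models are stub callables here, an assumption made so the example runs; real implementations would be the trained networks described in the embodiments.

```python
# Hedged sketch of the full claimed flow: acquire multi-dimensional
# features, fuse them, evaluate user preference, and build the
# delivery list. The fusion and preference models are stub callables.

def play_intelligently(user_features, fusion_model, preference_model,
                       playable_videos, video_scores):
    fused = fusion_model(user_features)       # multi-modal fusion
    user_pref = preference_model(fused)       # user preference evaluation value
    kept = [v for v in playable_videos if video_scores[v] >= user_pref]
    return sorted(kept, key=lambda v: video_scores[v], reverse=True)

features = {"watch_frequency": 5, "watch_duration_min": 30}
fusion_stub = lambda f: sum(f.values()) / 100.0   # stand-in fusion model
preference_stub = lambda fused: 0.3               # stand-in prediction model
scores = {"a": 0.8, "b": 0.2, "c": 0.5}
playlist = play_intelligently(features, fusion_stub, preference_stub,
                              list(scores), scores)
# playlist delivers "a" first, then "c"
```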
Based on the first embodiment, the present application further provides another embodiment, and the intelligent delivery video playing method includes:
before step S100, the step of acquiring the multi-dimensional feature information of the user, the method includes the following steps A100-A200:
step A100, acquiring a fusion feature sample and a user preference value label of the fusion feature sample;
in this embodiment, the historical video recording data of the user includes a fusion feature sample and a user preference evaluation value label of the fusion feature sample, where the historical video recording data of the user is data within a preset time period before the current time; the preset time period may be the past year or the past six months, which is not specifically limited herein.
In this embodiment, the fused feature sample is a feature sample for training, and includes a certain amount of feature information.
In this embodiment, the user preference value label of the fused feature sample is a user preference value corresponding to the fused feature sample.
And step A200, performing iterative training on a preset first model to be trained based on the fusion feature sample and the user preference value label of the fusion feature sample to obtain a preference prediction model meeting the accuracy condition.
In this embodiment, the device performs iterative training on a preset first model to be trained based on the fused feature sample and the user preference evaluation value label of the fused feature sample to obtain a preference prediction model meeting the precision condition, where the first model to be trained is a preset initial model with the basic capability of processing the fused feature sample, differing from the preference prediction model only in precision.
Specifically, the step A200 includes the following steps A210-A240:
step A210, inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference value;
in this embodiment, the device inputs the fused feature sample to a preset first model to be trained to obtain a predicted user preference value, where the predicted user preference value is obtained by performing prediction analysis on the model in training.
Step A220, performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
in this embodiment, the device performs difference calculation on the predicted user preference value and the user preference value label of the fused feature sample to obtain an error result, that is, verifies whether the result obtained by the model in training is consistent with the known result, and performs difference calculation between the results to obtain the error result.
Step A230, judging whether the error result meets an error standard indicated by a preset error threshold range or not based on the error result;
in this embodiment, since the result after model training and the actual result have errors, the error result is allowed to be within the preset error threshold range, so as to further determine whether the error result meets the error standard indicated by the preset error threshold range.
And step A240, if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference value, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a preference prediction model meeting the accuracy condition.
In this embodiment, if the error result does not meet the error standard indicated by the preset error threshold range, it indicates that the model's error in this round of training is too large, and the device returns to the step of inputting the fusion feature sample into the preset first model to be trained to obtain a predicted user preference evaluation value, and stops training once the error result meets the error standard indicated by the preset error threshold range, so as to obtain a preference prediction model meeting the precision condition, thereby improving the accuracy of model prediction.
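Steps A210 to A240 describe a standard train-until-error-threshold loop. A minimal sketch follows, assuming a one-parameter linear model and mean-squared error in place of the unspecified first model to be trained; these choices are illustrative assumptions, not the patent's model.

```python
# Minimal sketch of the iterative training loop in steps A210-A240,
# with a one-parameter linear model and mean-squared error standing in
# for the unspecified first model to be trained and its error standard.

def train_until_threshold(samples, labels, error_threshold=1e-4,
                          lr=0.1, max_iters=10_000):
    w = 0.0
    error = float("inf")
    for _ in range(max_iters):
        preds = [w * x for x in samples]                          # step A210
        error = sum((p - y) ** 2
                    for p, y in zip(preds, labels)) / len(labels)  # step A220
        if error <= error_threshold:                              # step A230
            break                                                 # precision condition met
        grad = sum(2 * (p - y) * x
                   for p, x, y in zip(preds, samples, labels)) / len(labels)
        w -= lr * grad                                            # step A240: train again
    return w, error

# Labels follow y = 0.5 * x, so training should recover w close to 0.5
xs = [1.0, 2.0, 3.0]
ys = [0.5 * x for x in xs]
w, err = train_until_threshold(xs, ys)
```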
Based on the first embodiment and the second embodiment, the present application further provides another embodiment, and the intelligent delivery video playing method includes:
before the step S100, the step of acquiring the multi-dimensional feature information of the user, the method includes the following steps B100-B200:
step B100, obtaining a multi-dimensional characteristic sample;
in this embodiment, the apparatus obtains a multi-dimensional feature sample, where the multi-dimensional feature sample is extracted from historical video recording data of a user, and the historical video recording data of the user is data within a preset time period before the current time.
Step B200, based on the multi-dimensional characteristic sample, carrying out combined training on a preset second model to be trained and a preset first model to be trained to obtain a fusion model and a preference prediction model which meet the precision condition;
the second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model.
In this embodiment, the device performs joint training on a preset second model to be trained and a preset first model to be trained based on the multi-dimensional feature sample to obtain a fusion model and a preference prediction model meeting the precision condition, where the second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model. Joint training means training the fusion model and the preference prediction model together, so that the fusion feature information and the user preference evaluation value are predicted more accurately, which further improves the accuracy of video delivery.
Specifically, the step B200 includes the following steps B210-B250:
step B210, inputting the multi-dimensional feature sample into a preset second model to be trained to obtain prediction fusion feature information;
in this embodiment, the device inputs the multi-dimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information, where the second model to be trained is a preset initial model with the basic capability of processing the multi-dimensional feature sample, differing from the fusion model only in precision.
Step B220, inputting the prediction fusion characteristic information into a preset first model to be trained to obtain a predicted user preference value;
in this embodiment, referring to the above step A210, the description is omitted here.
Step B230, performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
step B240, based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
in this embodiment, referring to the above steps A220-A230, the description is omitted here.
And step B250, if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the multi-dimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a fusion model and a preference prediction model meeting the accuracy condition.
In this embodiment, if the error result does not meet the error standard indicated by the preset error threshold range, the device returns to the step of inputting the multi-dimensional feature sample into the preset second model to be trained to obtain predicted fusion feature information, and stops training once the error result meets the error standard indicated by the preset error threshold range, so as to obtain a fusion model and a preference prediction model meeting the precision condition. That is, preference evaluation is performed on the target fusion feature information by the pre-trained preference prediction model to generate a corresponding video delivery list, realizing intelligent video delivery and improving the accuracy of video delivery.
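Steps B210 to B250 differ from the single-model loop in that one error result updates both models in each iteration. A minimal sketch follows, assuming each model is a single scalar weight; this is an illustrative stand-in, not the patent's networks.

```python
# Minimal sketch of the joint training in steps B210-B250: one error
# result updates both the second model to be trained (fusion) and the
# first model to be trained (preference). Each "model" is a single
# scalar weight here, an illustrative stand-in for the real networks.

def joint_train(samples, labels, error_threshold=1e-6,
                lr=0.05, max_iters=20_000):
    w_fuse, w_pref = 0.1, 0.1
    error = float("inf")
    for _ in range(max_iters):
        fused = [w_fuse * x for x in samples]                      # step B210
        preds = [w_pref * f for f in fused]                        # step B220
        error = sum((p - y) ** 2
                    for p, y in zip(preds, labels)) / len(labels)  # step B230
        if error <= error_threshold:                               # step B240
            break                                                  # both models meet the condition
        # step B250: update both models from the same error and iterate
        g = [2 * (p - y) / len(labels) for p, y in zip(preds, labels)]
        g_fuse = sum(gi * w_pref * x for gi, x in zip(g, samples))
        g_pref = sum(gi * w_fuse * x for gi, x in zip(g, samples))
        w_fuse -= lr * g_fuse
        w_pref -= lr * g_pref
    return w_fuse, w_pref, error

# Labels follow y = 0.25 * x, so the product w_fuse * w_pref should
# approach 0.25 once training stops.
xs = [1.0, 2.0]
ys = [0.25 * x for x in xs]
wf, wp, err = joint_train(xs, ys)
```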
The present application also provides an intelligent delivery video playing device, referring to fig. 3, the intelligent delivery video playing device includes:
the acquisition module is used for acquiring the multidimensional characteristic information of the user;
the fusion module is used for carrying out multi-mode fusion processing on the multi-dimensional characteristic information based on a preset fusion model to obtain target fusion characteristic information;
the evaluation module is used for carrying out preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
And the playing module is used for determining a video delivery list based on the user preference evaluation value and playing corresponding videos to the user according to the video delivery list.
Optionally, the playing module includes:
the video acquisition module is used for acquiring a playable video set;
the computing module is used for carrying out preference evaluation computation on each video in the playable video set to obtain a video preference evaluation value of each video in the playable video set;
the comparison module is used for comparing the video preference evaluation value with the user preference evaluation value to obtain a comparison result;
and the selection module is used for selecting target videos with the video preference evaluation value larger than or equal to the user preference evaluation value in the comparison result, and forming a video delivery list based on the target videos.
Optionally, the intelligent video playing device further includes:
the sample acquisition module is used for acquiring a fusion characteristic sample and a user preference value label of the fusion characteristic sample;
and the training module is used for carrying out iterative training on a preset first model to be trained based on the fusion characteristic sample and the user preference evaluation value label of the fusion characteristic sample to obtain a preference prediction model meeting the precision condition.
Optionally, the training module includes:
the first prediction module is used for inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference evaluation value;
the first difference calculation module is used for carrying out difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
the first judging module is used for judging whether the error result meets an error standard indicated by a preset error threshold range or not based on the error result;
and the first iterative training module is used for returning to the step of inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference evaluation value if the error result does not meet the error standard indicated by the preset error threshold range, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a preference prediction model meeting the precision condition.
Optionally, the intelligent video playing device further includes:
the characteristic sample acquisition module is used for acquiring a multidimensional characteristic sample;
the combined training module is used for carrying out combined training on a preset second model to be trained and a preset first model to be trained based on the multidimensional characteristic sample to obtain a fusion model and a preference prediction model which meet the precision condition;
The second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model.
Optionally, the joint training module includes:
the second prediction module is used for inputting the multi-dimensional feature sample into a preset second model to be trained to obtain prediction fusion feature information;
the third prediction module is used for inputting the prediction fusion characteristic information into a preset first model to be trained to obtain a predicted user preference evaluation value;
the second difference calculation module is used for carrying out difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
the second judging module is used for judging whether the error result meets an error standard indicated by a preset error threshold range or not based on the error result;
and the second iterative training module is used for returning to the step of inputting the multi-dimensional characteristic sample into a preset second model to be trained to obtain predicted fusion characteristic information if the error result does not meet the error standard indicated by the preset error threshold range, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a fusion model and a preference prediction model meeting the precision condition.
The specific implementation of the intelligent video playing device is basically the same as the above embodiments of the intelligent video playing method, and will not be described herein.
Referring to fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the smart delivery video playing device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The user interface may include a Display screen (Display) and an input sub-module such as a Keyboard (Keyboard), and the optional user interface may also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
It will be appreciated by those skilled in the art that the smart delivery video playback device structure shown in FIG. 1 does not constitute a limitation of the smart delivery video playback device, and may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, and an intelligent delivery video play program may be included in the memory 1005 as one type of storage medium. The operating system is a program for managing and controlling hardware and software resources of the intelligent video playing device, and supports the operation of the intelligent video playing program and other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in the intelligent delivery video playing system.
In the intelligent delivery video playing device shown in fig. 1, the processor 1001 is configured to execute an intelligent delivery video playing program stored in the memory 1005, to implement the steps of the intelligent delivery video playing method described in any one of the above.
The specific implementation of the intelligent video playing device is basically the same as the above embodiments of the intelligent video playing method, and will not be described herein.
The application also provides a storage medium, where the storage medium stores a program for implementing the intelligent video playing method, and the program, when executed by a processor, implements the following steps of the intelligent video playing method:
acquiring multi-dimensional characteristic information of a user;
based on a preset fusion model, performing multi-mode fusion processing on the multi-dimensional characteristic information to obtain target fusion characteristic information;
performing preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
and determining a video delivery list based on the user preference evaluation value, and playing corresponding videos to the user according to the video delivery list.
Optionally, the multi-dimensional feature information includes a viewing frequency, a viewing duration, a video type, and a viewing time point of the historical video.
Optionally, the step of determining a video delivery list based on the user preference value includes:
acquiring a playable video set;
performing preference evaluation calculation on each video in the playable video set to obtain a video preference evaluation value of each video in the playable video set;
Comparing the video preference evaluation value with the user preference evaluation value to obtain a comparison result;
and selecting target videos with the video preference evaluation value larger than or equal to the user preference evaluation value from the comparison result, and forming a video delivery list based on the target videos.
Optionally, before the step of acquiring the multi-dimensional feature information of the user, the method includes:
acquiring a fusion characteristic sample and a user preference value label of the fusion characteristic sample;
and performing iterative training on a preset first model to be trained based on the fusion feature sample and the user preference evaluation value label of the fusion feature sample to obtain a preference prediction model meeting the precision condition.
Optionally, the step of iteratively training a preset first model to be trained based on the fused feature sample and the user preference value label of the fused feature sample to obtain a preference prediction model meeting the accuracy condition includes:
inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference evaluation value;
performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
Based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference value, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a preference prediction model meeting the precision condition.
Optionally, before the step of acquiring the multi-dimensional feature information of the user, the method includes:
acquiring a multi-dimensional characteristic sample;
based on the multi-dimensional characteristic sample, carrying out combined training on a preset second model to be trained and a preset first model to be trained to obtain a fusion model and a preference prediction model which meet the precision condition;
the second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model.
Optionally, the step of performing joint training on the preset second model to be trained and the preset first model to be trained based on the multidimensional feature sample to obtain a fusion model and a preference prediction model which meet the precision condition includes:
Inputting the multi-dimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information;
inputting the prediction fusion characteristic information into a preset first model to be trained to obtain a predicted user preference value;
performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the multidimensional feature sample into a preset second model to be trained to obtain predicted fusion feature information, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a fusion model and a preference prediction model which meet the precision condition.
The specific implementation manner of the storage medium of the present application is basically the same as the above embodiments of the intelligent video playing method, and will not be repeated here.
The application also provides a computer program product, comprising a computer program which, when executed by a processor, realizes the steps of the intelligent video playing method.
The specific implementation manner of the computer program product of the present application is basically the same as the above embodiments of the intelligent delivery video playing method, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (10)
1. The intelligent video playing method is characterized by comprising the following steps of:
acquiring multi-dimensional characteristic information of a user;
based on a preset fusion model, performing multi-mode fusion processing on the multi-dimensional characteristic information to obtain target fusion characteristic information;
performing preference evaluation on the target fusion characteristic information based on a preset preference prediction model to obtain a user preference evaluation value;
and determining a video delivery list based on the user preference evaluation value, and playing corresponding videos to the user according to the video delivery list.
2. The intelligent delivery video playing method according to claim 1, wherein the multi-dimensional characteristic information comprises a viewing frequency, a viewing duration, a video type and a viewing time point of the historical video.
3. The intelligent delivery video playing method as claimed in claim 1, wherein the step of determining a video delivery list based on the user preference value comprises:
Acquiring a playable video set;
performing preference evaluation calculation on each video in the playable video set to obtain a video preference evaluation value of each video in the playable video set;
comparing the video preference evaluation value with the user preference evaluation value to obtain a comparison result;
and selecting target videos with the video preference evaluation value larger than or equal to the user preference evaluation value from the comparison result, and forming a video delivery list based on the target videos.
4. The intelligent delivery video playing method according to claim 1, wherein before the step of obtaining the multi-dimensional feature information of the user, the method comprises:
acquiring a fusion characteristic sample and a user preference value label of the fusion characteristic sample;
and performing iterative training on a preset first model to be trained based on the fusion feature sample and the user preference evaluation value label of the fusion feature sample to obtain a preference prediction model meeting the precision condition.
5. The intelligent video playing method according to claim 4, wherein the step of iteratively training a preset first model to be trained based on the fused feature sample and the user preference value label of the fused feature sample to obtain a preference prediction model meeting the accuracy condition comprises the steps of:
Inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference evaluation value;
performing difference calculation on the predicted user preference value and the user preference value label of the fusion characteristic sample to obtain an error result;
based on the error result, judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the fusion characteristic sample into a preset first model to be trained to obtain a predicted user preference value, and stopping training until the error result meets the error standard indicated by the preset error threshold range to obtain a preference prediction model meeting the precision condition.
6. The intelligent video playing method according to claim 1, wherein before the step of acquiring the multi-dimensional feature information of the user, the method comprises:
acquiring a multi-dimensional feature sample;
performing joint training on a preset second model to be trained and a preset first model to be trained based on the multi-dimensional feature sample to obtain a fusion model and a preference prediction model meeting the accuracy condition;
wherein the second model to be trained is an initial training model of the fusion model, and the first model to be trained is an initial training model of the preference prediction model.
7. The intelligent video playing method according to claim 6, wherein the step of performing joint training on the preset second model to be trained and the preset first model to be trained based on the multi-dimensional feature sample to obtain a fusion model and a preference prediction model meeting the accuracy condition comprises:
inputting the multi-dimensional feature sample into the preset second model to be trained to obtain predicted fusion feature information;
inputting the predicted fusion feature information into the preset first model to be trained to obtain a predicted user preference evaluation value;
performing difference calculation on the predicted user preference evaluation value and a user preference evaluation value label of the multi-dimensional feature sample to obtain an error result;
judging whether the error result meets an error standard indicated by a preset error threshold range;
and if the error result does not meet the error standard indicated by the preset error threshold range, returning to the step of inputting the multi-dimensional feature sample into the preset second model to be trained to obtain predicted fusion feature information, and stopping training when the error result meets the error standard indicated by the preset error threshold range, so as to obtain a fusion model and a preference prediction model meeting the accuracy condition.
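The distinguishing point of claim 7 is that one error signal, computed after the sample has passed through both models in sequence, drives updates to both of them. A toy sketch, with each "model" reduced to a single scalar weight: the patent leaves the architectures unspecified, so everything below is assumed for illustration only.

```python
# Illustrative joint training: multi-dimensional sample -> second model
# (fusion) -> first model (preference prediction); one MSE error result
# updates BOTH weights via the chain rule.

def joint_train(samples, labels, threshold=1e-6, lr=0.05, max_iters=20_000):
    w_fuse, w_pred = 0.5, 0.5  # second and first models to be trained
    n = len(samples)
    for _ in range(max_iters):
        fused = [w_fuse * x for x in samples]   # predicted fusion features
        preds = [w_pred * f for f in fused]     # predicted preference values
        errs = [p - y for p, y in zip(preds, labels)]
        mse = sum(e * e for e in errs) / n
        if mse <= threshold:                    # accuracy condition met
            break
        # Gradients flow through the composed models (chain rule).
        g_fuse = 2 * sum(e * w_pred * x for e, x in zip(errs, samples)) / n
        g_pred = 2 * sum(e * f for e, f in zip(errs, fused)) / n
        w_fuse -= lr * g_fuse
        w_pred -= lr * g_pred
    return w_fuse, w_pred

# Fit the toy relation y = 2x: the PRODUCT of the two weights must reach 2.
wf, wp = joint_train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The individual weights are not identifiable (only their product is), which is exactly why the two models must be trained jointly rather than separately.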
8. An intelligent video playing device, characterized in that the intelligent video playing device comprises:
an acquisition module, configured to acquire multi-dimensional feature information of a user;
a fusion module, configured to perform multi-modal fusion processing on the multi-dimensional feature information based on a preset fusion model to obtain target fusion feature information;
an evaluation module, configured to perform preference evaluation on the target fusion feature information based on a preset preference prediction model to obtain a user preference evaluation value;
and a playing module, configured to determine a video play list based on the user preference evaluation value and play corresponding videos to the user according to the video play list.
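The four modules of claim 8 can be read as a simple pipeline: acquire → fuse → evaluate → play. A hypothetical sketch, with stand-in callables for the trained models; every name and data shape here is an assumption, not taken from the patent.

```python
# Hypothetical device with the four claim-8 modules as methods.

class IntelligentVideoPlayer:
    def __init__(self, fusion_model, preference_model, catalog):
        self.fusion_model = fusion_model          # backs the fusion module
        self.preference_model = preference_model  # backs the evaluation module
        self.catalog = catalog                    # video_id -> preference value

    def acquire(self, user):
        """Acquisition module: gather multi-dimensional feature information."""
        return user["features"]

    def fuse(self, features):
        """Fusion module: multi-modal fusion into target fusion features."""
        return self.fusion_model(features)

    def evaluate(self, fused):
        """Evaluation module: fused features -> user preference evaluation value."""
        return self.preference_model(fused)

    def play_list(self, user):
        """Playing module: build the play list from the evaluation value."""
        score = self.evaluate(self.fuse(self.acquire(user)))
        return sorted(
            (v for v, s in self.catalog.items() if s >= score),
            key=lambda v: self.catalog[v], reverse=True,
        )

player = IntelligentVideoPlayer(
    fusion_model=lambda feats: sum(feats) / len(feats),  # toy "fusion"
    preference_model=lambda fused: fused,                # toy "prediction"
    catalog={"a": 0.9, "b": 0.3, "c": 0.7},
)
result = player.play_list({"features": [0.4, 0.6]})
print(result)  # ['a', 'c']
```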
9. An intelligent video playing device, characterized in that the intelligent video playing device comprises: a memory, a processor, and a program stored on the memory for implementing the intelligent video playing method, wherein
the memory is configured to store the program for implementing the intelligent video playing method;
and the processor is configured to execute the program for implementing the intelligent video playing method, so as to implement the steps of the intelligent video playing method according to any one of claims 1 to 7.
10. A storage medium, characterized in that a program for implementing the intelligent video playing method is stored on the storage medium, and the program, when executed by a processor, implements the steps of the intelligent video playing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311013762.0A CN117156184B (en) | 2023-08-11 | 2023-08-11 | Intelligent video playing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117156184A true CN117156184A (en) | 2023-12-01 |
CN117156184B CN117156184B (en) | 2024-05-17 |
Family
ID=88911048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311013762.0A Active CN117156184B (en) | 2023-08-11 | 2023-08-11 | Intelligent video playing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117156184B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200288205A1 (en) * | 2019-05-27 | 2020-09-10 | Beijing Dajia Internet Information Technology Co., Ltd. | Method, apparatus, electronic device, and storage medium for recommending multimedia resource |
CN112287167A (en) * | 2020-10-29 | 2021-01-29 | 四川长虹电器股份有限公司 | Video recommendation recall method and device |
CN114491150A (en) * | 2022-03-28 | 2022-05-13 | 苏州浪潮智能科技有限公司 | Video recommendation method, system, device and computer readable storage medium |
CN115203471A (en) * | 2022-09-15 | 2022-10-18 | 山东宝盛鑫信息科技有限公司 | Attention mechanism-based multimode fusion video recommendation method |
CN115439770A (en) * | 2021-06-04 | 2022-12-06 | 腾讯科技(深圳)有限公司 | Content recall method, device, equipment and storage medium |
CN115510313A (en) * | 2021-06-23 | 2022-12-23 | 腾讯科技(深圳)有限公司 | Information recommendation method and device, storage medium and computer equipment |
CN115544299A (en) * | 2022-10-17 | 2022-12-30 | 上海幻电信息科技有限公司 | Video recommendation method and device |
CN115618054A (en) * | 2022-10-20 | 2023-01-17 | 上海幻电信息科技有限公司 | Video recommendation method and device |
CN116186326A (en) * | 2022-12-30 | 2023-05-30 | 微梦创科网络科技(中国)有限公司 | Video recommendation method, model training method, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107888981B (en) | Audio and video preloading method, device, equipment and storage medium | |
CN110209843B (en) | Multimedia resource playing method, device, equipment and storage medium | |
US11622141B2 (en) | Method and apparatus for recommending live streaming room | |
CN109740068B (en) | Media data recommendation method, device and storage medium | |
CN108304441B (en) | Network resource recommendation method and device, electronic equipment, server and storage medium | |
CN111341286B (en) | Screen display control method and device, storage medium and terminal | |
WO2020151547A1 (en) | Interaction control method for display page, and device | |
CN111258435B (en) | Comment method and device for multimedia resources, electronic equipment and storage medium | |
US20170169040A1 (en) | Method and electronic device for recommending video | |
CN109754316B (en) | Product recommendation method, product recommendation system and storage medium | |
CN113609392B (en) | Content recommendation method, content to be recommended determining method and related device | |
CN113962965B (en) | Image quality evaluation method, device, equipment and storage medium | |
CN112004117B (en) | Video playing method and device | |
CN113727169A (en) | Video playing method, device, equipment and storage medium | |
CN115145801B (en) | A/B test flow distribution method, device, equipment and storage medium | |
CN115834959B (en) | Video recommendation information determining method and device, electronic equipment and medium | |
CN112672208A (en) | Video playing method, device, electronic equipment, server and system | |
CN109040775A (en) | Video correlating method, device and computer readable storage medium | |
CN113656637B (en) | Video recommendation method and device, electronic equipment and storage medium | |
CN113821145A (en) | Page processing method, device and medium | |
CN117156184B (en) | Intelligent video playing method, device, equipment and storage medium | |
CN112256976B (en) | Matching method and related device | |
CN112650872A (en) | Dynamic picture playing method, device and equipment and computer readable storage medium | |
CN114398514A (en) | Video display method and device and electronic equipment | |
CN114969493A (en) | Content recommendation method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||