Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the rapid development of internet technology, people increasingly rely on the internet to obtain various kinds of information. Short videos, with their short duration and rich content, are particularly popular with users. However, as more and more people share short videos on the network, the number of short videos is growing rapidly, so how to better recommend short videos to users has become a popular research problem.
The inventor finds that, by collecting a user's viewing records for a plurality of videos, information such as the user's viewing habits and the videos the user is interested in can be obtained, and this information can serve as a basis for recommending videos of interest or concern to the user, thereby providing personalized video recommendation. However, this recommendation mode produces a single, repetitive recommendation result: the recommended videos differ little in content, so the user may not receive videos with novel content for a long time, the user's requirement for diversity in the recommended videos cannot be met, and the user experience is poor.
The inventor further finds in actual research that a video push queue can be broken up according to a formulated rule, for example, by breaking up the original video recommendation result through manual labeling, rule formulation and the like, so that a video recommendation result with a certain degree of diversity can be obtained again.
However, after the video recommendation queue is broken up by such an established rule, the obtained video recommendation result is strongly coupled with that rule, and dynamic diversification cannot be realized, so the diversity of the video recommendation result is limited to a certain extent, and the user's requirement for diversity in the recommendation result cannot be met.
Therefore, in view of the above problems, the inventor proposes a video processing method, an apparatus, a server and a storage medium in the embodiments of the present application. A preset number of target videos can be selected from a plurality of videos based on the similarity among the plurality of videos, the score value of each of the plurality of videos and a preset algorithm. The similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in the remaining video set, where the remaining video set is composed of the videos, other than the target videos, in the plurality of videos. Because in the screening performed by the preset algorithm a greater similarity between two videos corresponds to a greater Euclidean distance, and thus to a greater difference between the two videos, a large difference among the target videos can be ensured. A target video set is composed of the preset number of target videos and output to the user, so that diversity of the video recommendation result is realized and the user experience is improved.
An application environment of the video processing method provided by the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 shows a schematic diagram of an application environment of the video processing method. The application environment may include a server 1, a video library 2 and a user terminal 3, where the server 1 may be communicatively connected to the video library 2 and the user terminal 3, respectively. The video library 2 includes a large number of videos and can provide video resources for the server 1. The server 1 may be configured to process, for example filter and sort, the video resources provided by the video library 2, and then transmit the processed videos to the user terminal 3. The user terminal 3 can receive and play the processed videos transmitted by the server 1. Alternatively, the user terminal 3 may be an electronic device having communication, display and audio playing functions, such as a smart phone, a personal computer or a smart projection device.
Alternatively, the video library 2 may be configured in the server 1; when the server 1 needs to acquire a video, the video library can be called directly from the local storage of the server 1, which saves acquisition time. The video library 2 may also be configured in a cloud server in communication with the server 1; when the server 1 needs to obtain a video, it may send an acquisition request to the cloud server, thereby relieving the storage pressure on the server 1.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video processing method according to an embodiment of the present application.
The video processing method comprises the following steps:
S110, obtaining the score value of each video in the plurality of videos and the similarity between every two videos in the plurality of videos.
It should be noted that, if the plurality of videos are regarded as a video set, the score value of one video (e.g., video a) in the video set may refer to the similarity between video a and the video set. It is understood that, when calculating the similarity between video a and the video set, video a may be compared with each video in the video set to obtain a plurality of similarities, and the similarity between video a and the video set is then derived from these values; as an example, the greater the sum of the plurality of similarities, the greater the similarity between video a and the video set.
In some embodiments, the server 1 may obtain a video queue from the video library 2, where the video queue may include a plurality of videos. Each video may be labeled in advance according to the arrangement of the video queue, that is, a plurality of video sequence numbers are obtained, and each video sequence number corresponds to one video.
As an implementation manner, when obtaining the score value of each video in the plurality of videos, the score value corresponding to a video sequence number may be looked up in a pre-made score relation table. The score relation table may be obtained by scoring each video according to a certain rule, that is, the similarity between each video and the video queue is calculated according to a specified similarity algorithm and the characteristic parameters of the video, where the similarity algorithm may be a Euclidean distance algorithm. Optionally, the characteristic parameters may be one-dimensional or multi-dimensional. Optionally, the characteristic parameters of the video to be scored and the characteristic parameters of each video in the video queue may also be input into a pre-trained scoring model, so as to obtain the score value output by the scoring model for the video. Alternatively, the scoring model may be a neural network model.
It should be noted that, when a plurality of videos in one video queue are scored, the same set of scoring rules needs to be adopted.
As an embodiment, when obtaining the similarity between every two videos in the plurality of videos, one or more characteristic parameters of each video may first be obtained; for example, the play amount, the labeled type and the resolution of a video may be used as its characteristic parameters, which can be obtained from the records of various platforms on the network or from the attribute table of the video. The characteristic parameters of the two videos are then processed by a similarity algorithm, so as to obtain the similarity between the two videos. It should be noted that, when performing this calculation over the plurality of videos in the video queue, the same characteristic parameters and the same similarity calculation method must be used for every video. Optionally, the characteristic parameters of the two videos may be input into a pre-trained similarity generating model, so as to obtain the similarity between the two videos output by the model. The similarity generating model may be a neural network model.
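As an illustration of the characteristic-parameter comparison described above, the following sketch computes a pairwise similarity as a Euclidean distance; the specific parameters chosen and their numeric encoding are assumptions for illustration, not fixed by this embodiment:

```python
import math

def similarity(features_a, features_b):
    """Similarity between two videos, computed here as the Euclidean
    distance between their characteristic-parameter vectors; in this
    scheme a larger value corresponds to a larger difference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(features_a, features_b)))

# Hypothetical characteristic parameters: (play amount, resolution height, frame rate)
video_a = (1200.0, 1080.0, 30.0)
video_b = (900.0, 720.0, 24.0)
print(round(similarity(video_a, video_b), 1))  # → 468.7
```

In practice the parameters would be normalized to comparable scales before the distance is taken, since otherwise the play amount would dominate the result.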
It should be noted that the characteristic parameters and the similarity calculation method used when obtaining the score values of the videos are the same as those used when obtaining the similarity between every two videos in the plurality of videos.
S120, selecting a preset number of target videos from the plurality of videos based on the score values, the similarities and a preset algorithm, wherein the similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in a remaining video set, and the remaining video set is composed of the videos, other than the target videos, in the plurality of videos.
The preset algorithm is used for minimizing the similarity among the selected videos, so that the preset number of target videos selected from the plurality of videos differ from one another as much as possible.
In some embodiments, the video sequence corresponding to the video queue may be input into the preset algorithm, and the target videos are selected by the preset algorithm according to the score values and the similarities.
As an example, the preset algorithm may be a maximal marginal relevance (MMR) algorithm. Specifically, the MMR algorithm used in this embodiment may be as follows:
max[λ*score(i) - (1-λ)*max[sim(i,j)]];
where score(i) represents the score value of the ith video in the input sequence, sim(i,j) represents the similarity between the ith video and the jth video in the input sequence, and λ is a parameter for adjusting the weights of the score value and the similarity. Each round of input requires one iteration of the loop calculation.
It can be understood that, since the second term (1-λ)*max[sim(i,j)] in the MMR algorithm is preceded by a minus sign, when the video with the greatest similarity to the video with sequence number i is selected by the MMR algorithm, the difference between the selected video and the video with sequence number i is also greatest. It is also understood that the larger λ is, the larger the difference between the selected video and the video with sequence number i.
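For concreteness, one evaluation of the MMR expression above can be sketched as follows; λ = 0.5 and the numeric values are illustrative assumptions:

```python
def mmr_term(score_i, sims_to_selected, lam=0.5):
    """Evaluates lambda * score(i) - (1 - lambda) * max[sim(i, j)] for one
    candidate video i, given its similarities to the already selected videos."""
    return lam * score_i - (1 - lam) * max(sims_to_selected)

# Candidate with score 0.8 whose similarities to the selected videos are 0.2 and 0.5
print(mmr_term(0.8, [0.2, 0.5]))
```

The candidate with the largest such value in each round is the one taken next, which is how the minus sign on the second term trades recommendation quality against redundancy.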
As shown in fig. 3, as an example, in practical applications, the video queue obtained in S110 may be represented as a video set C, and the video set C is input into the MMR algorithm to output a video set R.
Specifically, the MMR algorithm screening may include the following steps:
Step 1, insert the first video in the video set C into the video set R, and delete that video from the video set C.
Step 2, traverse each video in the current video set C, calculate with the MMR algorithm the video in the current video set C that has the largest similarity score with respect to the video set R, and mark it as maxRC.
Step 3, insert the video maxRC into the video set R, and delete the video maxRC from the video set C.
Step 4, judge whether the number of videos in the video set R has reached the preset number N.
Step 5, when the number of videos in the video set R is less than N, return to step 2 and re-execute steps 2 to 4.
Step 6, when the number of videos in the video set R reaches N, stop adding videos to the video set R and output the video set R. At this time, the videos in the video set R are the target videos, and N is the preset number, where N is a positive integer.
It can be understood that, in each round of iterative MMR calculation over the video set C, the extracted video maxRC is the video with the greatest similarity score with respect to the current video set R, that is, the video that is least alike in content to all videos in the current video set R; the video maxRC is then added to the video set R, and the next round of iteration continues. Because the video added to the video set R each time is the one least alike to the videos already in R, the N videos with the maximum pairwise similarity, namely the video set R, are selected from the video set C.
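The screening steps above can be sketched as a greedy loop; the containers used for the score values and similarities, and the sample values, are assumptions for illustration:

```python
def mmr_select(videos, score, sim, n, lam=0.5):
    """Greedy MMR screening (steps 1-6): move videos from candidate set C
    into result set R until R holds the preset number N of videos."""
    C = list(videos)
    R = [C.pop(0)]  # step 1: seed R with the first video of C
    while len(R) < n and C:  # steps 4-5: loop until R reaches N videos
        # step 2: the candidate maximizing the MMR value w.r.t. R is maxRC
        max_rc = max(C, key=lambda i: lam * score[i]
                     - (1 - lam) * max(sim[i][j] for j in R))
        R.append(max_rc)   # step 3: insert maxRC into R...
        C.remove(max_rc)   # ...and delete it from C
    return R

score = {"a": 1.0, "b": 0.7, "c": 0.9, "d": 0.5}
sim = {"a": {"b": 0.8, "c": 0.1, "d": 0.6},
       "b": {"a": 0.8, "c": 0.3, "d": 0.4},
       "c": {"a": 0.1, "b": 0.3, "d": 0.7},
       "d": {"a": 0.6, "b": 0.4, "c": 0.7}}
print(mmr_select(["a", "b", "c", "d"], score, sim, 3))  # → ['a', 'c', 'b']
```

With these sample values the loop first takes video c (far from a), then video b, leaving the most redundant candidate d out of the result set.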
S130, obtaining a target video set based on the preset number of target videos, and outputting the target video set.
As a manner, the preset number of target videos obtained in S120 may be arranged in a certain order to form a target video set; the server 1 then outputs the target video set and transmits it to the user terminal 3 through the network, and the target video set is finally presented on the display interface of the user terminal 3.
In this embodiment, when video recommendation is performed, the score value of each video in a plurality of videos and the similarity between every two videos in the plurality of videos are obtained; then a preset number of target videos are selected from the plurality of videos based on the score values, the similarities and a preset algorithm, where the similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in the remaining video set (composed of the videos other than the target videos), so that a large difference among the target videos is ensured; finally, a target video set is obtained based on the preset number of target videos and output. Therefore, the problem of repetitive, homogeneous recommended videos can be solved, diverse target videos are recommended to the user as a set, and the user experience is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating a video processing method according to another embodiment of the present application, where the method may include:
S210, obtaining the score value of each video in the plurality of videos and the similarity between every two videos in the plurality of videos.
As shown in fig. 5, in some embodiments, S210 may include:
S211A, obtaining the play amount of each video in the plurality of videos within a preset time period.
As an example, the server 1 may call the play amount data of each video in the video queue within a preset time period (e.g., a month, a week, or a day) from one or more video playing platforms.
S212A, determining the score value of each video based on its play amount.
In some embodiments, the play amount of a video may be the only scoring item, and the score value of a video may be calculated directly from its play amount. For example, when calculating the score value of a video to be scored, the similarity between the video to be scored and each video in the video queue may be calculated from the play amounts using a similarity calculation method (such as Euclidean distance or cosine similarity), so as to obtain a plurality of similarities, where the closer the play amounts of two videos, the greater the similarity between them. The similarity between the video to be scored and the video queue is then calculated from the similarities between the video to be scored and each video in the queue; optionally, it may be determined from the sum or the average of the plurality of similarities, and the resulting similarity between the video to be scored and the video queue is taken as the score value of the video to be scored.
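One possible reading of this play-amount scoring can be sketched as follows; the inverse-gap similarity function is an assumption, since the embodiment leaves the exact similarity calculation open:

```python
def play_amount_scores(play_amounts):
    """Score each video by summing its play-amount similarity to every other
    video in the queue; similarity is taken here as the inverse of the
    play-amount gap, so closer play amounts give a higher similarity."""
    scores = []
    for i, p in enumerate(play_amounts):
        sims = [1.0 / (1.0 + abs(p - q))
                for j, q in enumerate(play_amounts) if j != i]
        scores.append(sum(sims))
    return scores

# Two videos with identical play amounts score higher than the outlier
print(play_amount_scores([10, 10, 20]))
```

Using the average instead of the sum, as the text permits, only rescales the scores and does not change their ordering.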
Because the play amount of a video can accurately reflect users' demand for it, in this embodiment the play amount of each video within a preset time period is obtained and the score value of each video is determined based on its play amount, so that the target videos obtained by using the score values as the screening basis better meet user demand, which effectively improves the user experience.
As shown in fig. 6, in other embodiments, S210 may include:
S211B, obtaining attribute parameters corresponding to each of the plurality of videos, where the attribute parameters include at least one of resolution, frame rate, and video format.
In some embodiments, the server 1 may obtain the attribute parameters directly from the attribute list of the video. The attribute list records various attribute parameters of the video in advance, such as video size, resolution, frame rate, video format, video duration and the like.
S212B, determining a score value of each video based on the attribute parameter corresponding to each video.
In some embodiments, attribute parameters such as resolution, frame rate and video format may be used as scoring items; that is, the similarity between each video in the video queue and the video queue is determined from the resolution, frame rate and video format. As one mode, the resolution alone can be used as the scoring item. Taking a video to be scored in the video queue as an example, its resolution can be compared with the resolution of each video in the queue; the closer the resolutions, the higher the similarity, so a plurality of similarities between the video to be scored and the videos in the queue are obtained. The sum of these similarities can then be used as the score value of the video to be scored; optionally, their average value may be used instead. By taking each video in the queue in turn as the video to be scored, the score value of every video in the queue can be obtained. Similarly, the frame rate or the video format may be used as the scoring item according to the above method. For example, the closer the frame rate of the video to be scored is to the frame rates of the videos in the queue, the higher its score value. For another example, if the video format of the video to be scored is rmvb, then the more videos in the queue whose format is rmvb, the higher the similarity between the video to be scored and the queue, and the higher its score value.
Optionally, the score value of each video may be determined from the resolution, the frame rate and the video format simultaneously; when scoring, the weight assigned to each of the resolution, the frame rate and the video format may be set according to actual requirements.
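A sketch of such weighted attribute scoring is given below; the per-attribute similarity functions and the weight values are illustrative assumptions:

```python
def attribute_score(video, queue, weights):
    """Score a video against a video queue using resolution, frame rate and
    video format as weighted scoring items: numeric attributes use an
    inverse-gap similarity (closer values -> higher similarity), and the
    categorical format counts fully only on an exact match."""
    total = 0.0
    for other in queue:
        total += weights["resolution"] / (1.0 + abs(video["resolution"] - other["resolution"]))
        total += weights["frame_rate"] / (1.0 + abs(video["frame_rate"] - other["frame_rate"]))
        total += weights["format"] * (1.0 if video["format"] == other["format"] else 0.0)
    return total

v = {"resolution": 1080, "frame_rate": 30, "format": "mp4"}
queue = [{"resolution": 1080, "frame_rate": 30, "format": "mp4"},
         {"resolution": 720, "frame_rate": 24, "format": "rmvb"}]
print(attribute_score(v, queue, {"resolution": 0.5, "frame_rate": 0.3, "format": 0.2}))
```

The weights dictionary corresponds to the per-attribute scoring weights that the text says may be set according to actual requirements.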
Considering that the attribute parameters of a video can accurately reflect its quality, in this embodiment the attribute parameter corresponding to each video is obtained and the score value of each video is determined based on it, which ensures that the plurality of target videos obtained by using the score values as the screening basis exhibit more genuine differences.
In other embodiments, the score value of a video may be obtained by combining its play amount and its attribute parameters, that is, both the play amount and the attribute parameters are used as scoring items: the similarity between each video in the video queue and the video queue is calculated from the play amount and the attribute parameters, and that similarity is taken as the video's score value.
Optionally, besides the play amount and the attribute parameters of the video, the manner of obtaining the score value of the video may further take into account the video type, the video follower count, the like count, and the like.
S220, selecting one video from the plurality of videos as a first video, and constructing an initial video set based on the first video.
In some embodiments, the plurality of videos obtained by the server 1 may be sorted to obtain a sorted video queue, and one video is then selected from the video queue as the first video. Optionally, the video at the head of the queue may be used as the first video, or one video may be selected from the queue arbitrarily. The first video may then be added to an initially empty video set to form the initial video set, and the first video is deleted from the video queue.
S230, determining a second video from the remaining video set based on a preset algorithm, the score value corresponding to the first video and the similarity between the first video and each video in the remaining video set, wherein the remaining video set is composed of the videos, among the plurality of videos, other than the videos included in the initial video set.
As an example, max [ λ score (i) - (1- λ) max [ sim (i, j) ] ] may be used as a preset algorithm, the first video is used as i in the preset algorithm, the score value score (i) corresponding to the first video is input into the preset algorithm, then the maximum similarity max [ sim (i, j) ] is obtained through comparing the similarity of each video in the first video and the remaining video set, and then the maximum similarity obtained through comparison is also input into the preset algorithm, so that the second video with the maximum difference from the first video is output through the preset algorithm.
S240, adding the second video to the initial video set.
Wherein when the second video is added to the initial video set, the second video needs to be deleted from the video queue.
S250, repeatedly executing the steps of determining a second video from the remaining video set based on the preset algorithm, the score value corresponding to the first video and the similarity between the first video and each video in the remaining video set, and adding the second video to the initial video set, until the number of videos in the initial video set reaches the preset number.
Wherein the preset number is smaller than the initial number of videos in the video queue.
As an example, suppose the video queue includes video a, video b, video c and video d, and the preset number is 3. In specific implementation, video a is first selected from the video queue as the first video, and an initial video set is constructed from video a; at this moment only video a is in the initial video set, and video a is deleted from the video queue. Then the video with the maximum similarity to the current initial video set, say video c, is selected from the current video queue by the preset algorithm; that is, the similarity between video c and video a is greater than the similarity between video a and video b and the similarity between video a and video d. Since the preset algorithm is used for minimizing the similarity among the selected videos, when video c, the video with the greatest similarity to video a, is selected by the preset algorithm, the difference between video c and video a is larger than the difference between video a and either video b or video d. Video c is then added to the current initial video set and deleted from the current video queue. At this time, the video queue only includes video b and video d, and the initial video set includes video a and video c. Then the video with the maximum similarity to the initial video set is selected from video b and video d; if it is video b, video b is added to the initial video set. The initial video set now includes video a, video b and video c, and the number of videos reaches 3, so adding videos to the initial video set can be stopped.
S260, taking the preset number of videos in the initial video set as the target videos. The similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in the remaining video set, where the remaining video set is composed of the videos, other than the target videos, in the plurality of videos.
As an example, video a, video b and video c may all be target videos; since each video added has the maximum difference from the then-current initial video set, the diversity of the videos in the initial video set is guaranteed.
S270, obtaining a target video set based on the preset number of target videos, and outputting the target video set.
It is understood that the final initial video set is a target video set, for example, the target video set also includes video a, video b, and video c.
As shown in fig. 7, in some embodiments, an embodiment of S270 may include:
S271, sorting the preset number of target videos according to the order in which each target video was added to the initial video set, to obtain the sorted target videos.
As an example, if the videos were added to the initial video set in the order video a, video c, video b, then the sorted target videos are also video a, video c, video b.
S272, obtaining the target video set based on the sorted target videos.
As an example, the videos in the target video set are then also ordered as video a, video c, video b.
Considering that the video added to the initial video set earliest differs from the set the most, and that each subsequently added video differs from the set less than the previously added one, in this embodiment the preset number of target videos are sorted according to the order in which each target video was added to the initial video set, and the target video set is obtained from the sorted target videos. In this way, when the user plays the target video set, the videos with larger differences are watched first, which further meets the user's requirement for video diversity and improves the user experience.
In this embodiment, one video is selected from the plurality of videos as the first video, an initial video set is constructed based on the first video, a second video is determined from the remaining video set based on the preset algorithm, the score value corresponding to the first video and the similarity between the first video and each video in the remaining video set, the second video is added to the initial video set, and these steps are repeated until the number of videos in the initial video set reaches the preset number, so that the target videos can be selected simply and effectively and a target video set formed.
Referring to fig. 8, fig. 8 is a flowchart illustrating a video processing method according to another embodiment of the present application, where the method may include:
S310, obtaining the score value of each video in the plurality of videos and the similarity between every two videos in the plurality of videos.
S320, selecting one video from the plurality of videos as a first video, and constructing an initial video set based on the first video.
The implementation of S310-S320 can refer to S210-S220, and is not described here again.
S330, acquiring the maximum similarity in the similarities between the first video and each video in the remaining video set.
In some embodiments, the similarity between the first video and each video in the remaining video set may be calculated by a Euclidean distance algorithm to obtain a plurality of similarities, each corresponding to one video, and the largest similarity is then selected from the plurality of similarities.
S340, judging whether the score value and the maximum similarity corresponding to the first video meet preset conditions or not based on a preset algorithm.
In some embodiments, S340 may be implemented by judging, based on max[λ*score(i) - (1-λ)*max[sim(i,j)]], whether the score value and the maximum similarity corresponding to the first video satisfy the preset condition, where score(i) is the score value of the first video, sim(i,j) is the similarity between the first video and the jth video in the remaining video set, and λ is a parameter for adjusting the weights of score(i) and sim(i,j).
As an example, assuming that after traversing each term λ*score(i) - (1-λ)*sim(i,j), the term λ*score(i) - (1-λ)*sim(i,3) is determined to be the largest, it may be determined that the 3rd video in the remaining video set satisfies the preset condition.
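This traversal can be sketched as follows, under the assumption that the preset condition reduces to picking the j whose term λ*score(i) - (1-λ)*sim(i,j) is largest; the numeric values are illustrative:

```python
def second_video_index(score_i, sims, lam=0.5):
    """Traverse the remaining set and return the 1-based index j whose term
    lam * score(i) - (1 - lam) * sim(i, j) is largest, i.e. the video
    judged here to satisfy the preset condition."""
    terms = [lam * score_i - (1 - lam) * s for s in sims]
    return terms.index(max(terms)) + 1

# Similarities between the first video and the four videos of the remaining set
print(second_video_index(0.8, [0.9, 0.7, 0.1, 0.5]))  # → 3
```

With these values the 3rd video has the smallest similarity to the first video, so its term is largest, matching the example in the text.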
In the present embodiment, by judging whether the score value and the maximum similarity corresponding to the first video satisfy the preset condition based on max[λ*score(i) - (1-λ)*max[sim(i,j)]], a large difference between the video satisfying the preset condition and the first video can be ensured.
S350, when the score value and the maximum similarity corresponding to the first video satisfy the preset condition, determining the video corresponding to the maximum similarity as the second video.
As an example, the 3 rd video in the set of remaining videos may be determined as the second video.
S360, adding the second video to the initial video set.
S370, repeatedly executing S330 to S360 until the number of videos in the initial video set reaches the preset number.
S380, taking the preset number of videos in the initial video set as the target videos.
The similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in the remaining video set, where the remaining video set is composed of the videos, other than the target videos, in the plurality of videos.
S390, obtaining a target video set based on the preset number of target videos, and outputting the target video set.
The specific implementation of S360-S390 can refer to S240-S270, and therefore is not described herein.
In this embodiment, by judging whether each video in the remaining video set, together with the first video, satisfies the preset condition, the second video having a larger difference from the first video can be selected quickly and accurately.
Referring to fig. 9, a video processing apparatus 400 according to an embodiment of the present application is shown, where the video processing apparatus 400 includes: a score value and similarity obtaining module 410, a target video selecting module 420 and an output module 430.
The score value and similarity obtaining module 410 is configured to obtain a score value of each of the plurality of videos and a similarity between every two videos in the plurality of videos.
The target video selecting module 420 is configured to select a preset number of target videos from the plurality of videos based on the score values, the similarities and a preset algorithm, where the similarity of every two target videos in the preset number of target videos is greater than the similarity of any two videos in a remaining video set, and the remaining video set is composed of the videos, other than the target videos, in the plurality of videos.
The output module 430 is configured to obtain a target video set based on the preset number of target videos, and output the target video set.
Further, the target video selecting module 420 includes:
and the initial video set constructing unit is used for selecting one video from the plurality of videos as a first video and constructing an initial video set based on the first video.
And the second video determining unit is used for determining a second video from the remaining video set based on a preset algorithm, the score value corresponding to the first video and the similarity between the first video and each video in the remaining video set, where the remaining video set is composed of the videos of the plurality of videos other than the videos included in the initial video set.
And the adding unit is used for adding the second video into the initial video set.
And the repeated execution unit is used for repeatedly executing the steps of determining a second video from the remaining video set and adding the second video to the initial video set, based on the preset algorithm, the score value corresponding to the first video and the similarity between the first video and each video in the remaining video set, until the number of videos in the initial video set reaches the preset number.
And the target video determining unit is used for taking the preset number of videos in the initial video set as the target videos.
Further, the output module 430 includes:
and the sequencing unit is used for sequencing the preset number of target videos according to the sequence of adding each target video to the initial video set to obtain the sequenced target videos.
And the target video set generating unit is used for obtaining a target video set based on the sorted target videos.
Further, the second video determination unit includes:
and the maximum similarity acquiring subunit is used for acquiring the maximum similarity in the similarities between the first video and each video in the remaining video set.
And the judging subunit is used for judging, based on a preset algorithm, whether the score value corresponding to the first video and the maximum similarity satisfy the preset condition.
And the second video determining subunit is used for determining the video corresponding to the maximum similarity as the second video when the score value corresponding to the first video and the maximum similarity satisfy the preset condition.
Further, the determining subunit is specifically configured to determine whether the score value corresponding to the first video and the maximum similarity satisfy the preset condition based on max[λ × score(i) − (1 − λ) × max_j[sim(i, j)]], where score(i) is the score value of the first video, and sim(i, j) is the similarity between the first video and the j-th video in the remaining video set; λ is a parameter for adjusting the weights of score(i) and sim(i, j).
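The condition above can be illustrated with a small helper. This is a sketch only: the function name and the λ = 0.7 default are assumptions, and it simply computes the quantity λ × score(i) − (1 − λ) × max_j sim(i, j), of which the candidate with the largest value would be taken as the second video.

```python
def combined_value(score_i, sims_to_selected, lam=0.7):
    """Return λ × score(i) − (1 − λ) × max_j sim(i, j): a candidate's
    score value discounted by its largest similarity to the videos
    already in the initial video set. Higher is better."""
    return lam * score_i - (1 - lam) * max(sims_to_selected)
```

Under this condition, a high-scoring candidate that is a near-duplicate of a selected video (large maximum similarity) is penalized, while a moderately scored but dissimilar candidate can still be chosen.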
Further, the score value and similarity obtaining module 410 includes:
the first obtaining unit is used for obtaining the playing amount corresponding to each video in the plurality of videos within the preset time length.
And the first scoring value determining unit is used for determining the scoring value of each video based on the playing amount corresponding to each video.
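One illustrative way to turn the play amounts within the preset time length into score values, as the two units above describe, is to normalize each video's play amount by the maximum play amount. The linear normalization is an assumption for the sketch; the embodiment does not fix a particular formula.

```python
def scores_from_play_counts(play_counts):
    """play_counts: dict video_id -> play amount within the preset
    time length. Returns score values in [0, 1], where the most
    played video scores 1.0."""
    peak = max(play_counts.values())
    return {vid: plays / peak for vid, plays in play_counts.items()}
```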
Further, the score value and similarity obtaining module 410 includes:
and the second acquisition unit is used for acquiring the attribute parameters corresponding to each video in the plurality of videos, wherein the attribute parameters comprise at least one of resolution, frame rate and video format.
And the second scoring value determining unit is used for determining the scoring value of each video based on the attribute parameter corresponding to each video.
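An illustrative scoring over the attribute parameters named above (resolution, frame rate and video format) might look as follows. The weights, the 1080p/60fps reference values and the format bonus are all assumptions made for this sketch, not values taken from the embodiment.

```python
def score_from_attributes(width, height, fps, fmt):
    """Combine resolution, frame rate and format into one score in [0, 1]."""
    # Resolution relative to 1080p, capped at 1.0 (assumed reference).
    resolution_score = min(width * height / (1920 * 1080), 1.0)
    # Frame rate relative to 60 fps, capped at 1.0 (assumed reference).
    frame_rate_score = min(fps / 60.0, 1.0)
    # Assumed bonus for widely supported container formats.
    format_bonus = 1.0 if fmt.lower() in ("mp4", "webm") else 0.5
    # Assumed weighting of the three attribute parameters.
    return (0.5 * resolution_score
            + 0.3 * frame_rate_score
            + 0.2 * format_bonus)
```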
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling, direct coupling or communication connection between the modules shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or in another form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 10, a block diagram of a server according to an embodiment of the present application is shown. The server 500 may be a server capable of running the program in the foregoing embodiments. The server 500 in the present application may include one or more of the following components: a processor 510, a memory 520, and one or more programs, where the one or more programs may be stored in the memory 520 and configured to be executed by the one or more processors 510, the one or more programs being configured to perform the method described in the foregoing method embodiments.
Processor 510 may include one or more processing cores. The processor 510 uses various interfaces and lines to connect the parts of the server, and performs the various functions of the server and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 520 and invoking data stored in the memory 520. Optionally, the processor 510 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 510 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 510 but be implemented by a separate communication chip.
The memory 520 may include a Random Access Memory (RAM) and may also include a Read-Only Memory (ROM). The memory 520 may be used to store instructions, programs, code sets or instruction sets. The memory 520 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created by the terminal in use, such as a phonebook, audio and video data, chat log data, and the like.
Referring to fig. 11, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 600 has stored therein a program code 610, the program code 610 being capable of being invoked by a processor to perform the method described in the method embodiments above.
The computer-readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium includes a non-transitory computer-readable storage medium. The computer-readable storage medium 600 has a storage space for program code for performing any of the method steps of the methods described above. The program code can be read from or written to one or more computer program products. The program code may, for example, be compressed in a suitable form.
To sum up, according to the video processing method, apparatus, server and storage medium provided in the embodiments of the present application, when video recommendation is performed, a score value of each video in a plurality of videos and a similarity between every two videos in the plurality of videos are obtained, and then a preset number of target videos are selected from the plurality of videos based on the score values, the similarities and a preset algorithm, where the similarity between every two target videos in the preset number of target videos is less than the similarity between any two videos in a remaining video set, the remaining video set being composed of the videos of the plurality of videos other than the target videos. Since the diversity of a video recommendation result is closely associated with the similarity between videos, this ensures that large differences exist among the plurality of target videos. Finally, a target video set is obtained based on the preset number of target videos and is output. In this way, the problem that recommended videos are repetitive and single can be alleviated, target videos with diversity are recommended to the user as a set, and the user experience is improved.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.