CN111050194B - Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN111050194B (application CN201911214119.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- video sequence
- emotion
- information
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Computing Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention provides a video sequence processing method, a video sequence processing apparatus, an electronic device and a computer-readable storage medium. The video sequence processing method includes the following steps: acquiring a first video sequence, where the first video sequence includes a first video and the number of videos in the first video sequence is greater than 1; inserting target promotion information into the first video sequence to form a second video sequence, where, in the second video sequence, the target promotion information is adjacent to the first video and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude; and outputting the second video sequence. The method and apparatus reduce the contrast between the promotion information and the videos, improve the matching degree between them, and thereby improve the output effect of the video sequence.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video sequence processing method, a video sequence processing apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, the internet is entering the video age. With the rise of short-video information streams, internet promotion information is increasingly combined with them. At present, promotion information is generally inserted into a short-video information stream at random positions. Random insertion can produce a sharp contrast between the promotion information and the surrounding stream, so the matching degree between the promotion information and the short-video information stream is low and the output effect of the short-video information stream is poor.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a video sequence processing method, a video sequence processing apparatus, an electronic device and a computer-readable storage medium, so as to solve the problem that random insertion of promotion information into a short-video information stream yields a low matching degree between the two and thus a poor output effect. The specific technical solutions are as follows:
in a first aspect of the present invention, there is provided a method for processing a video sequence, the method including:
acquiring a first video sequence, where the first video sequence includes a first video and the number of videos in the first video sequence is greater than 1;
inserting target promotion information into the first video sequence to form a second video sequence, where, in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence.
In a second aspect of the present invention, there is also provided a video sequence processing apparatus, including:
an acquisition module, configured to acquire a first video sequence, where the first video sequence includes a first video and the number of videos in the first video sequence is greater than 1;
an insertion module, configured to insert target promotion information into the first video sequence to form a second video sequence, where, in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
and an output module, configured to output the second video sequence.
In a third aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
the processor is configured to implement the method steps described in the first aspect of the embodiment of the present invention when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute any of the above-described video sequence processing methods.
In yet another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the video sequence processing methods described above.
According to the video sequence processing method, the video sequence processing apparatus, the electronic device and the computer-readable storage medium, the promotion information is inserted into the video sequence according to the emotion characterization parameters of the videos and of the promotion information. Because the variation amplitude between the emotion characterization parameter of the promotion information and that of the adjacent video is smaller than or equal to the preset amplitude, the contrast between the promotion information and the video is reduced, the matching degree between them is improved, and the output effect of the video sequence is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a video sequence processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another video sequence processing method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a video sequence processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a video sequence processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
Step 101, acquiring a first video sequence, where the first video sequence includes a first video and the number of videos in the first video sequence is greater than 1.
In this embodiment, the number of videos in the first video sequence is greater than 1; that is, the first video sequence is a video set including a plurality of videos (for example two, three or five), and the specific number is not limited herein.
Specifically, the videos in the set are arranged in a certain order to form the first video sequence. The videos may be ordered by content similarity, by whether they share a publisher, or by the number of user clicks or views. It should be understood that the order of the videos in the set is not fixed; they may be arranged in any order, which is not limited herein.
In this embodiment, the first video sequence includes a first video, but the first video is not limited to the video ranked first in the first video sequence; it may be any video in the first video sequence, which is not limited herein.
In this step, the first video sequence may be obtained in various ways. For example, a plurality of videos may be selected randomly from a video pool, or selected according to a preset rule and then composed into the first video sequence; e.g., one video of type A, two videos of type B and three videos of type C are selected from the video pool to compose the first video sequence.
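As a minimal Python sketch of the rule-based selection just described (the patent specifies no code; the pool structure, field names and quota are illustrative assumptions):

```python
import random

def compose_first_video_sequence(video_pool, quota):
    """Compose a first video sequence by picking videos from a pool
    according to a per-type quota, e.g. one of type A, two of type B
    and three of type C."""
    sequence = []
    for vtype, count in quota.items():
        candidates = [v for v in video_pool if v["type"] == vtype]
        sequence.extend(random.sample(candidates, count))
    return sequence

pool = ([{"id": f"a{i}", "type": "A"} for i in range(3)] +
        [{"id": f"b{i}", "type": "B"} for i in range(4)] +
        [{"id": f"c{i}", "type": "C"} for i in range(5)])
seq = compose_first_video_sequence(pool, {"A": 1, "B": 2, "C": 3})
print(len(seq))  # 6 videos: 1 of type A, 2 of type B, 3 of type C
```

Random sampling within each type keeps the quota deterministic while the specific videos vary per call.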
In addition, the first video sequence may also be obtained by a personalized recommendation algorithm; for example, through recall, coarse ranking, fine ranking and de-duplication combined with user features and video features. Recall methods include collaborative filtering, FM (Factorization Machine), FFM (Field-aware Factorization Machine), deep learning and the like, and the coarse- and fine-ranking stages may likewise use filtering, factorization and deep-learning methods. The method in this embodiment can be adapted to various personalized recommendation methods, which is an advantage of this embodiment.
Step 102, inserting target promotion information into the first video sequence to form a second video sequence.
In this embodiment, the step of inserting the target promotion information into the first video sequence may include: according to the determined target promotion information, determining a suitable insertion position in the first video sequence, and inserting the target promotion information at that position.
Alternatively, when the target promotion information has not yet been determined, the insertion positions in the first video sequence may be examined first; target promotion information whose insertion position is suitable is then selected from a target promotion information set and inserted at the corresponding position.
In this embodiment, the second video sequence is a video sequence into which the target promotion information has been inserted; that is, the second video sequence may include a plurality of videos and at least one piece of target promotion information, and the video sequence processing method described in this embodiment may be applied to each insertion of target promotion information.
Wherein the target promotion information is adjacent to the first video. Preferably, the target promotion information is located after the first video.
In this step, according to the emotion characterization parameter of the target promotion information, the difference delta between that parameter and the emotion characterization parameter of each video is computed over the first video sequence, from front to back or from back to front; the video with the minimum delta is taken as the first video, and the position adjacent to the first video is taken as the position that meets the requirement.
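The minimum-delta scan can be sketched as follows (a Python illustration; the patent specifies no code, and the function name and single-dimension numeric scores are assumptions):

```python
def find_first_video(video_emotions, promo_emotion):
    """Return the index of the video whose emotion characterization
    parameter is closest to that of the promotion information; that
    minimum-delta video serves as the 'first video'."""
    best_index, best_delta = None, None
    for i, emotion in enumerate(video_emotions):
        delta = abs(emotion - promo_emotion)
        if best_delta is None or delta < best_delta:
            best_index, best_delta = i, delta
    return best_index

# Videos A..D score 4, 5, 1, 7; the promotion information scores 3.
print(find_first_video([4, 5, 1, 7], 3))  # 0: video A, with delta 1
```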
In this embodiment, the target promotion information being adjacent to the first video may mean that the target promotion information is located after the first video, or that it is located before the first video, which is not limited herein.
In this embodiment, an emotion characterization parameter is a parameter that characterizes the degree to which a video or promotion information affects the emotion of the user.
The emotion characterization parameter can represent the degree to which the video or promotion information influences the user's emotion, and the parameters corresponding to different degrees are determined by grading. The grades may be represented by Arabic numerals such as 1-10, by English letters such as A-F, or by Roman numerals such as I-V. It should be understood that the grading of the emotion characterization parameters is not limited to these forms; for ease of understanding, Arabic numerals are used in the following description.
The emotion characterization parameter can include multiple dimensions. For example, the emotion dimension represents the emotion expressed by the content of the video or promotion information: a funny video expresses happiness, a warning video expresses sadness, a suspense video expresses tension, a horror video expresses panic, and so on. The picture dimension represents the frequency of picture or shot switching: for example, a sports video switches shots quickly, while the shots of a food video are stable. The music dimension represents the style of the background music, whether fierce or relaxed. The color dimension represents the chroma of the picture: warm or cool, bright or dark, and the like. Based on these multiple dimensions, videos or promotion information can be labeled with finer-grained emotion characterization parameters; for example, the emotion dimension is divided into A1, A2, ..., An and the color dimension into D1, D2, ..., Dm, a plurality of emotion characterization parameters form an emotion characterization parameter set, and the video or promotion information is labeled with emotion tags in multiple dimensions. It should be understood that the dimensions of the emotion characterization parameters are not limited to these examples.
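A multi-dimensional parameter set might be represented as a mapping from dimension to grade, as in this Python sketch (the dimension names follow the four dimensions above, but the scores and the dimension-wise comparison rule are illustrative assumptions, not mandated by the patent):

```python
# Hypothetical per-dimension scores on a 1-10 scale.
video_params = {"emotion": 4, "picture": 2, "music": 3, "color": 5}
promo_params = {"emotion": 3, "picture": 2, "music": 4, "color": 5}

def max_dimension_delta(a, b):
    """Largest per-dimension variation between two parameter sets;
    comparing dimension-wise prevents, say, a calm video from being
    followed by promotion information with frantic shot switching."""
    return max(abs(a[dim] - b[dim]) for dim in a)

print(max_dimension_delta(video_params, promo_params))  # 1
```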
In this embodiment, the emotion characterization parameters of each video or piece of promotion information may be determined by an algorithm; alternatively, samples may be labeled manually, a model trained on them by machine learning, and all videos and promotion information then labeled automatically by the model.
In addition, the emotion characterization parameter of each video or piece of promotion information may be acquired in advance, or computed on demand by a model in this step.
If the emotion characterization parameters are obtained in advance, a tag carrying the parameter can be attached to each video or piece of promotion information, so that the corresponding parameters are obtained together with the first video sequence and the target promotion information. Tagging the videos and promotion information in this way makes the emotion characterization parameters more convenient to obtain.
Of course, a database for storing the emotion characterization parameters may also be provided, where the parameter of each video or piece of promotion information is stored with a video identifier or a promotion information identifier. For example, to obtain the emotion characterization parameter of video A, the video sequence processing apparatus looks up the record carrying the identifier of video A in the database and uses it as the parameter of video A.
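An in-memory stand-in for such a lookup could look like this (a Python sketch; the identifiers, dimension keys and dict-based storage are illustrative assumptions, whereas a real deployment would use an actual database):

```python
# Each record carries an identifier, a dimension and a score, matching
# the (identifier, emotion characterization dimension, score) format.
emotion_db = {
    ("video_A", "emotion"): 4,
    ("video_A", "color"): 5,
    ("promo_a", "emotion"): 3,
}

def lookup_emotion(identifier, dimension="emotion"):
    """Fetch the stored score for a video or promotion identifier,
    returning None when no record carries that identifier."""
    return emotion_db.get((identifier, dimension))

print(lookup_emotion("video_A"))  # 4
print(lookup_emotion("video_B"))  # None: no record for this identifier
```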
The emotion characterization parameters of each video or piece of promotion information can also be obtained through a model. Specifically, each video or piece of promotion information is input into the model; emotional features are extracted from the picture content, the music or the colors to obtain an emotional feature matrix, and the matrix is normalized to obtain the emotion characterization parameters. The parameters computed by the model can be stored in a database, where each record may consist of a video identifier, an emotion characterization dimension and a score. The model may be created in advance, or obtained by feeding video data into a convolutional neural network and training it; this embodiment does not limit the model.
Various manners of obtaining the emotion representation parameters of each video or promotion information are not listed here.
In this embodiment, the preset amplitude is a preset threshold. If the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video is within this threshold, the two are similar in emotion characterization, so the target promotion information can be inserted adjacent to the first video. Taking Arabic numerals as an example, the threshold may be 2: if the emotion characterization parameter of the target promotion information is 3 and that of a certain video in the sequence is 5, that video may be determined as the first video and the target promotion information inserted adjacent to it. Here, the variation amplitude between the two emotion characterization parameters may be understood as the absolute difference obtained by subtracting one from the other.
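The threshold check reduces to a one-line comparison, sketched here in Python (the function name and constant are illustrative; the values 3, 5 and 2 come from the example above):

```python
PRESET_AMPLITUDE = 2  # threshold value taken from the running example

def within_amplitude(promo_emotion, video_emotion, amplitude=PRESET_AMPLITUDE):
    """True when the absolute difference between the two emotion
    characterization parameters does not exceed the preset amplitude."""
    return abs(promo_emotion - video_emotion) <= amplitude

print(within_amplitude(3, 5))  # True: |3 - 5| = 2 <= 2
print(within_amplitude(3, 7))  # False: |3 - 7| = 4 > 2
```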
Step 103, outputting the second video sequence.
The video sequence processing method in this embodiment may be implemented by a video sequence processing apparatus, which may be a server or a client.
If the video sequence processing apparatus is a server, step 103 may be understood as that the server sends the second video sequence to the client. Specifically, after the server executes step 101 and step 102, a second video sequence interspersed with the target popularization information is generated, and the second video sequence is sent to the client for the user to watch.
If the video sequence processing apparatus is a client, step 103 may be understood as that the client plays the second video sequence. Specifically, after the client executes step 101 and step 102, a second video sequence interspersed with the target popularization information is generated, and the second video sequence is played for the user to watch.
According to the video sequence processing method, the video sequence processing apparatus and the electronic device, the promotion information is inserted into the video sequence according to the emotion characterization parameters of the videos and of the promotion information. Because the variation amplitude between the emotion characterization parameter of the promotion information and that of the adjacent video is smaller than or equal to the preset amplitude, the embodiment of the invention reduces the contrast between the promotion information and the video, improves the matching degree between them, and thereby improves the output effect of the video sequence. In this way, the user experience of watching the video sequence is also improved.
Referring to fig. 2, fig. 2 is a schematic diagram of another video sequence processing method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step 201, a first video sequence is obtained, where the first video sequence includes a first video and a second video, and the second video is adjacent to the first video.
In this embodiment, the second video is adjacent to the first video; that is, the second video is located before or after the first video, preferably after the first video.
It should be noted that the second video is not limited to the video ranked second in the first video sequence; it may be any video adjacent to the first video, which is not limited herein.
The specific implementation process of this step may be as described with reference to step 101 in the embodiment shown in fig. 1, and is not described herein again.
Step 202, inserting target promotion information between the first video and the second video to form a second video sequence, where the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video is smaller than or equal to a preset amplitude, and the variation amplitude between the emotion characterization parameter of the target promotion information and that of the second video is smaller than or equal to the preset amplitude.
In this embodiment, the step 202 at least includes the following two optional embodiments, and any one of the two optional embodiments may be selected. These two alternative embodiments are described below.
Optionally, the step 202 includes:
acquiring the target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameters of the videos in the first video sequence;
determining the insertion position of the target promotion information in the first video sequence according to the emotion characterization parameter of the target promotion information and the emotion characterization parameters of the videos in the first video sequence;
and inserting the target promotion information into the first video sequence at the insertion position to form the second video sequence.
In this embodiment, the target promotion information is one or more determined pieces of promotion information, and the position at which each piece can be inserted into the first video sequence needs to be determined. The video sequence processing apparatus may obtain the target promotion information, its emotion characterization parameter and the emotion characterization parameters of the videos in the first video sequence, and determine the insertion position of the target promotion information accordingly.
The method for acquiring the target popularization information and the emotion characterization parameters of each video in the first video sequence may refer to the description of step 102 in the embodiment shown in fig. 1, and is not repeated here.
For the convenience of understanding the present embodiment, the following is specifically illustrated by example E1:
assume that the emotion characterization parameter of a piece of target promotion information a is 3, and that the emotion characterization parameters of the videos in the first video sequence are: video A is 4, video B is 5, video C is 1 and video D is 7, with the sequence arranged in the order A, B, C, D. The positions at which the target promotion information a can be inserted into the first video sequence need to be determined.
Assuming the preset amplitude is 2, and requiring that the variation amplitude between the emotion characterization parameter of the target promotion information and that of each of the two adjacent videos be smaller than or equal to the preset amplitude, it can be determined that the target promotion information a can be inserted between video A and video B, or between video B and video C, but cannot be inserted between video C and video D.
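Example E1 can be reproduced with a short Python sketch that checks both neighbours of every gap (the patent specifies no code; the function name and list representation are illustrative):

```python
def valid_insertion_gaps(video_emotions, promo_emotion, amplitude=2):
    """List the gap indices i such that the promotion information may
    be inserted between video i and video i + 1: both neighbours must
    lie within the preset amplitude of the promotion information."""
    gaps = []
    for i in range(len(video_emotions) - 1):
        if (abs(promo_emotion - video_emotions[i]) <= amplitude and
                abs(promo_emotion - video_emotions[i + 1]) <= amplitude):
            gaps.append(i)
    return gaps

# Example E1: videos A..D score 4, 5, 1, 7; promotion information a scores 3.
print(valid_insertion_gaps([4, 5, 1, 7], 3))  # [0, 1]: A|B and B|C, not C|D
```

Gap C|D fails because |3 - 7| = 4 exceeds the preset amplitude of 2.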
In this embodiment of the invention, the positions at which the target promotion information can be inserted are searched in the first video sequence according to the determined target promotion information, which makes the video sequence processing method more flexible.
Optionally, the step 202 includes:
determining an insertion position for promotion information in the first video sequence, the insertion position being adjacent to the first video;
acquiring target promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of candidate promotion information;
and inserting the target promotion information into the first video sequence at the insertion position to form the second video sequence.
In this embodiment, an insertion position for promotion information is first determined in the first video sequence, and the target promotion information that can be inserted adjacent to the first video then needs to be determined. The video sequence processing apparatus may select the target promotion information from the candidate promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidates, and insert it at the insertion position to form the second video sequence.
When the insertion position also adjoins a second video in the first video sequence, the video sequence processing apparatus further selects suitable target promotion information according to the emotion characterization parameter of the second video as well.
In this embodiment, the insertion position where the promotion information is to be inserted in the first video sequence may be determined in multiple manners. For example, the insertion position may be determined according to the number of videos in the first video sequence: a position in the middle of the first video sequence may be taken as the insertion position, so that if the first video sequence contains 10 videos, the position between the fifth video and the sixth video is determined as the insertion position. Alternatively, the insertion position may be determined randomly; and so on.
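The two position-determination strategies just described can be sketched as follows. The function names are illustrative assumptions; slot index k denotes the position between video k and video k + 1 (videos counted from 1).

```python
import random

def middle_insertion_position(num_videos):
    """Middle-of-sequence strategy: with 10 videos this yields the slot
    between the fifth and sixth videos, as in the example above."""
    return num_videos // 2

def random_insertion_position(num_videos):
    """Random strategy: any slot between two adjacent videos."""
    return random.randint(1, num_videos - 1)

print(middle_insertion_position(10))  # → 5 (between the fifth and sixth video)
```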
For the manner of obtaining the emotion characterization parameters of the first video and the second video and the emotion characterization parameters of the candidate popularization information, reference may be made to the description of step 102 in the embodiment shown in fig. 1, and details are not repeated here.
For ease of understanding, this embodiment is specifically illustrated by example E2:
Suppose there exists a first video sequence in which the emotion characterization parameters of the videos are: video A is 4, video B is 5, video C is 1, and video D is 7. The first video sequence is arranged in the order of video A, B, C, D, and target popularization information that can be inserted between video A and video B needs to be determined.
If the preset amplitude is 2, then according to the condition that the variation amplitude between the emotion characterization parameter of the target popularization information and the emotion characterization parameter of the first video is smaller than or equal to the preset amplitude, it may be determined that, in the target popularization information set, target popularization information with an emotion characterization parameter of 3 to 7 can be inserted between video A and video B.
In the embodiment of the invention, target promotion information suitable for the insertion position can be found according to the insertion position determined in the first video sequence, so that the video sequence processing method is more flexible.
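The candidate filtering in example E2 can be sketched as a one-line selection. This is an illustrative assumption, not the patent's implementation; the candidate parameter set 1..8 is invented for the example, and the adjacent video's parameter 5 follows the example's stated result of 3 to 7.

```python
PRESET_AMPLITUDE = 2

def select_target_promotions(first_video_param, candidate_params, preset=PRESET_AMPLITUDE):
    """Keep the candidates whose emotion parameter differs from the adjacent
    video's parameter by no more than the preset amplitude."""
    return [p for p in candidate_params if abs(p - first_video_param) <= preset]

# Adjacent video parameter 5; assumed candidate parameters 1..8
print(select_target_promotions(5, [1, 2, 3, 4, 5, 6, 7, 8]))  # → [3, 4, 5, 6, 7]
```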
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters; and the emotion characterization parameter of the target popularization information lies between the emotion characterization parameter of the first video and the emotion characterization parameter of the second video.
In this embodiment, the emotion characterization parameters of the videos in the first video sequence are ordered monotonically; that is, the emotion across the first video sequence changes gradually from video to video. In this case, target popularization information whose emotion characterization parameter lies between the emotion characterization parameter of the first video and that of the second video may be inserted directly, so that the emotion also transitions gradually between the videos and the popularization information.
It should be noted that the emotion characterization parameters of the videos in the first video sequence are sorted in order; taking Arabic numerals as an example, they may be sorted in descending or ascending order, which is not limited herein.
For ease of understanding, this embodiment is specifically illustrated by example E3:
Suppose there exists a first video sequence in which the emotion characterization parameters of the videos are: video A is 1, video B is 4, video C is 5, and video D is 7, and the first video sequence is arranged in the order of video A, B, C, D. Target popularization information with an emotion characterization parameter of 1 to 4 can then be inserted between video A and video B, and target popularization information with an emotion characterization parameter of 5 to 7 can be inserted between video C and video D.
That is, in this embodiment, when the emotion characterization parameters of the videos in the first video sequence are ordered monotonically, the variation amplitude between the emotion characterization parameter of the target popularization information and that of the first video must not only be smaller than or equal to the preset amplitude; the direction of change in the emotion characterization parameter from the first video to the target popularization information must also be consistent with the direction of change of the emotion characterization parameters in the first video sequence.
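The monotonic-ordering condition of example E3 can be sketched as selecting candidates whose parameter lies between the two adjacent videos' parameters. The function name and the candidate set 1..7 are illustrative assumptions; this models only the betweenness condition of example E3, not the preset-amplitude check.

```python
def promos_for_slot(left_param, right_param, candidate_params):
    """Candidates whose parameter lies between the parameters of the two
    adjacent videos, preserving the gradual ordering of the sequence."""
    lo, hi = min(left_param, right_param), max(left_param, right_param)
    return [p for p in candidate_params if lo <= p <= hi]

candidates = list(range(1, 8))  # assumed candidate parameters 1..7
print(promos_for_slot(1, 4, candidates))  # → [1, 2, 3, 4]  (slot between A and B)
print(promos_for_slot(5, 7, candidates))  # → [5, 6, 7]     (slot between C and D)
```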
The specific implementation process of this step may refer to the description of step 103 in the embodiment shown in fig. 1, and is not described herein again.
In the embodiment of the present invention, various optional implementations are added to the embodiment shown in fig. 1, so as to further improve the flexibility of the video sequence processing method.
Referring to fig. 3, fig. 3 is a structural diagram of a video sequence processing apparatus according to an embodiment of the present invention.
As shown in fig. 3, the video sequence processing apparatus 300 includes:
an obtaining module 301, configured to obtain a first video sequence, where the first video sequence includes a first video, and the number of videos in the first video sequence is greater than 1;
an insertion module 302, configured to insert target popularization information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
a sending module 303, configured to output the second video sequence.
Optionally, the insertion module 302 is specifically configured to:
acquiring target popularization information, emotion characterization parameters of the target popularization information and emotion characterization parameters of each video in the first video sequence;
determining the insertion position of the target popularization information in the first video sequence according to the emotion characterization parameters of the target popularization information and the emotion characterization parameters of each video in the first video sequence;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
Or, optionally, the insertion module 302 is specifically configured to:
determining an insertion position of popularization information to be inserted in the first video sequence, wherein the insertion position is a position adjacent to the first video;
acquiring target popularization information according to the emotion characterization parameters of the first video and the emotion characterization parameters of the candidate popularization information;
and according to the insertion position, inserting the target popularization information into the first video sequence to form a second video sequence.
Optionally, the first video sequence further includes a second video adjacent to the first video;
the insertion module 302 is specifically configured to:
inserting target popularization information between the first video and the second video to form a second video sequence;
and the variation amplitude between the emotion characterization parameter of the target popularization information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target popularization information lies between the emotion characterization parameter of the first video and the emotion characterization parameter of the second video.
It should be noted that, in the embodiment of the present invention, the video sequence processing apparatus 300 may be a server or a client. The video sequence processing apparatus 300 in the embodiment of the present invention may be a video sequence processing apparatus in any implementation manner in the method embodiment, and any implementation manner in the method embodiment may be implemented by the video sequence processing apparatus 300 in the embodiment of the present invention, and achieve the same beneficial effects, and in order to avoid repetition, details are not repeated here.
Referring to fig. 4, fig. 4 is a structural diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 4, the electronic device 600 includes a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 complete communication with each other through the communication bus 604.
The memory 603 is used for storing a computer program.
When the processor 601 is used to execute the program stored in the memory 603, the following steps are implemented:
acquiring a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
inserting target popularization information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence.
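The three processor steps above (acquire, insert if the amplitude condition holds, output) can be sketched together. The data structures and names are illustrative assumptions, not the patent's implementation; index k denotes the slot before the k-th video (0-based).

```python
PRESET_AMPLITUDE = 2

def insert_promotion(sequence, params, promo, promo_param, index, preset=PRESET_AMPLITUDE):
    """Insert promo into the slot before sequence[index] only if the
    emotion-parameter change toward each adjacent video is within preset;
    return the second video sequence, or None if the condition fails."""
    neighbors = params[max(index - 1, 0):index + 1]
    if all(abs(promo_param - p) <= preset for p in neighbors):
        return sequence[:index] + [promo] + sequence[index:]
    return None

# Videos A..D with assumed parameters; promotion parameter 3 inserted after video A
print(insert_promotion(["A", "B", "C", "D"], [4, 5, 1, 7], "promo-a", 3, 1))
# → ['A', 'promo-a', 'B', 'C', 'D']
```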
Optionally, when the processor 601 is configured to execute the program stored in the memory 603, the following steps are further implemented:
acquiring target popularization information, emotion characterization parameters of the target popularization information and emotion characterization parameters of each video in the first video sequence;
determining the insertion position of the target popularization information in the first video sequence according to the emotion characterization parameters of the target popularization information and the emotion characterization parameters of each video in the first video sequence;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
Optionally, when the processor 601 is configured to execute the program stored in the memory 603, the following steps are further implemented:
determining an insertion position of popularization information to be inserted in the first video sequence, wherein the insertion position is a position adjacent to the first video;
acquiring target popularization information according to the emotion characterization parameters of the first video and the emotion characterization parameters of the candidate popularization information;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
Optionally, the first video sequence further includes a second video adjacent to the first video;
when the processor 601 is used to execute the program stored in the memory 603, the following steps are also implemented:
inserting target popularization information between the first video and the second video to form a second video sequence;
and the variation amplitude between the emotion characterization parameter of the target popularization information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target popularization information lies between the emotion characterization parameter of the first video and the emotion characterization parameter of the second video.
The communication bus 604 mentioned in the electronic device 600 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 604 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 602 is used for communication between the electronic apparatus 600 and other apparatuses.
The Memory 603 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor 601 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute the video sequence processing method described in any of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the video sequence processing method of any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to be performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method for processing a video sequence, the method comprising:
acquiring a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
inserting target popularization information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence;
the emotion characterization parameters are used for characterizing parameters of the video or promotion information influencing the emotion of the user, and comprise emotion dimensions, picture dimensions, music dimensions and color dimensions;
the method comprises the following steps of interleaving target popularization information in the first video sequence to form a second video sequence, wherein the steps comprise:
acquiring target popularization information, emotion characterization parameters of the target popularization information and emotion characterization parameters of each video in the first video sequence;
determining the insertion position of the target popularization information in the first video sequence according to the emotion characterization parameters of the target popularization information and the emotion characterization parameters of each video in the first video sequence;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
2. The method of claim 1, wherein the step of inserting the target promotion information into the first video sequence to form a second video sequence comprises:
determining an insertion position of popularization information to be inserted in the first video sequence, wherein the insertion position is a position adjacent to the first video;
acquiring target popularization information according to the emotion characterization parameters of the first video and the emotion characterization parameters of the candidate popularization information;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
3. The video sequence processing method according to any of claims 1 to 2, wherein the first video sequence further comprises a second video adjacent to the first video;
the step of inserting the target popularization information into the first video sequence to form a second video sequence comprises:
inserting target popularization information between the first video and the second video to form a second video sequence;
and the variation amplitude between the emotion characterization parameter of the target popularization information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
4. The method of claim 3, wherein in the first video sequence, the videos are sorted in order of the emotion characterization parameter from small to large or from large to small;
and the emotion characterization parameter of the target popularization information lies between the emotion characterization parameter of the first video and the emotion characterization parameter of the second video.
5. A video sequence processing apparatus, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a first video sequence, the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
an insertion module, configured to insert target popularization information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
an output module for outputting the second video sequence;
the emotion characterization parameters are used for characterizing parameters of the video or promotion information influencing the emotion of the user, and comprise emotion dimensions, picture dimensions, music dimensions and color dimensions;
wherein the insertion module is specifically configured to:
acquiring target popularization information, emotion characterization parameters of the target popularization information and emotion characterization parameters of each video in the first video sequence;
determining the insertion position of the target popularization information in the first video sequence according to the emotion characterization parameters of the target popularization information and the emotion characterization parameters of each video in the first video sequence;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
6. The video sequence processing apparatus of claim 5, wherein the insertion module is specifically configured to:
determining an insertion position of popularization information to be inserted in the first video sequence, wherein the insertion position is a position adjacent to the first video;
acquiring target popularization information according to the emotion characterization parameters of the first video and the emotion characterization parameters of the candidate popularization information;
and inserting the target popularization information into the first video sequence according to the insertion position to form a second video sequence.
7. The video sequence processing apparatus according to any one of claims 5 to 6, wherein the first video sequence further includes a second video adjacent to the first video;
the insertion module is specifically configured to:
inserting target popularization information between the first video and the second video to form a second video sequence;
and the variation amplitude between the emotion characterization parameter of the target popularization information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
8. The video sequence processing apparatus according to claim 7, wherein in the first video sequence, the videos are sorted in order of the emotion characterization parameter from small to large or from large to small;
and the emotion characterization parameter of the target popularization information lies between the emotion characterization parameter of the first video and the emotion characterization parameter of the second video.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 4 when executing a program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214119.8A CN111050194B (en) | 2019-12-02 | 2019-12-02 | Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911214119.8A CN111050194B (en) | 2019-12-02 | 2019-12-02 | Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111050194A CN111050194A (en) | 2020-04-21 |
CN111050194B true CN111050194B (en) | 2022-05-17 |
Family
ID=70234265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911214119.8A Active CN111050194B (en) | 2019-12-02 | 2019-12-02 | Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111050194B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112528071B (en) * | 2020-10-30 | 2024-07-23 | 百果园技术(新加坡)有限公司 | Video data ordering method and device, computer equipment and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2844582A1 (en) * | 1996-12-19 | 1998-06-25 | Index Systems, Inc. | Epg with advertising inserts |
US9301020B2 (en) * | 2010-11-30 | 2016-03-29 | Google Technology Holdings LLC | Method of targeted ad insertion using HTTP live streaming protocol |
CN102495873B (en) * | 2011-11-30 | 2013-04-10 | 北京航空航天大学 | Video recommending method based on video affective characteristics and conversation models |
US9247225B2 (en) * | 2012-09-25 | 2016-01-26 | Intel Corporation | Video indexing with viewer reaction estimation and visual cue detection |
US20140101695A1 (en) * | 2012-09-27 | 2014-04-10 | Canoe Ventures, Llc | Auctioning for content on demand asset insertion |
EP3110158A1 (en) * | 2015-06-22 | 2016-12-28 | AD Insertion Platform Sarl | Method and platform for automatic selection of video sequences to fill a break in a broadcasted program |
CN106792003B (en) * | 2016-12-27 | 2020-04-14 | 西安石油大学 | Intelligent advertisement insertion method and device and server |
CN107066564B (en) * | 2017-03-31 | 2020-10-16 | 武汉斗鱼网络科技有限公司 | Data processing method and device based on android list |
CN110162664B (en) * | 2018-12-17 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Video recommendation method and device, computer equipment and storage medium |
CN110263215B (en) * | 2019-05-09 | 2021-08-17 | 众安信息技术服务有限公司 | Video emotion positioning method and system |
CN110378732B (en) * | 2019-07-18 | 2023-01-06 | 腾讯科技(深圳)有限公司 | Information display method, information association method, device, equipment and storage medium |
-
2019
- 2019-12-02 CN CN201911214119.8A patent/CN111050194B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111050194A (en) | 2020-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108009293B (en) | Video tag generation method and device, computer equipment and storage medium | |
CN110582025B (en) | Method and apparatus for processing video | |
CN108540826B (en) | Bullet screen pushing method and device, electronic equipment and storage medium | |
CN109710841B (en) | Comment recommendation method and device | |
CN109657213B (en) | Text similarity detection method and device and electronic equipment | |
US9892109B2 (en) | Automatically coding fact check results in a web page | |
CN106326391B (en) | Multimedia resource recommendation method and device | |
US10565401B2 (en) | Sorting and displaying documents according to sentiment level in an online community | |
CN109271594B (en) | Recommendation method of electronic book, electronic equipment and computer storage medium | |
CN110929125B (en) | Search recall method, device, equipment and storage medium thereof | |
CN108874832B (en) | Target comment determination method and device | |
US10268897B2 (en) | Determining most representative still image of a video for specific user | |
CN109325146B (en) | Video recommendation method and device, storage medium and server | |
CN106339507B (en) | Streaming Media information push method and device | |
CN110856037B (en) | Video cover determination method and device, electronic equipment and readable storage medium | |
CN107229731B (en) | Method and apparatus for classifying data | |
CN109325121B (en) | Method and device for determining keywords of text | |
US9575996B2 (en) | Emotion image recommendation system and method thereof | |
CN108959329B (en) | Text classification method, device, medium and equipment | |
JP6776310B2 (en) | User-Real-time feedback information provision methods and systems associated with input content | |
CN108256044A (en) | Direct broadcasting room recommends method, apparatus and electronic equipment | |
CN112966081A (en) | Method, device, equipment and storage medium for processing question and answer information | |
CN105574030A (en) | Information search method and device | |
Alshehri et al. | Think before your click: Data and models for adult content in arabic twitter | |
CN106162351A (en) | A kind of video recommendation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |