CN111050194A - Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111050194A
Authority
CN
China
Prior art keywords: video, video sequence, emotion, target, information
Prior art date
Legal status: Granted
Application number
CN201911214119.8A
Other languages
Chinese (zh)
Other versions
CN111050194B (en)
Inventor
陆瀛海
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201911214119.8A priority Critical patent/CN111050194B/en
Publication of CN111050194A publication Critical patent/CN111050194A/en
Application granted granted Critical
Publication of CN111050194B publication Critical patent/CN111050194B/en
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251: Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866: Management of end-user data
    • H04N21/25891: Management of end-user data being end-user preferences

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a video sequence processing method, a video sequence processing apparatus, an electronic device, and a computer-readable storage medium. The method includes the following steps: acquiring a first video sequence, where the first video sequence includes a first video and the number of videos in the first video sequence is greater than 1; interleaving target promotion information into the first video sequence to form a second video sequence, where, in the second video sequence, the target promotion information is adjacent to the first video and the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video is less than or equal to a preset amplitude; and outputting the second video sequence. The method and apparatus reduce the contrast between the promotion information and the adjacent video, improve the matching degree between them, and thereby improve the output effect of the video sequence.

Description

Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video sequence processing method, a video sequence processing apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, the internet is entering the video age. With the rapid rise of short-video information streams, internet promotion information is increasingly combined with them. At present, promotion information is generally interleaved into a short-video information stream at random positions. Such random insertion can produce a sharp contrast between the promotion information and the surrounding videos, so that the matching degree between the promotion information and the stream is low and the output effect of the short-video information stream is poor.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a video sequence processing method, a video sequence processing apparatus, an electronic device, and a computer-readable storage medium, so as to solve the problem that randomly inserting promotion information into a short-video information stream yields a low matching degree between the promotion information and the stream and therefore a poor output effect. The specific technical scheme is as follows:
in a first aspect of the present invention, there is provided a method for processing a video sequence, the method including:
acquiring a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
interleaving target popularization information in the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion representation parameter of the target promotion information and the emotion representation parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence.
In a second aspect of the present invention, there is also provided a video sequence processing apparatus, including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a first video sequence, the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
the interleaving module is used for interleaving the target popularization information in the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion representation parameter of the target promotion information and the emotion representation parameter of the first video is smaller than or equal to a preset amplitude;
and the output module is used for outputting the second video sequence.
In a third aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
the processor is configured to implement the method steps described in the first aspect of the embodiment of the present invention when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute any of the above-described video sequence processing methods.
In yet another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the video sequence processing methods described above.
According to the video sequence processing method, the video sequence processing device, the electronic equipment and the computer readable storage medium, the popularization information is inserted into the video sequence according to the emotion representation parameters of the video and the popularization information. Because the variation amplitude between the emotion characterization parameter of the promotion information and the emotion characterization parameter of the adjacent video is smaller than or equal to the preset amplitude, the contrast between the promotion information and the video can be reduced, the matching degree between the promotion information and the video is improved, and the output effect of the video sequence is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a video sequence processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another video sequence processing method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a video sequence processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a video sequence processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step 101, obtaining a first video sequence, where the first video sequence includes a first video, and the number of videos in the first video sequence is greater than 1.
In this embodiment, the number of videos in the first video sequence is greater than 1, that is, the first video sequence is a video set including a plurality of videos, and may include two videos, three videos, five videos, and so on, and the specific number is not limited herein.
Specifically, the videos in the set are arranged in a certain order to form the first video sequence. The videos may be ordered by content similarity, by whether they share a publisher, or by the number of user clicks or views. It should be understood that the order of the videos in the set is not fixed; they may be arranged in any order, which is not limited herein.
In this embodiment, the first video sequence includes a first video. The first video is not necessarily the video in the first position of the first video sequence; it may be any video in the sequence, which is not limited herein.
In this step, the first video sequence may be obtained in various ways. For example, a plurality of videos may be selected from a video pool at random, or according to a preset rule, to compose the first video sequence; for instance, one video of type A, two videos of type B, and three videos of type C may be selected from the video pool to compose the first video sequence.
In addition, the first video sequence may also be obtained by a personalized recommendation algorithm, for example, through recall, coarse ranking, fine ranking, de-duplication, and similar stages, combining user features with video features. Recall methods include collaborative filtering, FM (Factorization Machines), FFM (Field-aware Factorization Machines), deep learning, and the like; the coarse-ranking and fine-ranking stages may likewise use filtering, factorization, and deep-learning methods. The method in this embodiment can be adapted to various personalized recommendation methods, which is an additional advantage of this embodiment.
Step 102, interleaving target popularization information in the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion representation parameter of the target promotion information and the emotion representation parameter of the first video is smaller than or equal to a preset amplitude.
In this embodiment, interleaving the target promotion information into the first video sequence may include: according to the determined target promotion information, determining a suitable insertion position in the first video sequence and inserting the target promotion information at that position.
Alternatively, when the target promotion information has not yet been determined, insertion positions in the first video sequence are determined first, and then target promotion information suitable for those positions is selected from a target promotion information set and inserted at the corresponding positions.
In this embodiment, the second video sequence is a video sequence interspersed with the target promotion information; that is, the second video sequence may include a plurality of videos and at least one piece of target promotion information, and the video sequence processing method described in this embodiment may be applied to each insertion of target promotion information.
Wherein the target promotion information is adjacent to the first video. Preferably, the target promotion information is located after the first video.
In this step, according to the emotion characterization parameter of the target promotion information, the difference delta between it and the emotion characterization parameter of each video in the first video sequence is calculated, scanning the sequence from front to back or from back to front; the video with the minimum delta is taken as the first video, and the position adjacent to that video is taken as a position meeting the requirement.
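The minimum-delta selection described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function name and the numeric parameter scale are assumptions, and the absolute difference is used as the variation amplitude.

```python
def select_first_video(promo_param, video_params):
    """Return the index of the video whose emotion characterization
    parameter is closest to that of the target promotion information
    (minimum |delta|), scanning the sequence front to back."""
    best_index = 0
    best_delta = abs(promo_param - video_params[0])
    for i, p in enumerate(video_params[1:], start=1):
        delta = abs(promo_param - p)
        if delta < best_delta:  # strict '<' keeps the earliest minimum
            best_index, best_delta = i, delta
    return best_index
```

For example, with a promotion parameter of 3 and video parameters [4, 5, 1, 7], the deltas are [1, 2, 2, 4] and the first video (index 0) is selected.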
In this embodiment, the fact that the target popularization information is adjacent to the first video may mean that the target popularization information is located behind the first video, or that the target popularization information is located in front of the first video, which is not limited herein.
In this embodiment, an emotion characterization parameter is a parameter characterizing how a video or a piece of promotion information affects the emotion of the user.
The emotion characterization parameters can represent the degree to which a video or promotion information influences the emotion of the user, and the parameters corresponding to different degrees are determined by grading. The grades may be represented by Arabic numerals, such as 1-10; by English letters, such as A-F; or by Roman numerals, such as I-V. It should be understood that the grading of the emotion characterization parameters is not limited to these forms. For ease of understanding, Arabic numerals are used for the emotion characterization parameters in the following description.
The emotion characterization parameters may include multiple dimensions. For example, the emotion dimension represents the emotion expressed by the content of the video or promotion information: a comedy video expresses happiness, a cautionary video expresses sadness, a suspense video expresses tension, a horror video expresses panic, and so on. The picture dimension represents the frequency of picture or shot switching of the video or promotion information: a sports video switches shots quickly, while a food video holds stable shots. The music dimension represents the style of the background music of the video or promotion information, whether intense or relaxed. The color dimension represents the chroma of the picture, whether warm or cool, bright or dark, and the like. Based on these multiple dimensions, videos or promotion information can be labeled with finer-grained emotion characterization parameters: for example, the emotion dimension may be divided into A1, A2, ..., An, and the color dimension into D1, D2, ..., Dm. Several emotion characterization parameters together form an emotion characterization parameter set, labeling the emotion of a video or promotion information along multiple dimensions. It should be understood that the dimensions of the emotion characterization parameters are not limited to these examples.
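As a sketch of how such a multi-dimensional label set might be held in practice (the dimension names, grade values, and helper function below are illustrative assumptions, not part of the patent):

```python
# Hypothetical multi-dimensional emotion label for one video:
# each dimension (emotion, picture pace, music, color) carries a
# graded level such as "A1" or "D2", as described in the text.
video_label = {
    "emotion": "A1",   # e.g. happy
    "picture": "B3",   # e.g. fast shot switching
    "music":   "C2",   # e.g. relaxed background music
    "color":   "D1",   # e.g. warm, bright palette
}

def same_grade(label_a, label_b, dimension):
    """Compare two labeled items on a single dimension."""
    return label_a.get(dimension) == label_b.get(dimension)
```

A finer comparison could map the grades to numeric levels per dimension; the equality check above is only the simplest form.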
In this embodiment, the emotion characterization parameters of each video or piece of promotion information may be determined by an algorithm; alternatively, samples of videos or promotion information may be labeled manually, a model may be obtained by machine training on those samples, and the model may then automatically label all videos and promotion information with emotion characterization parameters.
In addition, the emotion characterization parameters of each video or piece of promotion information may be obtained in advance, or computed on demand by a model in this step.
If the emotion representation parameters of each video or the promotion information are obtained in advance, a label of the emotion representation parameter can be added to each video or the promotion information, and thus each video or the promotion information carries the label of the emotion representation parameter, and the corresponding emotion representation parameter can be obtained while the first video sequence and the target promotion information are obtained. By adding the label of the emotion representation parameter to the video or the promotion information, the convenience of obtaining the emotion representation parameter can be improved.
Of course, a database for storing the emotion characterization parameters may also be provided, where the emotion characterization parameters of each video or promotion information are stored in the database, and each emotion characterization parameter carries a video identifier or a promotion information identifier. For example, if the emotion representation parameter of the video a needs to be acquired, the video sequence processing apparatus searches the emotion representation parameter carrying the video a identifier from the database, and uses the emotion representation parameter as the emotion representation parameter of the video a.
The mode of obtaining the emotion representation parameters of each video or promotion information can be realized through a model. Specifically, each video or promotion information can be input into the model, the emotional feature extraction is performed on the picture content of the video, the music of the video or the color of the video through the model to obtain an emotional feature matrix, and the normalization processing is performed on the emotional feature matrix to obtain the emotional characterization parameters of each video or promotion information. The emotion characterization parameters of each video or promotion information obtained through model calculation can be stored in a database, and the data format in the database can be video identification, emotion characterization dimensionality and score. The model may be a model created in advance, or may be a model obtained by inputting video data into a convolutional neural network and training the convolutional neural network, which is not limited in this embodiment.
Various manners of obtaining the emotion representation parameters of each video or promotion information are not listed here.
In this embodiment, the preset amplitude is a preset threshold. If the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video falls within this threshold, the two are similar in emotion characterization, so the target promotion information can be inserted at a position adjacent to the first video. Taking Arabic numerals as an example, the threshold may be 2: if the emotion characterization parameter of the target promotion information is 3 and that of a certain video in the video sequence is 5, that video may be determined as the first video and the target promotion information inserted adjacent to it. Here, the variation amplitude between the two emotion characterization parameters may be understood as the (absolute) difference obtained by subtracting one from the other.
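The threshold test in the worked example above amounts to a single comparison; the sketch below is an illustrative assumption (function name and default amplitude are not from the patent):

```python
def can_insert_adjacent(promo_param, video_param, preset_amplitude=2):
    """True when the variation amplitude (absolute difference) between
    the two emotion characterization parameters is within the preset
    amplitude, i.e. the promotion item may sit next to this video."""
    return abs(promo_param - video_param) <= preset_amplitude
```

With a promotion parameter of 3, a video parameter of 5, and a preset amplitude of 2, the check passes, matching the example in the text.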
And step 103, outputting the second video sequence.
The video sequence processing method in this embodiment may be implemented by a video sequence processing apparatus, which may be a server or a client.
If the video sequence processing apparatus is a server, step 103 may be understood as that the server sends the second video sequence to the client. Specifically, after the server executes step 101 and step 102, a second video sequence interspersed with the target popularization information is generated, and the second video sequence is sent to the client for the user to watch.
If the video sequence processing apparatus is a client, step 103 may be understood as that the client plays the second video sequence. Specifically, after the client executes step 101 and step 102, a second video sequence interspersed with the target popularization information is generated, and the second video sequence is played for the user to watch.
According to the video sequence processing method, the video sequence processing device and the electronic equipment, the popularization information is inserted into the video sequence according to the emotion representation parameters of the video and the popularization information. Because the variation amplitude between the emotion characterization parameter of the promotion information and the emotion characterization parameter of the adjacent video is smaller than or equal to the preset amplitude, the embodiment of the invention can reduce the contrast between the promotion information and the video, improve the matching degree between the promotion information and the video and further improve the output effect of the video sequence. In this way, the user experience can be improved when the user watches the video sequence.
Referring to fig. 2, fig. 2 is a schematic diagram of another video sequence processing method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step 201, a first video sequence is obtained, where the first video sequence includes a first video and a second video, and the second video is adjacent to the first video.
In this embodiment, the second video and the first video are adjacent to each other, that is, the second video is located before the first video, or the second video is located after the first video, and preferably, the second video is located after the first video.
It should be noted that the second video is not limited to the second video in the first video sequence, and may be a video adjacent to any first video in the first video sequence, which is not limited herein.
The specific implementation process of this step may be as described with reference to step 101 in the embodiment shown in fig. 1, and is not described herein again.
Step 202, interleaving target promotion information between the first video and the second video to form a second video sequence, where the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video is less than or equal to a preset amplitude, and the variation amplitude between the emotion characterization parameter of the target promotion information and that of the second video is less than or equal to the preset amplitude.
In this embodiment, the step 202 at least includes the following two optional embodiments, and any one of the two optional embodiments may be selected. These two alternative embodiments are described below.
Optionally, the step 202 includes:
acquiring target popularization information, emotion characterization parameters of the target popularization information and emotion characterization parameters of each video in the first video sequence;
determining the insertion position of the target popularization information in the first video sequence according to the emotion characterization parameters of the target popularization information and the emotion characterization parameters of each video in the first video sequence;
and according to the insertion position, inserting the target promotion information into the first video sequence to form a second video sequence.
In this embodiment, the target promotion information is one or more determined pieces of promotion information, and a position at which each piece can be inserted into the first video sequence needs to be determined. The video sequence processing apparatus may obtain the target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameters of the videos in the first video sequence, and determine the insertion position of the target promotion information in the first video sequence from these parameters.
The method for acquiring the target popularization information and the emotion characterization parameters of each video in the first video sequence may refer to the description of step 102 in the embodiment shown in fig. 1, and is not repeated here.
For the convenience of understanding the present embodiment, the following is specifically illustrated by example E1:
Assume that the emotion characterization parameter of a piece of target promotion information a is 3, and the emotion characterization parameters of the videos in the first video sequence are: video A is 4, video B is 5, video C is 1, and video D is 7. The first video sequence is arranged in the order A, B, C, D, and the positions at which the target promotion information a can be inserted need to be determined.
Assuming that the preset amplitude is 2, and requiring that the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video, and likewise that of the second video, be less than or equal to the preset amplitude, it can be determined that the target promotion information a can be inserted between video A and video B, or between video B and video C, but not between video C and video D.
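The check in example E1 can be reproduced with a short helper; this is an illustrative sketch under the same assumptions as the example (absolute difference as variation amplitude, preset amplitude 2), not the patent's implementation:

```python
def valid_gaps(promo_param, video_params, preset_amplitude=2):
    """Indices i such that the promotion information may be inserted
    between video i and video i+1 (both neighbours within amplitude)."""
    return [
        i
        for i in range(len(video_params) - 1)
        if abs(promo_param - video_params[i]) <= preset_amplitude
        and abs(promo_param - video_params[i + 1]) <= preset_amplitude
    ]

# Example E1: promo a = 3, videos A..D = 4, 5, 1, 7, amplitude 2
# -> the gaps A/B and B/C are valid, the gap C/D is not.
```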
In the embodiment of the invention, the position where the target popularization information can be interspersed can be searched in the first video sequence according to the determined target popularization information, so that the video sequence processing method is more flexible.
Optionally, the step 202 includes:
determining an insertion position of popularization information to be inserted in the first video sequence, wherein the insertion position is a position adjacent to the first video;
acquiring target popularization information according to the emotion characterization parameters of the first video and the emotion characterization parameters of the candidate popularization information;
and according to the insertion position, inserting the target promotion information into the first video sequence to form a second video sequence.
In this embodiment, an insertion position at which promotion information is to be inserted is first determined in the first video sequence; target promotion information that can be inserted adjacent to the first video then needs to be determined. The video sequence processing apparatus may select the target promotion information from the candidate promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidates, and insert it into the first video sequence at the insertion position to form the second video sequence.
When the first video sequence includes a second video, the video sequence processing apparatus further needs to select appropriate target promotion information from the target promotion information set according to the emotion characterization parameter of the second video.
In this embodiment, the insertion position at which promotion information is to be inserted in the first video sequence may be determined in several ways. For example, it may be determined from the number of videos in the first video sequence: the middle of the sequence may be taken as the insertion position, so that if the first video sequence contains 10 videos, the position between the fifth and the sixth video is the insertion position. Alternatively, the insertion position may be randomly determined; and so on.
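The "middle of the sequence" rule above reduces to one integer division; the helper below is a hypothetical sketch (the function name and gap-indexing convention are assumptions):

```python
def middle_insertion_gap(num_videos):
    """Gap index for the 'middle of the sequence' rule: with n videos,
    gap k means 'between video k and video k+1' (1-based counting),
    so the middle gap is n // 2."""
    return num_videos // 2
```

With 10 videos this yields gap 5, i.e. the position between the fifth and the sixth video, as in the example above.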
For the manner of obtaining the emotion characterization parameters of the first video and the second video and the emotion characterization parameters of the candidate promotion information, refer to the description of step 102 in the embodiment shown in fig. 1; details are not repeated here.
For ease of understanding, this embodiment is specifically illustrated by example E2:
Suppose there exists a first video sequence in which the emotion characterization parameters of the videos are: video A is 4, video B is 5, video C is 1, and video D is 7. The first video sequence is arranged in the order video A, B, C, D, and target promotion information that can be inserted between video A and video B needs to be determined.
Assume the preset amplitude is 2. According to the condition that the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video (here video B, whose emotion characterization parameter is 5) is smaller than or equal to the preset amplitude, it may be determined that, in the target promotion information set, target promotion information with an emotion characterization parameter of 3 to 7 can be inserted between video A and video B.
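The amplitude condition of example E2 can be sketched as a simple filter over the candidate promotion information. The function name and the candidate identifiers (`ad1`…`ad5`) are illustrative assumptions, not part of the embodiment:

```python
def select_target_promotions(candidates, first_param, preset_amplitude):
    # Keep only the candidate promotion information whose emotion
    # characterization parameter differs from that of the adjacent
    # (first) video by at most the preset amplitude.
    return {
        promo_id: param
        for promo_id, param in candidates.items()
        if abs(param - first_param) <= preset_amplitude
    }

# Example E2: the adjacent video has emotion characterization parameter 5
# and the preset amplitude is 2, so parameters in [3, 7] qualify.
candidates = {"ad1": 2, "ad2": 3, "ad3": 5, "ad4": 7, "ad5": 8}
selected = select_target_promotions(candidates, first_param=5, preset_amplitude=2)
print(sorted(selected))  # prints ['ad2', 'ad3', 'ad4']
```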
In the embodiment of the invention, the target promotion information suitable for the insertion position can be found according to the determined insertion position in the first video sequence, so that the video sequence processing method is more flexible.
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters, and the emotion characterization parameter of the target promotion information lies between the emotion characterization parameter of the first video and that of the second video.
In this embodiment, the emotion characterization parameters of the videos in the first video sequence are ordered monotonically; that is, the emotion across the first video sequence changes gradually. In this case, target promotion information whose emotion characterization parameter lies between that of the first video and that of the second video may be inserted directly, so that the emotion also transitions gradually between the videos and the promotion information.
It should be noted that the emotion characterization parameters of the videos in the first video sequence may be sorted, taking Arabic numerals as an example, in either descending or ascending order; no limitation is imposed here.
For ease of understanding, this embodiment is specifically illustrated by example E3:
Suppose there exists a first video sequence in which the emotion characterization parameters of the videos are: video A is 1, video B is 4, video C is 5, and video D is 7, and the first video sequence is arranged in the order video A, B, C, D. Then target promotion information with an emotion characterization parameter of 1 to 4 can be inserted between video A and video B, and target promotion information with an emotion characterization parameter of 5 to 7 can be inserted between video C and video D.
That is to say, in this embodiment, when the emotion characterization parameters of the videos in the first video sequence are sorted in order, it is not enough that the variation amplitude between the emotion characterization parameter of the target promotion information and that of the first video is smaller than or equal to the preset amplitude; the direction of the change in emotion characterization parameter between the target promotion information and the first video must also be consistent with the direction of change within the first video sequence.
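The selection rule of example E3 can be sketched as keeping only the candidates whose parameter lies between those of the two adjacent videos. The function name and candidate identifiers are illustrative assumptions of this sketch:

```python
def select_for_sorted_sequence(candidates, prev_param, next_param):
    # For a monotonically ordered first video sequence, keep the
    # candidates whose emotion characterization parameter lies between
    # the parameters of the two adjacent videos (inclusive), so that
    # the gradual emotional trend of the sequence is preserved.
    lo, hi = min(prev_param, next_param), max(prev_param, next_param)
    return {pid: p for pid, p in candidates.items() if lo <= p <= hi}

# Example E3: between video A (1) and video B (4) only parameters 1..4
# qualify; between video C (5) and video D (7) only parameters 5..7 do.
candidates = {"ad1": 2, "ad2": 4, "ad3": 6, "ad4": 9}
print(sorted(select_for_sorted_sequence(candidates, 1, 4)))  # prints ['ad1', 'ad2']
print(sorted(select_for_sorted_sequence(candidates, 5, 7)))  # prints ['ad3']
```

Because the bounds are taken with `min`/`max`, the same sketch covers both ascending and descending sequences.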
Step 203, outputting the second video sequence.
The specific implementation process of this step may refer to the description of step 103 in the embodiment shown in fig. 1 and is not repeated here.
In the embodiment of the present invention, various optional implementations are added to the embodiment shown in fig. 1, so as to further improve the flexibility of the video sequence processing method.
Referring to fig. 3, fig. 3 is a structural diagram of a video sequence processing apparatus according to an embodiment of the present invention.
As shown in fig. 3, the video sequence processing apparatus 300 includes:
an obtaining module 301, configured to obtain a first video sequence, where the first video sequence includes a first video, and the number of videos in the first video sequence is greater than 1;
an insertion module 302, configured to insert target promotion information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
a sending module 303, configured to output the second video sequence.
Optionally, the insertion module 302 is specifically configured to:
acquire target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameter of each video in the first video sequence;
determine the insertion position of the target promotion information in the first video sequence according to the emotion characterization parameter of the target promotion information and the emotion characterization parameters of the videos in the first video sequence;
and insert, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
Or, optionally, the insertion module 302 is specifically configured to:
determine an insertion position where promotion information is to be inserted in the first video sequence, wherein the insertion position is adjacent to the first video;
acquire target promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidate promotion information;
and insert, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
Optionally, the first video sequence further includes a second video adjacent to the first video;
the insertion module 302 is specifically configured to:
insert target promotion information between the first video and the second video to form a second video sequence;
wherein the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target promotion information lies between the emotion characterization parameter of the first video and that of the second video.
It should be noted that, in the embodiment of the present invention, the video sequence processing apparatus 300 may be a server or a client. The video sequence processing apparatus 300 may implement any implementation manner of the video sequence processing apparatus in the method embodiments and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
Referring to fig. 4, fig. 4 is a structural diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 4, the electronic device 600 includes a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 communicate with one another through the communication bus 604.
The memory 603 is used for storing a computer program.
The processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
acquiring a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
inserting target promotion information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence.
Optionally, when executing the program stored in the memory 603, the processor 601 further implements the following steps:
acquiring target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameter of each video in the first video sequence;
determining the insertion position of the target promotion information in the first video sequence according to the emotion characterization parameter of the target promotion information and the emotion characterization parameters of the videos in the first video sequence;
and inserting, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
Optionally, when executing the program stored in the memory 603, the processor 601 further implements the following steps:
determining an insertion position where promotion information is to be inserted in the first video sequence, wherein the insertion position is adjacent to the first video;
acquiring target promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidate promotion information;
and inserting, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
Optionally, the first video sequence further includes a second video adjacent to the first video;
when executing the program stored in the memory 603, the processor 601 further implements the following steps:
inserting target promotion information between the first video and the second video to form a second video sequence;
wherein the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
Optionally, in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target promotion information lies between the emotion characterization parameter of the first video and that of the second video.
The communication bus 604 mentioned in the electronic device 600 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 604 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 602 is used for communication between the electronic apparatus 600 and other apparatuses.
The Memory 603 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor 601 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute the video sequence processing method described in any of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the video sequence processing method of any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A method for processing a video sequence, the method comprising:
acquiring a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
inserting target promotion information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
outputting the second video sequence.
2. The method of claim 1, wherein the step of inserting the target promotion information into the first video sequence to form a second video sequence comprises:
acquiring target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameter of each video in the first video sequence;
determining the insertion position of the target promotion information in the first video sequence according to the emotion characterization parameter of the target promotion information and the emotion characterization parameters of the videos in the first video sequence;
and inserting, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
3. The method of claim 1, wherein the step of inserting the target promotion information into the first video sequence to form a second video sequence comprises:
determining an insertion position where promotion information is to be inserted in the first video sequence, wherein the insertion position is adjacent to the first video;
acquiring target promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidate promotion information;
and inserting, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
4. The video sequence processing method according to any one of claims 1 to 3, wherein the first video sequence further includes a second video adjacent to the first video;
the step of inserting the target promotion information into the first video sequence to form a second video sequence comprises:
inserting target promotion information between the first video and the second video to form a second video sequence;
wherein the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
5. The method of claim 4, wherein in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target promotion information lies between the emotion characterization parameter of the first video and that of the second video.
6. A video sequence processing apparatus, comprising:
an acquisition module, configured to acquire a first video sequence, wherein the first video sequence comprises a first video, and the number of videos in the first video sequence is greater than 1;
an insertion module, configured to insert target promotion information into the first video sequence to form a second video sequence; in the second video sequence, the target promotion information is adjacent to the first video, and the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the first video is smaller than or equal to a preset amplitude;
and an output module, configured to output the second video sequence.
7. The video sequence processing apparatus of claim 6, wherein the insertion module is specifically configured to:
acquire target promotion information, the emotion characterization parameter of the target promotion information, and the emotion characterization parameter of each video in the first video sequence;
determine the insertion position of the target promotion information in the first video sequence according to the emotion characterization parameter of the target promotion information and the emotion characterization parameters of the videos in the first video sequence;
and insert, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
8. The video sequence processing apparatus of claim 6, wherein the insertion module is specifically configured to:
determine an insertion position where promotion information is to be inserted in the first video sequence, wherein the insertion position is adjacent to the first video;
acquire target promotion information according to the emotion characterization parameter of the first video and the emotion characterization parameters of the candidate promotion information;
and insert, according to the insertion position, the target promotion information into the first video sequence to form a second video sequence.
9. The video sequence processing apparatus according to any one of claims 6 to 8, wherein the first video sequence further includes a second video adjacent to the first video;
the insertion module is specifically configured to:
insert target promotion information between the first video and the second video to form a second video sequence;
wherein the variation amplitude between the emotion characterization parameter of the target promotion information and the emotion characterization parameter of the second video is smaller than or equal to the preset amplitude.
10. The video sequence processing apparatus according to claim 9, wherein in the first video sequence, the videos are sorted in ascending or descending order of their emotion characterization parameters;
and the emotion characterization parameter of the target promotion information lies between the emotion characterization parameter of the first video and that of the second video.
11. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN201911214119.8A 2019-12-02 2019-12-02 Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium Active CN111050194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911214119.8A CN111050194B (en) 2019-12-02 2019-12-02 Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111050194A true CN111050194A (en) 2020-04-21
CN111050194B CN111050194B (en) 2022-05-17

Family

ID=70234265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911214119.8A Active CN111050194B (en) 2019-12-02 2019-12-02 Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111050194B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022089467A1 (en) * 2020-10-30 2022-05-05 百果园技术(新加坡)有限公司 Video data sorting method and apparatus, computer device, and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1246235A (en) * 1996-12-19 2000-03-01 英戴克系统公司 EPG with advertising inserts
CN102495873A (en) * 2011-11-30 2012-06-13 北京航空航天大学 Video recommending method based on video affective characteristics and conversation models
CN103385008A (en) * 2010-11-30 2013-11-06 摩托罗拉移动有限责任公司 A method of targeted ad insertion using HTTP live streaming protocol
US20140101695A1 (en) * 2012-09-27 2014-04-10 Canoe Ventures, Llc Auctioning for content on demand asset insertion
CN104541514A (en) * 2012-09-25 2015-04-22 英特尔公司 Video indexing with viewer reaction estimation and visual cue detection
EP3110158A1 (en) * 2015-06-22 2016-12-28 AD Insertion Platform Sarl Method and platform for automatic selection of video sequences to fill a break in a broadcasted program
CN106792003A (en) * 2016-12-27 2017-05-31 西安石油大学 A kind of intelligent advertisement inserting method, device and server
CN107066564A (en) * 2017-03-31 2017-08-18 武汉斗鱼网络科技有限公司 A kind of data processing method and device based on Android list
CN110162664A (en) * 2018-12-17 2019-08-23 腾讯科技(深圳)有限公司 Video recommendation method, device, computer equipment and storage medium
CN110263215A (en) * 2019-05-09 2019-09-20 众安信息技术服务有限公司 A kind of video feeling localization method and system
CN110378732A (en) * 2019-07-18 2019-10-25 腾讯科技(深圳)有限公司 Information display method, information correlation method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王方圆 et al.: "Video segment localization algorithm based on spatio-temporal gray-scale order features" ("基于时空灰度序特征的视频片段定位算法"), 《软件学报》 (Journal of Software) *


Also Published As

Publication number Publication date
CN111050194B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN108009293B (en) Video tag generation method and device, computer equipment and storage medium
CN108540826B (en) Bullet screen pushing method and device, electronic equipment and storage medium
CN109710841B (en) Comment recommendation method and device
CN109657213B (en) Text similarity detection method and device and electronic equipment
US10565401B2 (en) Sorting and displaying documents according to sentiment level in an online community
CN110582025A (en) Method and apparatus for processing video
CN110929125B (en) Search recall method, device, equipment and storage medium thereof
CN109325146B (en) Video recommendation method and device, storage medium and server
CN109271594B (en) Recommendation method of electronic book, electronic equipment and computer storage medium
US20140095308A1 (en) Advertisement distribution apparatus and advertisement distribution method
US20180276478A1 (en) Determining Most Representative Still Image of a Video for Specific User
JP6776310B2 (en) User-Real-time feedback information provision methods and systems associated with input content
CN108197336B (en) Video searching method and device
CN108256044A (en) Direct broadcasting room recommends method, apparatus and electronic equipment
CN107977678A (en) Method and apparatus for output information
CN112966081A (en) Method, device, equipment and storage medium for processing question and answer information
Alshehri et al. Think before your click: Data and models for adult content in arabic twitter
Bost et al. Extraction and analysis of dynamic conversational networks from tv series
CN110209780B (en) Question template generation method and device, server and storage medium
CN113094543B (en) Music authentication method, device, equipment and medium
CN111050194B (en) Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium
CN113407775B (en) Video searching method and device and electronic equipment
US20190121911A1 (en) Cognitive content suggestive sharing and display decay
TWI575391B (en) Social data filtering system, method and non-transitory computer readable storage medium of the same
CN112163415A (en) User intention identification method and device for feedback content and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant