CN113784174A - Method, device, electronic equipment and medium for generating video preview dynamic image - Google Patents


Info

Publication number
CN113784174A
CN113784174A (application CN202110085485.9A)
Authority
CN
China
Prior art keywords
target video, video, weight distribution, user, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110085485.9A
Other languages
Chinese (zh)
Other versions
CN113784174B (en)
Inventor
莫文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202110085485.9A priority Critical patent/CN113784174B/en
Publication of CN113784174A publication Critical patent/CN113784174A/en
Application granted granted Critical
Publication of CN113784174B publication Critical patent/CN113784174B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, an apparatus, an electronic device, and a medium for generating a video preview dynamic image are provided. The method comprises: acquiring user events that occur while a target video is played, together with the video positions at which they occur, where a user event comprises at least one user behavior related to interest in the target video; determining an event weight distribution for the target video from the type, number, and weight of the user behaviors in the user events and the corresponding video positions; and determining, from the event weight distribution and a set preview duration, the target video frame segment of the target video used to generate the video preview dynamic image.

Description

Method, device, electronic equipment and medium for generating video preview dynamic image
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for generating a video preview dynamic image.
Background
As video content becomes more widespread, a way to quickly preview videos is often needed when facing large amounts of content. Typically, what the user browses is a set of clips intercepted from the original video, allowing the user to scan the video quickly.
While developing the disclosed concept, the inventors identified at least the following problems in the prior art: (1) some approaches that extract video clips to generate preview dynamic images must first process the original video file to extract key information, a procedure that is complex and resource-intensive; and (2) the clips produced by some extraction methods fail to cover the points that interest users or omit important content of the video.
Disclosure of Invention
In view of the above, the present disclosure provides a method, an apparatus, an electronic device, and a medium for generating a video preview dynamic image.
One aspect of the present disclosure provides a method of generating a video preview dynamic image. The method comprises: acquiring user events that occur while a target video is played, together with the video positions at which they occur, where a user event comprises at least one user behavior related to interest in the target video; determining an event weight distribution for the target video from the type, number, and weight of the user behaviors in the user events and the corresponding video positions; and determining, from the event weight distribution of the target video and a set preview duration, the target video frame segment used to generate the video preview dynamic image.
According to an embodiment of the disclosure, acquiring the user events occurring while the target video is played and the corresponding video positions includes: acquiring the user events, and their video positions, monitored by M terminal devices during at least one playback of the target video, where M ≥ 1.
According to an embodiment of the disclosure, determining the event weight distribution of the target video from the type, number, and weight of the user behaviors and the corresponding video positions includes: determining, for each video position of the target video at which a user event occurs, the type and number of the user behaviors; determining the total weight of each such video position from the per-type weights and the behavior counts; and determining the event weight distribution of the target video from these video positions and their total weights.
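The per-position weighting just described can be sketched as follows. This is a minimal illustration, not the patented implementation; the behavior types, weight values, and function names are assumptions, since the disclosure leaves concrete values open.

```python
from collections import defaultdict

# Illustrative per-type weights; the disclosure does not fix concrete values.
BEHAVIOR_WEIGHTS = {"like": 1.0, "comment": 2.0, "favorite": 3.0, "forward": 4.0}

def event_weight_distribution(events, weights=BEHAVIOR_WEIGHTS):
    """Total weight per video position: for each position where user events
    occurred, sum count(type) * weight(type) over the behavior types seen there."""
    counts = defaultdict(lambda: defaultdict(int))
    for position, behavior in events:
        counts[position][behavior] += 1
    return {pos: sum(n * weights[b] for b, n in by_type.items())
            for pos, by_type in counts.items()}

# Two likes and a forward at 60 s, one favorite at 480 s:
dist = event_weight_distribution(
    [(60, "like"), (60, "like"), (60, "forward"), (480, "favorite")])
print(dist)  # {60: 6.0, 480: 3.0}
```

Position 60 s totals 2 × 1.0 + 1 × 4.0 = 6.0, making it the more heavily weighted candidate for the preview.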
According to an embodiment of the disclosure, determining the target video frame segment used to generate the video preview dynamic image from the event weight distribution of the target video and the set preview duration includes: determining the maxima of the weight distribution of the target video; determining one or more reference position points from the video positions corresponding to those maxima; and determining the target video frame segment in the target video from the reference position points and the set preview duration, such that each reference position point lies within the target video frame segment.
According to an embodiment of the disclosure, determining a reference position point from the video position corresponding to a weight-distribution maximum includes: if the target video has a unique weight-distribution maximum, determining the video position corresponding to that maximum as the reference position point; and if the target video has multiple weight-distribution maxima, determining the reference position points from the distribution density near the maxima and/or the magnitude of the maxima.
According to an embodiment of the disclosure, determining reference position points from the distribution density near the weight-distribution maxima of the target video and/or the magnitude of those maxima includes one of the following: when multiple maxima exist, taking as reference position points the video positions of the single maximum, or of the T maxima (T ≥ 2), with the highest distribution density around them; or taking as reference position points the video positions of the single maximum, or of the S maxima (S ≥ 2), with the largest values; or taking as reference position points the video position of the single maximum that is both densest and largest in value, or the video positions of the V maxima (V ≥ 2) that rank highest on both density and value.
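One of the selection strategies above (largest-value maxima) can be sketched as follows. The function name and the flat dictionary input are assumptions for illustration; a fuller implementation would first isolate local maxima of the distribution and could also rank by the density of events around each maximum.

```python
def pick_reference_points(distribution, s=1):
    """Among the candidate positions of a {position: total_weight} map,
    keep the s positions whose total weight is largest (one of the
    strategies described in the disclosure; density-based ranking is
    omitted in this sketch)."""
    return sorted(distribution, key=distribution.get, reverse=True)[:s]

points = pick_reference_points({60: 6.0, 200: 2.5, 480: 3.0}, s=2)
print(points)  # [60, 480]
```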
According to an embodiment of the disclosure, determining the target video frame segment in the target video from the reference position points and the set preview duration includes: when there are multiple reference position points, determining an allocation duration for each, such that the allocation durations of all reference position points sum to the set preview duration; determining, for each reference position point, a candidate video frame segment from that point and its allocation duration; and splicing the candidate video frame segments of all reference position points to obtain the target video frame segment.
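The allocation-and-splicing step can be sketched as follows, assuming an equal split of the preview duration across reference points (one possible allocation; the disclosure does not mandate equal shares) and candidate segments centred on each point:

```python
def candidate_ranges(ref_points, preview_len_s, video_len_s):
    """Split the set preview duration evenly across the reference points
    and centre each candidate segment on its point, clamped to the video
    bounds. Returns (start, end) ranges in seconds; splicing the frames
    of these ranges, in order, yields the target video frame segment."""
    share = preview_len_s / len(ref_points)
    ranges = []
    for p in sorted(ref_points):
        start = min(max(p - share / 2, 0.0), video_len_s - share)
        ranges.append((start, start + share))
    return ranges

# A 6 s preview from two reference points in a 600 s video:
print(candidate_ranges([60.0, 480.0], 6.0, 600.0))
# [(58.5, 61.5), (478.5, 481.5)]
```

Each reference point lies inside its candidate range, matching the constraint stated above that reference position points are located within the target video frame segment.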
A second aspect of the present disclosure provides an apparatus for generating a video preview dynamic image. The apparatus includes a user event information acquisition module, an event weight distribution determination module, and a target video frame segment determination module. The user event information acquisition module acquires the user events occurring while the target video is played and the corresponding video positions, where a user event comprises at least one user behavior related to interest in the target video. The event weight distribution determination module determines the event weight distribution of the target video from the type, number, and weight of the user behaviors. The target video frame segment determination module determines, from the event weight distribution of the target video and the set preview duration, the target video frame segment used to generate the video preview dynamic image.
A third aspect of the present disclosure provides an electronic device. The electronic device includes: one or more processors; and storage means for storing one or more programs. Wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods described above.
A fourth aspect of the present disclosure also provides a computer-readable storage medium. The above-described computer-readable storage medium has stored thereon executable instructions that, when executed by a processor, cause the processor to implement any of the methods described above.
A fifth aspect of the disclosure provides a computer program product. The computer program product comprises a computer program containing program code for executing the method provided by the embodiments of the disclosure; when the product runs on an electronic device, the program code causes the device to implement the method for generating a video preview dynamic image provided by the embodiments of the disclosure.
According to embodiments of the disclosure, the event weight distribution of the target video is determined from the user events occurring during playback and their video positions. Because this distribution characterizes how interested users are at each playback position of the target video, the target video frame segment determined from it and the set preview duration is a segment users are relatively interested in, and most such segments also reflect the core information of the video. This at least partially solves the problem of generated clips failing to cover points of user interest or omitting important content; moreover, the video content itself need not be analyzed, which saves processing resources.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates a system architecture of a method and apparatus for generating a video preview dynamic graph according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of generating a video preview dynamic graph according to an embodiment of the present disclosure;
fig. 3 schematically shows an implementation process diagram of operation S11 according to an embodiment of the present disclosure, where (a) illustrates a scenario of counting a plurality of terminal devices playing a target video, and (b) illustrates a user event occurring during playing of the target video and a statistical table of video positions corresponding to the user event;
fig. 4 schematically shows a detailed implementation flowchart of operation S12 according to an embodiment of the present disclosure;
fig. 5 schematically illustrates an implementation process diagram of operation S12 according to an embodiment of the present disclosure;
fig. 6 schematically shows a detailed implementation flowchart of operation S13 according to an embodiment of the present disclosure;
fig. 7 schematically shows a detailed implementation flowchart of sub-operation S133 according to an embodiment of the present disclosure;
fig. 8 schematically illustrates an implementation process diagram of operation S13 according to an embodiment of the present disclosure;
fig. 9 schematically illustrates another implementation of operation S13 according to an embodiment of the present disclosure;
fig. 10 is a block diagram schematically illustrating a structure of an apparatus for generating a video preview dynamic image according to an embodiment of the present disclosure; and
fig. 11 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Embodiments of the disclosure provide a method, an apparatus, an electronic device, and a medium for generating a video preview dynamic image. In the method, user events occurring while a target video is played are acquired together with the corresponding video positions, where a user event comprises at least one user behavior related to interest in the target video. The event weight distribution of the target video is then determined from the type, number, and weight of the user behaviors in the user events and the corresponding video positions. Finally, the target video frame segment used to generate the video preview dynamic image is determined from the event weight distribution of the target video and the set preview duration.
Fig. 1 schematically shows a system architecture of a method and apparatus for generating a video preview dynamic image according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
Referring to fig. 1, a system architecture 1 according to this embodiment may include a terminal device 10, a network 11, and a server 12. The network 11 serves as the medium providing a communication link between the terminal device 10 and the server 12. Network 11 may include various types of connections, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal device 10 to interact with the server 12 via the network 11 to receive or send messages or the like. The terminal device 10 may have an application program installed thereon for playing video, and may also have various communication client applications installed thereon, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like (for example only).
The terminal device 10 may be various electronic devices having a display screen and supporting video playing, such as a smart phone 101, a tablet computer 102, a notebook computer 103, and the like illustrated in fig. 1, and may also be a desktop computer, a smart watch, or other electronic devices.
The server 12 may be a server that provides various services, such as a background management server (for example only) that supports the videos browsed by users of the terminal devices 10. The background management server may analyze and otherwise process received data such as user requests, and feed the processing result (e.g., a webpage, information, or data obtained or generated according to the user request) back to the terminal device.
It should be noted that the method for generating a video preview dynamic image provided by the embodiment of the present disclosure may be generally executed by the server 12 or the terminal device 10. Accordingly, the apparatus for generating a video preview dynamic image provided by the embodiment of the present disclosure may be generally disposed in the server 12 or the terminal device 10. The method for generating the video preview dynamic graph provided by the embodiment of the present disclosure may also be executed by a server or a server cluster which is different from the server 12 and can communicate with the terminal device 10 and/or the server 12. Accordingly, the apparatus for generating a video preview dynamic image provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 12 and capable of communicating with the terminal device 10 and/or the server 12.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
In a first exemplary embodiment of the present disclosure, a method of generating a video preview dynamic image is provided.
Fig. 2 schematically illustrates a flow chart of a method of generating a video preview dynamic graph according to an embodiment of the present disclosure.
Referring to fig. 2, the method for generating a video preview dynamic image according to the present embodiment includes the following operations: s11, S12, and S13.
In operation S11, user events occurring while the target video is played are acquired, together with the video positions at which they occur, where a user event comprises at least one user behavior related to interest in the target video.
In operation S12, the event weight distribution of the target video is determined from the type, number, and weight of the user behaviors in the user events and the corresponding video positions.
In operation S13, the target video frame segment used to generate the video preview dynamic image is determined from the event weight distribution of the target video and the set preview duration.
Operation S11, acquiring the user events occurring while the target video is played and the corresponding video positions, may be executed by the terminal device. While a user plays the target video on a terminal device, a monitoring module built into the device can monitor and record user events during playback. The record includes the video position at which each user event occurs, referred to below as the video position corresponding to the user event.
In operation S11, a user event includes at least one user behavior related to interest in the target video. User behaviors include, but are not limited to, liking, favoriting, commenting, forwarding, and downloading. A recorded example: the target video is 10 min long in total, a like occurs when playback reaches 1 min, and a favorite occurs when playback reaches 8 min.
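The recorded example can be represented as a simple event log. This is a hypothetical schema for illustration only; the disclosure does not fix field names or a storage format.

```python
from dataclasses import dataclass

@dataclass
class UserEvent:
    """One user behavior observed during playback of the target video.
    Field names are illustrative assumptions, not part of the disclosure."""
    behavior: str      # e.g. "like", "favorite", "comment", "forward", "download"
    position_s: float  # playback position, in seconds, at which it occurred

# The example above: a 600 s (10 min) target video with a like at the
# 1 min mark and a favorite at the 8 min mark.
events = [UserEvent("like", 60.0), UserEvent("favorite", 480.0)]
```

A monitoring module on the terminal device would append such records during playback and either return them on request or report them periodically, as described below.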
Operation S11 may also be executed by the server. In one variant, the monitoring module built into the terminal device monitors and records user events while the user plays the target video; when the server needs the user events and their video positions, it sends an acquisition request to the terminal device, which returns the recorded data. In another variant, the terminal device periodically reports its statistics for a period of time to the server, which stores the user events and the corresponding video positions; when operation S11 is executed, the server reads them directly from its storage module.
The above operation S12 may be performed by the terminal device or the server. User behaviors in a user event come in many types, which may be set according to how strongly each behavior correlates with interestingness and with the emphasis of the target video: for example, each distinct user behavior may be treated as its own type, or several behaviors with similar interestingness may be grouped into one type. Each behavior type has its own weight, and different types generally have unequal weights. A weight reflects the relative importance of its behavior type; weight values may be preset, or updated and adjusted dynamically as conditions change.
The operation S13 of determining, according to the event weight distribution of the target video and the set preview duration, the target video frame segment used to generate the video preview dynamic image may be executed by the terminal device or the server.
In the method for generating a video preview dynamic image according to this embodiment, the event weight distribution of the target video is determined from the user events occurring during playback and the video positions corresponding to those events. Because the event weight distribution characterizes the user interest level at each playback position, the target video frame segment determined from this distribution and the set preview duration is a segment users are interested in, and the segments most users are interested in also tend to reflect the core information of the video. This at least partially solves the problem that a generated video segment may fail to cover the points users care about or may omit important content. Moreover, the method requires no analysis of the video content itself, saving processing resources.
Fig. 3 schematically shows an implementation process diagram of operation S11 according to an embodiment of the present disclosure, where (a) illustrates a scenario in which a plurality of terminal devices play a target video, and (b) illustrates a user event occurring during the playing of the target video and a statistical table of video positions corresponding to the user event.
According to the embodiment of the present disclosure, the operation S11 of acquiring the user events occurring while the target video is played and their corresponding video positions includes: acquiring the user events generated during at least one playback of the target video, as monitored by M terminal devices, together with the corresponding video positions, where M ≥ 1.
Referring to fig. 3 (a), M is 3 in this example. The wider the statistical range covered by the acquired user events, the more objective the resulting distribution is, and the better it reflects both the video segments users are interested in and the important parts of the video. Under the technical idea of the present disclosure, the statistics of one or more terminals may be selected, and the target video may have been played once or multiple times.
Referring to fig. 3 (b), each of the 3 terminal devices monitors the user events generated during 3 playbacks of the target video and the corresponding video positions; these playbacks may occur within a period of time, such as a day, a week, or a month. In fig. 3, the user events are illustrated by likes, favorites, comments, forwards and downloads. During the three playbacks of the target video on terminal device 1, the user likes at time T3, favorites at time T8, likes at time T5, comments at time T11, and forwards at time T9. During the three playbacks on terminal device 2, the user favorites at time T4, comments at time T1, favorites at time T5, and downloads at time T7. During the three playbacks on terminal device 3, the user downloads at time T6, likes at time T2, forwards at time T10, and favorites at time T5. This information may be stored in the form of a mapping table.
Fig. 4 schematically shows a detailed implementation flowchart of operation S12 according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, as shown in fig. 4, the operation S12 of determining the event weight distribution of the target video according to the type, number and weight of the user behavior in the user event and the video position corresponding to the user event includes the following sub-operations: s121, S122 and S123.
In sub-operation S121, the kind and number of user behaviors are determined for each video position where a user event occurs in the target video.
In sub-operation S122, a total weight of each video position where the user event occurs is determined according to the weights for the different user behavior categories and the number of user behaviors.
In sub-operation S123, an event weight distribution of the target video is determined according to each video position where the user event occurs and the total weight of each video position.
Fig. 5 schematically illustrates an implementation process diagram of operation S12 according to an embodiment of the present disclosure. In this disclosure, the parameter corresponding to the playing position of the video is the time.
From the user events and corresponding video positions acquired in operation S11, sub-operation S121 determines the type and number of user behaviors at each video position. Continuing with the scene illustrated in fig. 3 (b): at playback position T1, all terminal devices together contribute 1 user event, a comment. Similarly, at T2 there is 1 event, a like; at T3, 1 event, a like; at T4, 1 event, a favorite; at T5, 3 events, namely 1 like and 2 favorites; at T6, 1 event, a download; at T7, 1 event, a download; at T8, 1 event, a favorite; at T9, 1 event, a forward; at T10, 1 event, a forward; and at T11, 1 event, a comment.
Fig. 5 illustrates example weights for the different behavior types: the like behavior has weight 3, the favorite behavior weight 4, the comment behavior weight 6, the forward behavior weight 2, and the download behavior weight 2 (forward and download may belong to the same type and thus share a weight). These weights are only examples; in practice their magnitudes may be adjusted, and the relative ordering of the weights across behavior types may also change.
In sub-operation S122, the total weight of each video position where a user event occurs is determined from the per-type weights and the behavior counts. In turn: video position T1 has total weight 6, T2 has 3, T3 has 3, T4 has 4, T5 has 11, T6 has 2, T7 has 2, T8 has 4, T9 has 2, T10 has 2, and T11 has 6.
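The worked example of sub-operations S121 and S122 can be reproduced as a short sketch. The weight values and the per-position event lists mirror the figures' example; the function and variable names are illustrative assumptions, not part of the disclosure.

```python
from collections import Counter

# Example weights per behavior type as illustrated in Fig. 5.
WEIGHTS = {"like": 3, "favorite": 4, "comment": 6, "forward": 2, "download": 2}

# Events aggregated across the three terminal devices of Fig. 3(b),
# keyed by playback position T1..T11.
events_by_position = {
    "T1": ["comment"], "T2": ["like"], "T3": ["like"], "T4": ["favorite"],
    "T5": ["like", "favorite", "favorite"], "T6": ["download"],
    "T7": ["download"], "T8": ["favorite"], "T9": ["forward"],
    "T10": ["forward"], "T11": ["comment"],
}

def total_weight(behaviors):
    """S121/S122: count behaviors per type, then sum count x weight."""
    counts = Counter(behaviors)
    return sum(WEIGHTS[b] * n for b, n in counts.items())

# S123: the event weight distribution of the target video.
distribution = {pos: total_weight(bs) for pos, bs in events_by_position.items()}
assert distribution["T5"] == 11  # 1 like (3) + 2 favorites (2 x 4)
assert distribution["T1"] == 6   # 1 comment (6)
```

Plotting `distribution` over the playback positions yields the event weight distribution curve shown in fig. 5.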
In the sub-operation S123, an event weight distribution of the target video may be determined according to each video position where the user event occurs and the total weight of each video position, and an event weight distribution curve may be shown with reference to fig. 5.
Fig. 6 schematically shows a detailed implementation flowchart of operation S13 according to an embodiment of the present disclosure.
According to the embodiment of the present disclosure, referring to fig. 6, the operation S13 for determining the target video frame segment in the target video for generating the video preview dynamic image according to the event weight distribution and the preview setting time length of the target video includes the following sub-operations: s131, S132, and S133.
In sub-operation S131, a weight distribution maximum value of the target video is determined according to the event weight distribution of the target video.
In sub-operation S132, reference position points are determined according to the video position corresponding to the weight distribution maximum of the target video, where the number of the reference position points is one or more.
In sub-operation S133, a target video frame segment is determined in the target video according to the reference position point and the preview view set time length, the reference position point being within the target video frame segment.
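Under the assumption that the event weight distribution is given as a discrete list of (position, weight) pairs, sub-operations S131 through S133 for a single reference point might look like the following sketch. All names are illustrative, and centering the reference point within the segment is one choice among many (the disclosure notes the reference point's position inside the segment can be flexibly adjusted).

```python
def find_maxima(distribution):
    """S131: local maxima of a discrete weight distribution, given as a
    list of (position_seconds, weight) pairs sorted by position."""
    maxima = []
    for i, (pos, w) in enumerate(distribution):
        left = distribution[i - 1][1] if i > 0 else float("-inf")
        right = distribution[i + 1][1] if i + 1 < len(distribution) else float("-inf")
        if w >= left and w >= right:  # plateaus count as maxima here
            maxima.append((pos, w))
    return maxima

def clip_segment(ref_point, preview_len, video_len):
    """S133 (single reference point): a window of the preview's set
    duration containing the reference point, clamped to the video."""
    start = max(0.0, min(ref_point - preview_len / 2, video_len - preview_len))
    return (start, start + preview_len)

# Illustrative distribution over a 600 s video.
dist = [(10, 2), (60, 6), (120, 3), (480, 11), (550, 4)]
maxima = find_maxima(dist)
ref_point = max(maxima, key=lambda m: m[1])[0]  # S132: pick the largest maximum
assert ref_point == 480
assert clip_segment(ref_point, 15.0, 600.0) == (472.5, 487.5)
```

The clamping in `clip_segment` keeps the cut segment inside the video even when the reference point lies near either end.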
According to the embodiment of the present disclosure, the sub-operation S132 of determining the reference position point according to the video position corresponding to the weight distribution maximum of the target video includes the following sub-operations: S132a and S132b.
In sub-operation S132a, if the target video has a unique weight distribution maximum, the video position corresponding to that unique maximum is determined to be the reference position point.
In sub-operation S132b, if the target video has multiple weight distribution maxima, the reference position point is determined according to the distribution density near the weight distribution maxima of the target video and/or the magnitudes of those maxima.
According to an embodiment of the present disclosure, the above-described sub-operation S132b of determining the reference position point according to the distribution density in the vicinity of the weight distribution maximum of the target video and/or the size of the weight distribution maximum includes one of the following three cases.
In the first case, when there are multiple weight distribution maxima, the video position corresponding to the maximum with the highest nearby distribution density, or the video positions corresponding to the T maxima with the highest densities, are determined as the reference position point(s), where T ≥ 2.
In the second case, when there are multiple weight distribution maxima, the video position corresponding to the maximum with the largest value, or the video positions corresponding to the S maxima with the largest values, are determined as the reference position point(s), where S ≥ 2.
In the third case, when there are multiple weight distribution maxima, the video position corresponding to the single maximum that has both the highest nearby distribution density and the largest value is determined as the reference position point, or the video positions corresponding to the V maxima with relatively high density and relatively large values are determined as the reference position points, where V ≥ 2.
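The second case above (selecting the S largest maxima) can be sketched minimally. The function name, the signature, and the tie handling are assumptions for illustration only.

```python
def pick_reference_points(maxima, s=2):
    """Second case of S132b: the S weight distribution maxima with the
    largest values become reference position points. `maxima` is a list
    of (position, weight) pairs."""
    top = sorted(maxima, key=lambda m: m[1], reverse=True)[:s]
    return sorted(pos for pos, _ in top)  # return in playback order

maxima = [(60, 6), (300, 9), (480, 11)]
assert pick_reference_points(maxima, s=2) == [300, 480]
```

The density-based first case would replace the sort key with a measure of how many other maxima fall within a window around each candidate; the third case combines both criteria.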
Determining the reference position points according to the magnitude and/or nearby distribution density of the weight distribution maxima ensures that the target video frame segment located from those points falls in a hot-spot region users are relatively interested in. The video preview dynamic image generated from that segment therefore matches the interest points of most users while also reflecting important content segments of the video.
Fig. 7 schematically shows a detailed implementation flowchart of sub-operation S133 according to an embodiment of the present disclosure.
According to the embodiment of the present disclosure, the sub-operation S133 for determining the target video frame segment in the target video according to the reference position point and the preview picture setting duration includes the following sub-operations: s1331, S1332 and S1333.
In sub-operation S1331, when there are multiple reference position points, an allocation duration is determined for each of them; the sum of the allocation durations of all reference position points equals the set duration of the preview image.
In sub-operation S1332, an alternative video frame segment corresponding to each reference position point is determined based on that point and its allocated duration.
In sub-operation S1333, the alternative video frame segments corresponding to all the reference position points are spliced to obtain the target video frame segment.
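The three sub-operations above can be sketched as follows, assuming an equal split of the set preview duration across the reference points (the disclosure does not fix a particular allocation scheme); all names are illustrative.

```python
def splice_preview(ref_points, preview_len, video_len):
    """S1331-S1333 sketch: split the preview's set duration evenly across
    the reference points (one possible allocation), cut an alternative
    segment around each point, and return them in playback order."""
    alloc = preview_len / len(ref_points)  # S1331: equal allocation (assumption)
    segments = []
    for p in sorted(ref_points):
        # S1332: window of length `alloc` around the point, clamped to the video.
        start = max(0.0, min(p - alloc / 2, video_len - alloc))
        segments.append((start, start + alloc))
    # S1333: concatenating these segments yields the target video frame segment.
    return segments

segs = splice_preview([120.0, 480.0], preview_len=10.0, video_len=600.0)
assert segs == [(117.5, 122.5), (477.5, 482.5)]
# Total spliced length equals the set preview duration.
assert abs(sum(e - s for s, e in segs) - 10.0) < 1e-9
```

Unequal allocations (e.g. proportional to each maximum's total weight) would only change how `alloc` is computed per point.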
Setting multiple reference position points optimizes the clipping of the target video frame segment: splicing the alternative segments around multiple reference points reflects the combined orientation of users' interest points, so the resulting video preview dynamic image exhibits variety and differentiation and can satisfy users with different preferences.
Fig. 8 schematically illustrates an implementation process diagram of operation S13 according to an embodiment of the present disclosure. This embodiment exemplifies the number of reference position points as one.
Fig. 8 illustrates the set preview duration range. In sub-operation S131, the weight distribution maxima of the target video may be determined from its event weight distribution; in the example of fig. 8, 3 maxima and their corresponding video positions are determined. Then, in sub-operation S132, a reference position point is determined according to the distribution density near the maxima and/or their magnitudes: for example, as shown in fig. 8, the video position corresponding to maximum A, the largest of the maxima, is taken as reference position point A, or the video position corresponding to maximum B, the maximum with the highest nearby distribution density, is taken as reference position point B. Operation S133 is then executed to determine a target video frame segment containing the reference position point, such as target video frame segment A or target video frame segment B illustrated in fig. 8: a video frame segment whose length equals the set preview duration is cut from the target video such that the reference position point falls within it.
The position of the specific reference position point in the target video frame segment can be flexibly adjusted.
Fig. 9 schematically illustrates another implementation process diagram of operation S13 according to an embodiment of the present disclosure. This embodiment exemplifies that the number of reference position points is plural.
Fig. 9 illustrates the set preview duration range. In sub-operation S131, the weight distribution maxima of the target video may be determined from its event weight distribution; in the example of fig. 9, 3 maxima and their corresponding video positions are determined. Then, in sub-operation S132, the reference position points are determined according to the distribution density near the maxima and/or their magnitudes: for example, as illustrated in fig. 9, the video positions corresponding to the 2 maxima with the larger values, C1 and C2, are taken as reference position points C1 and C2. Operation S133 is then executed to determine the target video frame segment according to the reference position points and the set preview duration. In sub-operation S1331, since there are multiple reference position points, an allocation duration is determined for each: in the example of fig. 9, a first allocation duration corresponds to reference position point C1 and a second allocation duration to C2, and the sum of the two equals the set preview duration.
In sub-operation S1332, an alternative video frame segment corresponding to each reference position point, such as alternative video frame segments C1 and C2 illustrated in fig. 9, may be determined based on each reference position point and its allocated duration; in sub-operation S1333, these alternative segments are spliced to obtain the target video frame segment.
In summary, this embodiment provides a method for generating a video preview dynamic image. The event weight distribution of the target video is determined from the user events occurring during playback and their corresponding video positions; because this distribution characterizes the user interest level at each playback position, the target video frame segment determined from it and the set preview duration is a segment users are interested in, and such segments also tend to reflect the core information of the video. The problem that a generated video segment fails to cover users' points of interest or omits important content can thus be at least partially solved.
A second exemplary embodiment of the present disclosure provides an apparatus for generating a video preview dynamic image.
Fig. 10 schematically shows a block diagram of the apparatus for generating a video preview dynamic image according to an embodiment of the present disclosure.
Referring to fig. 10, the apparatus 2 of the present embodiment includes: a user event information acquisition module 21, an event weight distribution determination module 22 and a target video frame segment determination module 23.
The user event information obtaining module 21 is configured to obtain the user events occurring while the target video is played and their corresponding video positions. A user event includes at least one user behavior related to the interestingness of the target video; such behaviors include, but are not limited to: liking, favoriting, commenting, forwarding, and downloading.
The event weight distribution determining module 22 is configured to determine the event weight distribution of the target video according to the type, number and weight of the user behaviors and the video positions corresponding to the user events. The event weight distribution determining module 22 may include sub-functional modules implementing the methods shown in sub-operations S121, S122 and S123.
The target video frame segment determining module 23 is configured to determine a target video frame segment in the target video for generating a video preview dynamic image according to the event weight distribution and the preview setting duration of the target video. The target video frame segment determination module 23 may include respective sub-functional modules for implementing the methods shown in the sub-operations S131, S132, and S133.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the user event information acquisition module 21, the event weight distribution determination module 22, and the target video frame segment determination module 23 may be combined in one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to the embodiment of the present disclosure, at least one of the user event information obtaining module 21, the event weight distribution determining module 22 and the target video frame segment determining module 23 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware and firmware, or implemented by a suitable combination of any several of them. Alternatively, at least one of the user event information acquisition module 21, the event weight distribution determination module 22 and the target video frame segment determination module 23 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
A third exemplary embodiment of the present disclosure provides an electronic apparatus. The electronic device includes: one or more processors; and storage means for storing one or more programs. Wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods described above.
Fig. 11 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 11, the electronic device 3 according to the embodiment of the present disclosure includes a processor 301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. Processor 301 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 301 may also include on-board memory for caching purposes. Processor 301 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the present disclosure.
In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 3 are stored. The processor 301, the ROM302, and the RAM 303 are connected to each other via a bus 304. The processor 301 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM302 and/or the RAM 303. Note that the program may also be stored in one or more memories other than the ROM302 and the RAM 303. The processor 301 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 3 may further comprise an input/output (I/O) interface 305, the input/output (I/O) interface 305 also being connected to the bus 304. The electronic device 3 may further comprise one or more of the following components connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output section 307 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a local area network card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 310 as necessary, so that a computer program read out therefrom is mounted into the storage section 308 as necessary.
A fourth exemplary embodiment of the present disclosure also provides a computer-readable storage medium. The above-described computer-readable storage medium has stored thereon executable instructions that, when executed by a processor, cause the processor to implement any of the methods described above.
The computer-readable storage medium may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM302 and/or RAM 303 and/or one or more memories other than ROM302 and RAM 303 described above.
A fifth exemplary embodiment of the present disclosure also provides a computer program product. The computer program product comprises a computer program containing a program code for executing the method provided by the embodiment of the disclosure, and when the computer program product runs on an electronic device, the program code is used for causing the electronic device to implement the method for generating the video preview dynamic image provided by the embodiment of the disclosure.
For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The computer program, when executed by the processor 301, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted, distributed in the form of a signal on a network medium, downloaded and installed through the communication section 309, and/or installed from the removable medium 311. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the C language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or recombined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, such combinations and recombinations may be made without departing from the spirit or teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A method of generating a video preview dynamic image, comprising:
acquiring a user event occurring during playback of a target video and a video position corresponding to the user event, wherein the user event comprises at least one user behavior related to a degree of interest in the target video;
determining an event weight distribution of the target video according to the type, number and weight of the user behaviors in the user event and the video position corresponding to the user event; and
determining, in the target video, a target video frame segment used for generating a video preview dynamic image according to the event weight distribution of the target video and a set duration of the preview image.
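Outside the claim language, the three steps of claim 1 can be sketched in Python. Everything below is an illustrative assumption rather than the patented implementation: the behavior types, the per-type weights, and the rule of centring the preview on the single heaviest position are invented for the sketch.

```python
from collections import defaultdict

# Illustrative per-behavior weights (assumed values, not taken from the patent).
BEHAVIOR_WEIGHTS = {"like": 3.0, "comment": 2.0, "replay": 1.5, "pause": 1.0}

def event_weight_distribution(events):
    """events: iterable of (behavior_type, video_position_seconds).
    Returns {position: total_weight} -- the event weight distribution."""
    dist = defaultdict(float)
    for behavior, position in events:
        dist[position] += BEHAVIOR_WEIGHTS.get(behavior, 0.0)
    return dict(dist)

def target_segment(events, preview_duration):
    """Pick a segment of `preview_duration` seconds centred on the
    position with the highest total event weight."""
    dist = event_weight_distribution(events)
    peak = max(dist, key=dist.get)          # reference position point
    start = max(0.0, peak - preview_duration / 2)
    return (start, start + preview_duration)
```

For example, two likes and a comment at second 10 outweigh a single pause at second 50, so a 6-second preview would be cut around second 10.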
2. The method of claim 1, wherein acquiring a user event occurring during playback of a target video and a video position corresponding to the user event comprises:
acquiring user events occurring during playback of at least one target video monitored by M terminal devices, and the video positions corresponding to those user events, wherein M ≥ 1.
3. The method of claim 1, wherein determining an event weight distribution of the target video according to the type, number and weight of the user behaviors in the user event and the video position corresponding to the user event comprises:
determining the type and number of user behaviors at each video position of the target video where a user event occurs;
determining a total weight for each video position where a user event occurs, according to the weights assigned to the different user behavior types and the number of user behaviors; and
determining the event weight distribution of the target video from each video position where a user event occurs and the total weight of each such position.
4. The method of claim 1, wherein determining, in the target video, a target video frame segment for generating a video preview dynamic image according to the event weight distribution of the target video and the set duration of the preview image comprises:
determining the weight distribution maxima of the target video according to its event weight distribution;
determining one or more reference position points according to the video positions corresponding to the weight distribution maxima of the target video; and
determining the target video frame segment in the target video according to the reference position points and the set duration of the preview image, wherein each reference position point lies within the target video frame segment.
5. The method of claim 4, wherein determining a reference position point from the video positions corresponding to the weight distribution maxima of the target video comprises:
if the target video has a unique weight distribution maximum, determining the video position corresponding to that maximum as the reference position point; and
if the target video has a plurality of weight distribution maxima, determining the reference position points according to the distribution density near the maxima and/or the magnitudes of the maxima.
6. The method of claim 5, wherein determining a reference position point according to the distribution density near the weight distribution maxima of the target video and/or the magnitudes of the maxima comprises:
when a plurality of weight distribution maxima exist, determining as reference position points the video positions corresponding to the single maximum, or the T maxima, having the highest distribution density among the maxima in the target video, wherein T ≥ 2; or
when a plurality of weight distribution maxima exist, determining as reference position points the video positions corresponding to the single maximum, or the S maxima, having the largest values among the maxima in the target video, wherein S ≥ 2; or
when a plurality of weight distribution maxima exist, determining as the reference position point the video position corresponding to the single maximum having both the highest distribution density and the largest value among the maxima in the target video, or determining as reference position points the video positions corresponding to the V maxima with higher distribution densities and larger values, wherein V ≥ 2.
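As a rough sketch of the selection rules in claim 6, the maxima of a discrete weight distribution can be located and then ranked by value. The data layout (a position-to-weight dict) and the function names are assumptions for illustration, and only the "largest values" branch of the claim is shown.

```python
def local_maxima(dist):
    """dist: {position: weight}. Return the positions whose weight
    strictly exceeds both neighbouring positions."""
    pos = sorted(dist)
    maxima = []
    for i, p in enumerate(pos):
        left = dist[pos[i - 1]] if i > 0 else float("-inf")
        right = dist[pos[i + 1]] if i < len(pos) - 1 else float("-inf")
        if dist[p] > left and dist[p] > right:
            maxima.append(p)
    return maxima

def reference_points_by_value(dist, s=1):
    """Second branch of claim 6: keep the S maxima with the largest
    weights as reference position points."""
    maxima = local_maxima(dist)
    return sorted(maxima, key=lambda p: dist[p], reverse=True)[:s]
```

With a distribution that peaks at seconds 10 and 30, `s=1` keeps only the heavier peak while `s=2` keeps both.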
7. The method of claim 4, wherein determining the target video frame segment in the target video according to the reference position points and the set duration of the preview image comprises:
when there are a plurality of reference position points, determining an allocated duration for each reference position point, wherein the sum of the allocated durations of all reference position points equals the set duration of the preview image;
determining an alternative video frame segment for each reference position point based on that point and its allocated duration; and
splicing the alternative video frame segments of all reference position points to obtain the target video frame segment.
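The splicing step of claim 7 might look like the following sketch, assuming each reference point receives an allocated duration and its alternative segment is centred on the point and clamped to the video bounds; these policies are illustrative assumptions, not the claimed method.

```python
def spliced_segments(ref_points, durations, video_length):
    """ref_points: reference position points (seconds); durations: the
    allocated duration of each point, summing to the preview set
    duration.  Returns one (start, end) alternative segment per point;
    frames from these ranges are concatenated in order to form the
    target video frame segment."""
    assert len(ref_points) == len(durations)
    segments = []
    for point, dur in zip(ref_points, durations):
        # Centre on the point, then clamp so the segment stays in-video.
        start = min(max(0.0, point - dur / 2), video_length - dur)
        segments.append((start, start + dur))
    return segments
```

For a 60-second video with reference points at seconds 10 and 50 and allocated durations of 4 and 2 seconds, this yields two clips totalling the 6-second preview.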
8. An apparatus for generating a video preview dynamic image, comprising:
a user event information acquisition module configured to acquire a user event occurring during playback of a target video and a video position corresponding to the user event, wherein the user event comprises at least one user behavior related to a degree of interest in the target video;
an event weight distribution determining module configured to determine the event weight distribution of the target video according to the type, number and weight of the user behaviors; and
a target video frame segment determining module configured to determine, in the target video, a target video frame segment used for generating a video preview dynamic image according to the event weight distribution of the target video and the set duration of the preview image.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement the method of any one of claims 1-7.
CN202110085485.9A 2021-01-21 2021-01-21 Method, device, electronic equipment and medium for generating video preview dynamic diagram Active CN113784174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110085485.9A CN113784174B (en) 2021-01-21 2021-01-21 Method, device, electronic equipment and medium for generating video preview dynamic diagram


Publications (2)

Publication Number Publication Date
CN113784174A true CN113784174A (en) 2021-12-10
CN113784174B CN113784174B (en) 2024-07-16

Family

ID=78835489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110085485.9A Active CN113784174B (en) 2021-01-21 2021-01-21 Method, device, electronic equipment and medium for generating video preview dynamic diagram

Country Status (1)

Country Link
CN (1) CN113784174B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041356A1 (en) * 2006-03-03 2009-02-12 Koninklijke Philips Electronics N.V. Method and Device for Automatic Generation of Summary of a Plurality of Images
CN106792085A (en) * 2016-12-09 2017-05-31 广州华多网络科技有限公司 A kind of method and apparatus for generating video cover image
US9715901B1 (en) * 2015-06-29 2017-07-25 Twitter, Inc. Video preview generation
US20170243611A1 (en) * 2016-02-19 2017-08-24 AVCR Bilgi Teknolojileri A.S. Method and system for video editing
CN107872724A (en) * 2017-09-26 2018-04-03 五八有限公司 A kind of preview video generation method and device
CN112087665A (en) * 2020-09-17 2020-12-15 掌阅科技股份有限公司 Previewing method of live video, computing equipment and computer storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Boyan Zhang; Zhiyong Wang; Dacheng Tao; Xian-Sheng Hua; David Dagan Feng: "Automatic Preview Frame Selection for Online Videos", 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA) *
Huang Qingming; Zheng Yijia; Jiang Shuqiang; Gao Wen: "Video Highlight Summarization and Ranking Based on User Attention Space and Attention Analysis", Chinese Journal of Computers, no. 09 *


Similar Documents

Publication Publication Date Title
JP2020173778A (en) Method, apparatus, electronic facility, computer readable medium, and computer program for allocating resource
CN104067274A (en) System and method for improving access to search results
CN110619100B (en) Method and apparatus for acquiring data
CN113411400B (en) Information calling method and device, electronic equipment and readable storage medium
CN113505302A (en) Method, device and system for supporting dynamic acquisition of buried point data and electronic equipment
CN108600780B (en) Method for pushing information, electronic device and computer readable medium
US9560110B1 (en) Synchronizing shared content served to a third-party service
US20240220081A1 (en) Template selection method, electronic device and non-transitory computer-readable storage medium
CN116627333A (en) Log caching method and device, electronic equipment and computer readable storage medium
CN117076280A (en) Policy generation method and device, electronic equipment and computer readable storage medium
CN114840379A (en) Log generation method, device, server and storage medium
CN112016280B (en) File editing method and device and computer readable medium
US9910737B2 (en) Implementing change data capture by interpreting published events as a database recovery log
CN116701123A (en) Task early warning method, device, equipment, medium and program product
CN114465919B (en) Network service testing method, system, electronic equipment and storage medium
CN116756016A (en) Multi-browser testing method, device, equipment, medium and program product
CN112306826A (en) Method and apparatus for processing information for terminal
CN113784174B (en) Method, device, electronic equipment and medium for generating video preview dynamic diagram
CN113542185B (en) Method and device for preventing hijacking of page, electronic equipment and storage medium
CN113760315A (en) Method and device for testing system
CN113761343A (en) Information pushing method and device, terminal equipment and storage medium
CN113761433A (en) Service processing method and device
CN112988806A (en) Data processing method and device
CN115333871B (en) Firewall operation and maintenance method and device, electronic equipment and readable storage medium
CN115312208B (en) Method, device, equipment and medium for displaying treatment data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant