WO2016038522A1 - Selection and presentation of representative frames for video previews - Google Patents

Selection and presentation of representative frames for video previews

Info

Publication number
WO2016038522A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
frame
frames
semantic
representative
Prior art date
Application number
PCT/IB2015/056783
Other languages
English (en)
Inventor
Sanketh Shetty
Tomas IZO
Min-Hsuan Tsai
Sudheendra Vijayanarasimhan
Apostol Natsev
Sami Abu-El-Haija
George Toderici
Susanna Ricco
Balakrishnan Varadarajan
Nicola MUSCETTOLA
Weihsin Gu
Weilong Yang
Nitin Khandelwal
Phuong Le
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc. filed Critical Google Inc.
Priority to EP15839919.6A priority Critical patent/EP3192273A4/fr
Priority to CN201580034616.3A priority patent/CN107077595A/zh
Publication of WO2016038522A1 publication Critical patent/WO2016038522A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47Detecting features for summarising video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7834Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using audio features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/10Recognition assisted with metadata

Definitions

  • This disclosure generally relates to presenting representative video summaries to a user, and specifically to selecting representative video summaries using semantic features.
  • Video hosting systems store and serve videos to client devices. As these video hosting systems become increasingly popular, the video hosting systems increasingly store longer-form videos, sometimes exceeding several hours in length. These longer-form videos may show a wide variety of topics and settings and depict many different scenes and objects within the video. For example, a wildlife video titled "Animals of the Serengeti" may show many different animals, such as lions, gazelles, elephants, and hyenas. These animals may be shown in a wide variety of settings, such as when grazing, migrating, or during a chase. When users browse videos, the video hosting service provides some portion of a video as a preview of the video, such as a single frame from the beginning of the video.
  • However, this preview typically fails to accurately represent the full content of the video, and a user is not able to quickly determine whether a particular video has desired content without watching the video itself.
  • For example, this preview may show a frame of a lion resting, but the user would not be able to determine that the video also includes migrating gazelles without watching the video.
  • a video hosting service presents representative frames from a video to a user in a preview of the video. This permits a user to receive additional context about the video and determine whether to select that video to view.
  • the video hosting service analyzes videos received by the video hosting service to generate features describing individual frames within a video. Such features include low-level information describing the frame, such as color, motion, and audio features, as well as semantic features predicting the presence of various concepts within the frame. Such concepts identified in the frame include, for example, that the frame includes a particular type of object ("lion") or an action ("hunt").
  • the video hosting service identifies segments within the video based on the features of the video. Each segment identifies a portion of consecutive frames of the video that are to be summarized together. In one embodiment each segment is determined by identifying shot boundaries in the video. After identifying a set of segments, the video hosting system analyzes each segment and identifies a representative frame that may be used to summarize that segment to a user. To identify a representative frame, the video hosting system determines which semantic concepts are within the segment and scores each frame within the segment according to a likelihood it contains semantic concepts of the segment. In one embodiment, a score combines scores from multiple semantic concepts of the frame, which may permit frames that include multiple concepts of the segment to receive a higher score than frames that include a single concept of the segment.
  • the score for each frame may also include an aesthetic score for the frame indicating its photo quality.
  • the frame in a segment with the highest score is selected as the representative frame for the segment.
  • the photo quality may be measured by sharpness, contrast, and so forth.
  • the semantic score is combined with the aesthetic score to determine a total score for the frame.
  • the frame in a segment with the highest total score is selected as a representative frame for that segment.
  • segments of a video are identified by one or more different segmenting techniques.
  • the segments identified by each technique are termed a segment set, and the segments in the segment sets may be overlapping portions of the original video.
  • the video may be segmented in multiple different ways by the various segment sets.
  • a representative frame for each of the segments of each segment set is determined.
  • the video hosting system identifies representative frames for the video based on the techniques used for segmenting the video, increasing the likelihood that the representative frames capture alternative portions of the video.
  • the segments and associated representative frames are stored as entries in a segment table. The entries indicate the portion of the video of the segment, the representative frame of the segment, and the concepts associated with that representative frame.
  • the video hosting system receives a request to summarize a video.
  • the request to summarize the video may be based on a user browsing videos in the video hosting system, or may be based on a search query associated with the request.
  • the video hosting system identifies segments in the segment table that are relevant to the request by comparing the semantic concepts of the segments with semantic concepts associated with the request.
  • the semantic concepts associated with the request are determined by analysis of a search query, user interest information, or by identifying semantic concepts associated with metadata of the video. When there has not been a search, in some embodiments all segments in the segment table are treated as relevant.
  • a set of representative segments is selected.
  • One or more representative segments can be selected.
  • the relevant segments are scored based on the match between the relevant segment and the semantic concepts associated with the query.
  • a set of representative segments is selected from among the relevant segments to summarize the video.
  • the video hosting system selects segments that have the highest score and that reflect diversity among the semantic concepts associated with the selected segments.
  • the representative frames associated with the selected segments are used to generate a summary of the video.
  • the summary chronologically combines the representative frames and presents a series of the frames to the user.
  • the video summary is provided to the user who may determine whether to view the entire video.
  • the user may also be presented with indications of relevant segments or representative frames while the video plays.
  • the representative frames may be selected in various ways, such as by matching the search query or user profile to the semantic concepts in the segments of the viewed video.
  • representative frames for additional segments are shown adjacent to the video that is being played.
  • one or more markers are displayed on a timeline of the video that is being played. These markers indicate when the representative frames for various segments occur.
  • Because the representative frames are associated with relevant segments and are selected based on a user's context (e.g., the profile or search query), the representative frames are likely to indicate frames of particular interest to the user. By displaying these frames while playing the video, the video hosting system permits a user to easily identify portions of the video that are particularly likely to be of interest without manually seeking through the video.
  • FIG. 1 is a block diagram of an example video hosting service in which video previews are generated using semantic features of video segments, according to one embodiment.
  • FIG. 2 illustrates the segmentation of a video and selection of a representative frame, according to one embodiment.
  • FIG. 3 illustrates the generation of a segment table indicating representative frames for segments of a video, according to one embodiment.
  • FIG. 4 illustrates a method for identifying representative frames, according to one embodiment.
  • FIG. 5 shows a method for selecting representative frames from a segment table for display to a user, according to one embodiment.
  • FIG. 6 shows a video preview interface including representative frames of a video, according to one embodiment.
  • FIG. 7A shows another video preview interface including representative frames of a video, according to one embodiment.
  • FIGS. 7B-7D show further interfaces for presenting representative frames of a video, according to various embodiments.
  • FIG. 8 shows an interface for providing a representative frame within a player interface, according to one embodiment.
  • FIG. 9 shows an interface for providing representative frames for a video with a player interface, according to one embodiment.
  • FIG. 1 is a block diagram of a video hosting service 100 in which video previews are generated using semantic features of video segments.
  • a video preview is a portion of a video, such as a frame, set of frames, animation, or other summary of the video that may be displayed to the user for the user to determine the content of the video. The user may use the preview to determine whether to request the video to view.
  • the video hosting service 100 stores and provides videos to clients such as the client device 135.
  • The video hosting site 100 communicates with a plurality of content providers 130 and client devices 135 via a network 140 to facilitate sharing of video content between users. In FIG. 1, for the sake of clarity only one instance of content provider 130 and client 135 is shown, though there could be any number of each.
  • the video hosting service 100 includes a front end interface 102, a video serving module 104, a video search module 106, an upload server 108, a user database 114, a video repository 116, and a feature repository 118.
  • The video hosting service 100 also includes components for selecting and serving representative previews of a video, such as the feature extraction module 120, video segmentation module 122, frame selection module 124, and video summary module 126.
  • Other conventional features of the video hosting service 100 such as firewalls, load balancers, authentication servers, application servers, failover servers, and site management tools are not shown so as to more clearly illustrate the features of the video hosting site 100.
  • the illustrated components of the video hosting website 100 can be implemented as single or multiple components of software or hardware. In general, functions described in one embodiment as being performed by one component can also be performed by other components in other embodiments, or by a combination of components. Furthermore, functions described in one embodiment as being performed by components of the video hosting website 100 can also be performed by one or more client devices 135 in other embodiments if appropriate.
  • Client devices 135 are computing devices that execute client software, e.g., a web browser or built-in client application, to connect to the front end interface 102 of the video hosting service 100 via a network 140 and to display videos.
  • Client devices 135 used in these embodiments include, for example, a personal computer, a personal digital assistant, a cellular, mobile, or smart phone, or a laptop computer.
  • the network 140 is typically the Internet, but may be any network, including but not limited to a LAN, a MAN, a WAN, a mobile wired or wireless network, a cloud computing network, a private network, or a virtual private network.
  • Client device 135 may comprise a personal computer or other network-capable device such as a personal digital assistant (PDA), a mobile telephone, a pager, a television “set-top box,” and the like.
  • the content provider 130 provides video content to the video hosting service 100 and the client 135 views that content.
  • content providers may also be content viewers. Additionally, the content provider 130 may be the same entity that operates the video hosting site 100.
  • the content provider 130 operates a client device to perform various content provider functions.
  • Content provider functions may include, for example, uploading a video file to the video hosting website 100, editing a video file stored by the video hosting website 100, or editing content provider preferences associated with a video file.
  • the client device 135 is a device operating to view video content stored by the video hosting site 100. Client device 135 may also be used to configure viewer preferences related to video content. In some embodiments, the client device 135 includes an embedded video player such as, for example, the FLASH player from Adobe Systems, Inc. or any other player adapted for the video file formats used in the video hosting website 100.
  • The terms client and content provider as used herein may refer to software providing both client and content providing functionality, or to hardware on which the software executes.
  • a “content provider” also includes the entities operating the software and/or hardware, as is apparent from the context in which the terms are used.
  • The upload server 108 of the video hosting service 100 receives video content from client devices 135. Received content is stored in the video repository 116. In response to requests from client devices 135, a video serving module 104 provides video data from the video repository 116 to the client devices 135. Client devices 135 may also search for videos of interest stored in the video repository 116 using a video search module 106, such as by entering textual queries containing keywords of interest. The video search module 106 may request a preview of any videos in the search results from the video summary module 126 as further described herein.
  • Front end interface 102 provides the interface between client 135 and the various components of the video hosting site 100. In particular, the front end interface 102 provides a video preview interface to a user to permit a user to review videos in a summary format prior to viewing an interface displaying the full video itself.
  • the user database 114 is responsible for maintaining a record of all registered users of the video hosting server 100.
  • Registered users include content providers 130 and/or users who simply view videos on the video hosting website 100.
  • Each content provider 130 and/or individual user registers account information including login name, electronic mail (e-mail) address and password with the video hosting server 100, and is provided with a unique user ID.
  • This account information is stored in the user database 114.
  • the user database 114 may also store user interests associated with users. The user interests may be determined by prior videos viewed by the user or by interests entered by the user, or by user activity on other sites besides the video hosting service 100.
  • the video repository 116 contains a set of videos 117 submitted by users.
  • the video repository 116 can contain any number of videos 117, such as tens of thousands or hundreds of millions.
  • Each of the videos 117 has a unique video identifier that distinguishes it from each of the other videos, such as a textual name (e.g., the string "a91qrx8"), an integer, or any other way of uniquely naming a video.
  • the videos 117 can be packaged in various containers such as AVI, MP4, or MOV, and can be encoded using video codecs such as MPEG-2, MPEG-4, WebM, WMV, H.263, H.264, and the like.
  • the videos 117 further have associated metadata 117A, e.g., textual metadata such as a title, description, and/or tags.
  • the video metadata 117A also stores a segment table maintaining an identification of segments of the video. Each segment indicates a set of sequential frames that belong to the same shot of video. The segments are also stored in the segment table with an indication of the start and stop time of the segment, in addition to a representative frame of the segment.
  • the representative frame is a frame from the segment that was selected to be displayed to summarize the segment in a preview. For example, the segment may be identified as beginning at 4:25 and ending at 8:05, with an identified representative frame of 4:45. When this segment is used to summarize a video, the representative frame of 4:45 is used to summarize that segment as further described herein.
  • each segment in the segment table is identified as including one or more semantic concepts.
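  • As a concrete illustration of the segment table just described, the sketch below shows one plausible in-memory layout; the field names and example values are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SegmentEntry:
    """One hypothetical row of the segment table described above."""
    start_time: float            # segment start in seconds (4:25 -> 265.0)
    end_time: float              # segment end in seconds (8:05 -> 485.0)
    rep_frame_time: float        # representative frame timestamp (4:45 -> 285.0)
    concepts: List[str] = field(default_factory=list)  # concepts of the representative frame

# Example entry matching the 4:25-8:05 segment used in the text above.
segment_table = [SegmentEntry(265.0, 485.0, 285.0, ["lion"])]
```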
  • a features repository 118 stores, for videos of the video repository 116, associated sets of features that characterize the videos with respect to one or more types of visual or audio information, such as color, motion, and audio information.
  • the features of a video 117 are distinct from the raw content of the video itself and are derived from it by a feature extraction module 120.
  • the features are stored as a vector of values, the vector having the same dimensions for each of the videos 117 for purposes of consistency.
  • the features extracted using the feature extraction module 120 in one embodiment are visual low-level frame-based features.
  • one embodiment uses a color histogram, histogram of oriented gradients, color-differencing with adjacent frames, motion features, and feature tracking, though other frame-based features can be used.
  • the features extracted are collected on a per-frame basis and could comprise other frame-based features such as an identified number of faces or a histogram of oriented optical flow, and may comprise a combination of extracted features.
  • Other features that may be used include a Laplacian-of-Gaussian (LoG) or Scale Invariant Feature Transform (SIFT) feature extractor, a color histogram computed using hue and saturation in HSV color space, motion rigidity features, texture features, filter responses (e.g., derived from Gabor wavelets), including 3D filter responses, edge features using edges detected by a Canny edge detector, a gradient location and orientation histogram (GLOH), a local energy-based shape histogram (LESH), or speeded-up robust features (SURF).
  • Additional audio features can also be used, such as volume, an audio spectrogram, speech-no-speech indicators, or a stabilized auditory image.
  • the features may also include intermediate layer outputs of a deep neural network trained for a variety of image and video recognition, classification, or ranking tasks.
  • the features are reduced.
  • the feature reduction is performed in one embodiment using a learned linear projection using principal component analysis to reduce the dimensionality of the feature vectors to 50, or some other suitable number less than 100.
  • Other embodiments can use additional techniques to reduce the number of dimensions in the feature vectors when desired.
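  • As a minimal sketch of the reduction step, the snippet below uses scikit-learn's PCA to learn a linear projection down to 50 dimensions; the input array and its raw dimensionality are placeholders, since the patent does not fix the size of the extracted feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: one row of concatenated low-level features per frame.
rng = np.random.default_rng(0)
frame_features = rng.random((5000, 512))

# Learned linear projection via principal component analysis, reducing
# the feature vectors to 50 dimensions as in the embodiment above.
pca = PCA(n_components=50)
reduced = pca.fit_transform(frame_features)
print(reduced.shape)  # (5000, 50)
```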
  • the feature extraction module 120 may also include a plurality of semantic classifiers to determine semantic features relating to a set of semantic concepts.
  • A semantic concept is a label assigned to the content of a video or frame, and may correspond to an entity, such as "dog" or "cat", or to free text, such as "dog chasing cat."
  • the set of semantic concepts varies by implementation, and may include, for example, 25,000 concepts.
  • the semantic classifiers are computer models that receive a designation of a frame and features thereof and output a likelihood that the frame is relevant to or depicts a particular semantic concept. For example, a semantic classifier for the semantic concept "dog” determines a likelihood that the frame contains the semantic concept "dog.” The likelihood may be determined within a range, for example between 0 and 1.
  • This likelihood that the frame contains the semantic concept is stored as a semantic feature of the frame.
  • Each semantic concept is associated with a semantic classifier, and the feature extraction module 120 applies the semantic classifiers to determine semantic features for the set of semantic concepts.
  • a set of semantic features is generated for each of the semantic concepts using the semantic classifiers, and the set of semantic features is associated with each frame in the video and stored in feature repository 118.
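  • A minimal sketch of applying per-concept semantic classifiers to a single frame follows; it assumes scikit-learn-style models exposing predict_proba, and the helper name and dictionary layout are illustrative rather than the patent's actual interface.

```python
import numpy as np

def semantic_features(frame_vec, classifiers):
    """Return {concept: likelihood in [0, 1]} for one frame.

    classifiers: dict mapping a concept label (e.g., "dog") to a trained
    binary model exposing predict_proba, one classifier per concept.
    """
    x = np.asarray(frame_vec).reshape(1, -1)
    # Each classifier outputs the likelihood that the frame depicts its concept.
    return {concept: float(clf.predict_proba(x)[0, 1])
            for concept, clf in classifiers.items()}
```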
  • Semantic classifiers may also be used to determine the semantic concepts present in a video as a whole or for a particular segment or portion of a video.
  • the semantic classifiers are trained by a classifier training module (not shown) that trains a semantic classifier using supervised data (e.g., a specific human designation that a frame or video belongs to the semantic concept) or by inferring labels from data associated with the video (e.g., metadata of the video).
  • the video segmentation module 122 identifies segments of the video. To identify segments in the video, the video segmentation module 122 analyzes the visual and audio features of the frames in the video. The video segmentation module 122 may apply one or a combination of different techniques for determining shot boundaries within a video. In some embodiments, multiple of these methods are applied to identify more than one set of segments in the video.
  • the video segmentation module 122 may use classifiers to identify video segments.
  • the classifier is trained using labeled shot boundaries as a positive feature set and frames near the boundary as a hard-negative training set.
  • the features of a frame analyzed by this classifier may include color differences with adjacent frames, motion features, audio volume, and audio speech detection.
  • the video segmentation module 122 applies the classifier to frames of the video to determine whether each frame is a shot boundary.
  • the video segmentation module 122 identifies segments of videos by using coherence of the frame features.
  • the coherence measures similarity of features in a predetermined temporal segment.
  • the predetermined temporal segment is a short segment of video for measuring similarities between the frames.
  • This similarity provides a distance measure to an unsupervised clustering / segmentation algorithm, such as agglomerative clustering, affinity propagation, or spectral clustering.
  • the output of this algorithm identifies segments of the video.
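  • The snippet below sketches this coherence-based segmentation under stated assumptions: frames are clustered on their reduced feature vectors with an agglomerative algorithm, and a tridiagonal connectivity matrix restricts merges to temporal neighbors so clusters stay contiguous; taking the number of segments as a parameter is a simplification for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def segment_by_coherence(frame_feats, n_segments):
    """Assign each frame to a temporally contiguous segment (illustrative).

    frame_feats: (num_frames, dims) array of reduced per-frame features.
    """
    n = len(frame_feats)
    # Only adjacent frames are connected, so clusters grow along the
    # timeline, approximating coherence within short temporal windows.
    connectivity = np.eye(n, k=1) + np.eye(n, k=-1)
    clustering = AgglomerativeClustering(
        n_clusters=n_segments, linkage="ward", connectivity=connectivity)
    return clustering.fit_predict(frame_feats)
```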
  • the video segmentation module 122 may identify video segments by tracking visual features across frames.
  • the video segmentation module 122 identifies a frame as a segment boundary when more than a threshold number or fraction of the tracked features change between that frame and the adjacent frame.
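  • For this feature-tracking variant, the boundary test might be sketched as follows, where a cut is declared when more than a threshold fraction of tracked features disappear between adjacent frames; representing tracked feature IDs as sets is an assumption for illustration.

```python
def boundaries_by_feature_change(tracked, threshold=0.5):
    """Return frame indices where more than `threshold` of the features
    tracked in the previous frame are lost (illustrative sketch)."""
    cuts = []
    for i in range(1, len(tracked)):
        prev, cur = tracked[i - 1], tracked[i]
        if prev and len(prev - cur) / len(prev) > threshold:
            cuts.append(i)
    return cuts

# tracked[i] holds IDs of visual features visible in frame i.
print(boundaries_by_feature_change([{1, 2, 3}, {1, 2, 3}, {7, 8}]))  # [2]
```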
  • the video segmentation module 122 may use one or combination of the techniques described above to identify video segments. Subsequently, the video segmentation module 122 provides the identified segments to the frame selection module 124.
  • the frame selection module 124 identifies, for each video segment, a representative frame to represent and summarize the video segment.
  • the representative frame is a frame that is most representative of the concepts in the video segment.
  • the frame selection module 124 scores the frames of the segment according to the semantic features of the frames and compares the semantic features of the frames to those of the video segment.
  • the frame selection module 124 may also generate an aesthetic score associated with the frames and generate a combined score for a frame.
  • the combined score for a frame accounts for the semantic score and the aesthetic score. From among the combined scores of the frames for a segment, the frame selection module 124 selects the frame with the highest score as the representative frame for the video segment.
  • the frame selection module 124 identifies the semantic concepts present in the video segment by identifying semantic concepts in each frame. Semantic concepts in a frame are added to a set of semantic concepts for the video segment when the semantic feature for the concept in a frame is higher than a threshold, such as 40, 50, or 60% likelihood of the semantic concept being present in the frame. For each of the semantic concepts identified in the segment, the frame selection module 124 determines a score for that concept in the frame by determining the amount that the concept is present in the frame compared to a reference value.
  • the reference value may be the mean, median, minimum, or maximum value of the concept's semantic feature across the frames of the segment, or may be zero.
  • the frame selection module 124 sums the scores for each concept to generate a semantic score for each frame. By summing the scores for each concept present in the segment, a frame that includes multiple concepts in the segment is more likely to be selected as the representative frame for the segment. For example, a segment that depicts a lion chasing a gazelle includes some frames depicting only the lion, some depicting only the gazelle, and some depicting a combination of the lion and gazelle. In this example, the frames depicting both the lion and the gazelle receive a semantic score that accounts for the presence of both the lion and gazelle.
  • In one embodiment, calculating the semantic score for a frame uses a linear combination over the semantic concepts of the segment, weighting the likelihood of each concept in the frame by how salient that concept is to the segment. The semantic score S for a frame f is determined according to Equation (1):

    S(f) = Σ_c concept_segment(c) × likelihood(c, f)    (1)

where the sum runs over each semantic concept c in the segment, concept_segment(c) indicates how salient the concept is to the segment (e.g., a mean likelihood over all frames in the segment), and likelihood(c, f) is the likelihood of the semantic concept c in the frame f (the concept score for this particular frame).
  • The semantic score S thus sums, for each semantic concept in the segment, the prevalence of the semantic concept in the segment multiplied by the likelihood of the semantic concept in the frame. Accordingly, the semantic score for a frame emphasizes frames whose semantic concepts (represented by likelihood(c, f)) are prevalent throughout the video segment (represented by concept_segment(c)).
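  • A direct transcription of Equation (1) into code is sketched below; it also builds the segment's concept set using the presence threshold described earlier, and uses the mean likelihood over the segment's frames as concept_segment(c). The data layout and names are illustrative.

```python
import numpy as np

def semantic_score(frame_likelihoods, segment_frames, presence_threshold=0.5):
    """Equation (1): S(f) = sum_c concept_segment(c) * likelihood(c, f).

    frame_likelihoods: {concept: likelihood} for the frame f being scored.
    segment_frames: list of such dicts, one per frame in the segment.
    """
    # Concepts of the segment: those whose likelihood exceeds the presence
    # threshold (e.g., 40-60% as described above) in at least one frame.
    concepts = {c for frame in segment_frames
                for c, p in frame.items() if p > presence_threshold}
    score = 0.0
    for c in concepts:
        # concept_segment(c): mean likelihood of c over the segment's frames.
        salience = float(np.mean([frame.get(c, 0.0) for frame in segment_frames]))
        score += salience * frame_likelihoods.get(c, 0.0)
    return score
```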
  • the scoring also includes aesthetic scores to assist in selection of a representative frame that is also aesthetically pleasing.
  • the aesthetic score is determined for each frame using individual qualities, such as the amount of motion, sharpness, distance from the segment boundary (e.g., the first and last frame of the segment), and photo quality.
  • Each of these aesthetic qualities is combined to determine an aesthetic score for the frame, and may be combined using a machine learned model, by summation, or by another means.
  • the frame selection module 124 combines the semantic score and aesthetic score to generate a combined score for each frame which is used to identify the frame selected as representative for the segment.
  • the scores may be normalized prior to combination, and the combination may be based on a computer-learned model, or may be a summation of scores.
  • a function may be computed for the semantic and aesthetic scores, for example the average, maximum, minimum, noisy-or, or k-noisy-or. These functions can be computed on normalized or unnormalized values of the signals.
  • The normalization (e.g., mapping scores to 0-1) may be applied to the semantic and aesthetic scores before they are combined.
  • the frame selection module 124 determines the combined score in one embodiment by applying a computer-learned model that receives the aesthetic score and semantic score as inputs.
  • the computer-learned model may be trained in various ways, for example using pairwise data (frame x is better than frame y) or using regression (frame x has score s).
  • the model may also be applied to scores that have not been normalized as described above.
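  • One way the combination and selection steps could look in code is sketched below, assuming both scores have already been normalized to 0-1; the summation and noisy-or variants mirror the functions listed above, and the final selection is the argmax over the segment's frames described next.

```python
import numpy as np

def combined_score(semantic, aesthetic, mode="sum"):
    """Combine normalized semantic and aesthetic scores (illustrative)."""
    if mode == "sum":
        return semantic + aesthetic
    if mode == "noisy_or":
        return 1.0 - (1.0 - semantic) * (1.0 - aesthetic)
    if mode == "max":
        return max(semantic, aesthetic)
    raise ValueError(f"unknown mode: {mode}")

def pick_representative(semantic_scores, aesthetic_scores):
    """Index of the frame with the highest combined score in a segment."""
    combined = [combined_score(s, a)
                for s, a in zip(semantic_scores, aesthetic_scores)]
    return int(np.argmax(combined))
```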
  • After determination of the combined score for each frame in the segment, the frame selection module 124 ranks the frames in the segment according to the combined scores and selects the highest-ranked frame (i.e., the frame with the highest combined score) as the representative frame for that segment.
  • In one embodiment, the frame selection module 124 selects a representative frame using only the highest semantic score from among the frames.
  • the frame selection module 124 may also select representative audio for the frame using similar techniques and select a portion of audio spanning several frames.
  • the representative audio may be selected from the audio at the frames surrounding the selected representative frame.
  • After the frame selection module 124 selects a representative frame, the representative frame is stored with the segment designation in a segment table associated with the video.
  • the semantic concepts associated with the representative frame may also be stored in the segment table.
  • the frame selection module 124 receives multiple sets of segments from the video segmentation module 122. The multiple sets of segments are determined by using different methods of segmenting the video. Each of these sets of segments may be stored with the segment table and with an associated representative frame for each segment.
  • the representative frame selection and video segmentation is performed by the frame selection module 124 and video segmentation module 122 prior to a video being provided to a client device 135 for viewing.
  • the representative frames may be identified when a new video is received by the upload server 108.
  • the segment table is available to identify representative frames for display prior to user requests.
  • FIG. 2 illustrates the segmentation of a video and selection of a representative frame, according to one embodiment.
  • the segmentation and selection of a representative frame is performed as described above by the components of the video hosting service 100.
  • Video 200 is segmented into a set of segments 210 by the video segmentation module 122.
  • Each of the segments includes a chronological set of frames 220, shown here as frames F1-F7.
  • Each of the frames is associated with a set of semantic features identified by the feature extraction module 120.
  • the illustrated segment is a segment showing a lion chasing a gazelle.
  • the frames first depict a lion; at frames F3 and F4 a gazelle is shown; the lion begins chasing the gazelle at F5; both are in-frame and identified in F6; and the lion alone is identified in F7.
  • these semantic features identify a likelihood of a semantic concept being present in a frame, and while displayed here as "present,” the semantic concepts may only indicate that a particular concept, e.g., "lion” is likely or highly likely present in a frame or may include a floating point likelihood or probability of the concept occurring in the frame.
  • the frame selection module 124 selects frame F6 as the representative frame in this segment. When scoring the frames, the frame selection module 124 identifies that the semantic concepts associated with the segment are "lion" and "gazelle."
  • Frame F6 receives a score for each concept and a total semantic score accounting for each. After optionally generating a combined score accounting for an aesthetic score, frame F6 is selected as the representative frame 230. In practice, multiple frames are likely to include the concepts "lion" and "gazelle."
  • Incorporating the aesthetic score may assist in identifying which of these frames is aesthetically most pleasing to a user.
  • FIG. 3 illustrates the generation of a segment table indicating representative frames for video segments of a video according to one embodiment.
  • a video 300 includes a variety of animals.
  • the video is analyzed by the video segmentation module 122 using several methods of identifying video segments, which yields identified video segment sets 310A-C.
  • a representative frame 315 is identified by the frame selection module 124 as described above. Since the various methods of segmentation may identify different boundaries within the video 300, different representative frames may be selected for the various segments, as shown.
  • the segments and representative frames are stored in a segment table 320, which identifies the segments, a representative frame for each segment, and a set of semantic concepts associated with the representative frame.
  • FIG. 4 illustrates a method for identifying representative frames according to one embodiment. This method is performed by the feature extraction module 120, video segmentation module 122, and frame selection module 124 in the embodiment described with respect to FIG. 1.
  • a video is received 400 for identification of representative frames.
  • the video may be received for identification of representative frames responsive to the video being uploaded to the video hosting service 100, or may be received at another time after upload.
  • Features are identified 410 for the video as described above, including frame-based features and semantic features identifying semantic concepts present in the frame.
  • the semantic features may be determined from the frame-based features, for example, by applying semantic classifiers to frame-based features identified for a frame to determine one or more semantic features for the frame.
  • the video features are analyzed by the video segmentation module 122 to generate video segments 420, which may include multiple sets of segments as determined by multiple video segmentation methods.
  • the frames in the segment are scored 430 to generate semantic scores.
  • the semantic score includes a combined score incorporating an aesthetic score of the frame.
  • a representative frame for a segment is selected 440 as the frame with the highest score.
  • the identified segments and representative frames may be added to a segment table for the video.
  • the video summary module 126 uses the segment table to generate a preview of a video for a user.
  • the video preview is used to generate a storyboard of a video to depict representative frames of the video, and may select representative frames that are related to a search query provided by a user or related to interests of a user.
  • the video summary module 126 receives a request to generate a preview of a video.
  • the request may be provided from the front end interface when a user browses videos on the video hosting service 100, or may be provided from the video search module 106 to generate a preview for results of a search for a video.
  • the request to generate a preview indicates a video for which to generate a preview, and may include a search query or an identification of a requesting user in the user database 114.
  • After receiving a request to summarize the video, the video summary module 126 identifies segments of the video relevant to the request. When no search query is received, all segments may be considered relevant. Alternatively, the metadata associated with the video, e.g., the title and any tags associated with the video, may be selected as relevance terms for determining the relevance of the segments and representative frames.
  • the search query is translated into relevance terms, which are used to analyze the video and identify which semantic concepts are described by the search query.
  • an identified requesting user may be associated with interests in the user database 114.
  • the various relevance terms are translated into semantic concepts to determine relevance of segments of the video.
  • the translated relevance terms are compared to the semantic concepts associated with the representative frames of the segments of the video.
  • the video summary module 126 identifies representative frames including concepts that match the semantic concepts of the relevance terms as relevant segments and uses these segments as potential segments to generate a preview of the video.
  • After identifying relevant segments of the video, the video summary module 126 identifies which relevant segments (and representative frames) will be used to generate a preview of the video. To select representative frames for the preview, the video summary module 126 generates a relevance score for the representative frame of each segment. The relevance score is calculated by matching the semantic features of the metadata, query, or user interests against the semantic features of the representative frame. The relevance scores are ranked, and the representative frame with the highest relevance score is selected.
  • the semantic concepts of the selected frames can be used in selection of other representative frames. In one application, the selection of frames emphasizes diversity of semantic concepts among the selected frames. For example, frames with semantic concepts different from those of already-selected frames may be preferred over frames with similar semantic concepts. A designated number of representative frames are selected to represent the video, such as 3 or 5.
  • the selected frames may also be chronologically organized for display to the user.
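  • A greedy selection that trades relevance against concept diversity, in the spirit of the description above, might be sketched as follows; the two-pass policy and the candidate tuple layout are assumptions, since the patent leaves the exact selection rule open.

```python
def select_preview_frames(candidates, k=3):
    """Pick up to k representative frames for the preview (illustrative).

    candidates: list of (relevance_score, frame_time, concepts) tuples,
    one per relevant segment's representative frame.
    """
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    chosen, seen = [], set()
    # First pass: highest-scored frames that introduce a new concept.
    for cand in ranked:
        if len(chosen) < k and not set(cand[2]) <= seen:
            chosen.append(cand)
            seen |= set(cand[2])
    # Second pass: top up with the best remaining frames if needed.
    for cand in ranked:
        if len(chosen) >= k:
            break
        if cand not in chosen:
            chosen.append(cand)
    # Chronological order for display, as described above.
    return sorted(chosen, key=lambda c: c[1])
```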
  • the video summary module 126 generates a video summary by generating an animation using the selected representative frames.
  • the animation provides a brief overview to the user of the representative frames for the video and permits the user to quickly determine whether the user is interested in the video.
  • the video summary module also determines whether to replace a default thumbnail for a video based on the selected representative frame.
  • Each video may be associated with a default thumbnail, which may be designated by a user uploading the video or may be selected based on semantic or aesthetic features of the video.
  • the video summary module 126 determines whether to replace the default thumbnail in some embodiments by comparing a relevance score of the selected representative frames to a relevance score calculated with respect to the default thumbnail. The relevance scores may be calculated with respect to the video metadata, search query, or user interests as described above. When the representative frame relevance score is higher than the default thumbnail by a threshold value, the representative frame is selected as a replacement thumbnail for display.
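  • The replacement decision just described reduces to a threshold comparison; a minimal sketch follows, with the margin value being illustrative.

```python
def should_replace_thumbnail(default_relevance, frame_relevance, margin=0.1):
    """True when the representative frame's relevance exceeds the default
    thumbnail's relevance by the threshold margin (margin is illustrative)."""
    return frame_relevance > default_relevance + margin
```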
  • In some embodiments, the query terms and user interests are incorporated into the scoring for the selection of representative frames for the preview, increasing the scores of representative frames that match the semantic concepts of the query or user interests, but they do not affect which segments are selected as relevant.
  • FIG. 5 shows a method for selecting representative frames from a segment table for display to a user according to one embodiment.
  • this method is performed by the video summary module 126.
  • a request is received 500 to summarize a video and provide a preview to a user.
  • the request may designate a search query and/or a user requesting the video.
  • segments that are relevant to the request are identified 510 based on the search query, user interests of the user requesting the video, or metadata associated with the video.
  • the segments may be identified by comparing the semantic concepts associated with the segments to the semantic concepts associated with the request.
  • the semantic concepts associated with the segments may be identified from a segment table including segments of videos, representative frames for the segments, and associated semantic concepts for the segments.
  • the semantic concepts associated with the request may be determined by analyzing the search query or user interest information, or by identifying semantic concepts associated with metadata of the video.
  • Representative segments are selected 520 from the segments determined to be relevant to the request.
  • the segments that are relevant to the request are scored and selected based on relevance to the video metadata and the user's context (e.g., the user's search query or user interests). For example, the segments relevant to the request are scored based on the match between the segment and the semantic concepts associated with the query. The segments with the highest score and reflecting a diversity of semantic concepts are selected.
  • the representative frames associated with the selected representative segments can be determined from the segment table.
  • the video summary module 126 generates a video summary 530 using the representative frames for the selected representative segments.
  • the video summary chronologically combines the representative frames and may present a series of the representative frames to the user, for example, in a static "storyboard” or by combining the frames into an animation that sequentially transitions from one frame to another.
  • the video summary is provided to the user, who determines whether or not to view the entire video.
  • FIG. 6 shows a video preview interface 600 including representative frames of video according to one embodiment.
  • the video preview interface 600 is provided to a client device 135 for browsing videos and determining whether to view a video in full based on the video preview.
  • a user entered a search query of "corvette unveiling" and several videos were determined as responsive to the request.
  • the search query and resulting videos are provided to the video summary module 126 for selection of representative frames and a preview of the videos.
  • a set of three videos 610A-610C is selected as relevant in a first portion of the display.
  • Each of the relevant videos is analyzed to determine representative frames and a relevance score for each representative frame.
  • the relevance score may be determined as described above to identify frames relevant to the search query or user profile.
  • the video summary module 126 selects a representative frame 620 to accompany a video in the display when the representative frame exceeds a threshold relevance score.
  • the video summary module 126 selects the frame to present on video preview interface 600 that has the highest relevance score over the threshold relevance score.
  • videos 610B and 610C did not have a representative frame with a relevance score higher than the threshold relevance score, and are not shown in the preview interface with a representative frame 620.
  • a scene preview 630 is displayed to the user.
  • the scene preview 630 may be shown in addition to the relevant videos 610A-C, or may be shown in a separate interface or display.
  • the scene preview 630 displays a thumbnail 640 of the relevant search results.
  • the default thumbnail is replaced with a representative frame for each video.
  • the displayed thumbnails 640A-C are the representative frames that have the highest relevance score for each of the search results.
  • the video summary module 126 generates relevance scores for each segment in the relevant videos and selects the highest-scoring representative frame. The representative frame replaces the default thumbnail image for display in the scene preview 630.
  • the scene preview 630 presents each video summarized by the representative frame that best summarizes the video relative to the search query entered by the user.
  • the user may be shown the video and playback of the video begins at the representative frame, permitting the user to jump to the representative frame in the video.
  • selecting the representative frame begins playback at the beginning of the segment containing the representative frame.
  • the relevance score may also account for the user profile and other information to determine the relevance score.
  • an interface element 650 permits a user to view additional videos summarized by representative frames. This interface element 650 provides the user with additional search results that also have default thumbnails replaced with query- or user-specific representative frames.
  • FIG. 7A shows another video preview interface 700 including representative frames of a video according to one embodiment.
  • the video preview interface 700 is provided to a client device 135 for browsing videos and determining whether to view a video in full based on the video preview.
  • a user entered a search query of "bulldog skateboarding" and several videos were determined as responsive to the request.
  • the search query and resulting videos are provided to the video summary module 126 for selection of representative frames and a preview of the videos.
  • a set of representative frames 710A, 710B, and 710C is provided to the user as a preview of the respective videos. That is, rather than selecting a single representative frame as shown in FIG. 6, in this embodiment multiple frames of a video may be selected and presented to a user. This permits the user to determine which of the videos, and which particular representative frame within the video, the user would like to view. When a user selects a representative frame, the user may be shown the video and playback of the video begins at the representative frame.
  • the video hosting system 100 can determine representative frames for the video preview interface 700 without significant frame-by-frame processing at the time of the search query.
  • FIGS. 7B-7D show further interfaces for presenting representative frames of a video according to various embodiments.
  • a representative frame 710 may be designated or highlighted by the video hosting service 100 as particularly relevant to the user or the user's search, in this example "elephant" or "Namibia elephant.”
  • representative frames 710D and 710E are highlighted, for example by an outline.
  • the video summary module 126 determines the set of representative frames for the user and generates the relevance score associated with the representative frames.
  • the representative frames are ranked by the relevance score, and the highest-ranked representative frame is identified and presented to the user with a highlight.
  • the representative frames are shown here as ordered chronologically, but may also be ordered according to the relevance score of the representative frames.
  • FIG. 7C shows a selection of representative frames for a video.
  • the video preview interface 700 includes a timeline 720 or progress bar 730 that indicates when in the video the particular representative frames occur.
  • FIG. 7D illustrates another video preview interface 700 in which the representative frames 710 are displayed in a grid configuration.
  • FIG. 8 shows an interface for providing a representative frame within a player interface 800 according to one embodiment.
  • the player interface 800 is the interface that a user interacts with to play the video and adjust controls for the video, such as volume, start, stop, seek, and other actions.
  • the player interface 800 also includes a progress bar 805 that indicates the length of the video and the portion of the video that has been viewed.
  • the video summary module 126 identifies one or more representative frames which may be indicated within the player interface 800. In this example, the time in the video at which a representative frame occurs is indicated by a marker 810 on the progress bar 805.
  • When a user interacts with the marker 810, the representative frame 815 is displayed to the user, which may also include a description of the semantic concepts or actions identified for the representative frame.
  • the user's interaction used to display the representative frame varies in different implementations, and may be a user's cursor detected at the position of the marker 810 for more than a threshold period of time (e.g., hovering) or may be a user clicking on the marker 810.
  • FIG. 9 shows an interface for providing representative frames for a video with a player interface 900 according to one embodiment.
  • the representative frames are displayed as a list 910.
  • the list of representative frames may also be sorted according to the relevance score of the frames.
  • the list of representative frames may be selected based on the user's profile, a search, or other indications of frames that may be of interest to the user.
  • the list of representative frames permits a user to review and select a representative frame without impacting the viewing area of the video.
  • the video hosting service 100 begins playback of the video at the time of the representative frame or related segment, permitting the user to quickly seek the portion of the video of interest to the user.
  • Using these interfaces, users can effectively identify portions of a video that are of interest to them, in a way that is query- or user-specific. These portions of the video are presented in ways that permit the user to determine whether the representative frames for one or more videos are of interest.
  • Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
  • the present disclosure also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of non-transient computer-readable storage medium suitable for storing electronic instructions.
  • the computers referred to in the specification may include a single processor or may employ architectures using multiple processors for increased computing capability.
  • the present disclosure is well suited to a wide variety of computer network systems over numerous topologies.
  • the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A computer-implemented method for selecting representative frames for videos is disclosed. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features. The semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is then generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment, based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
PCT/IB2015/056783 2014-09-08 2015-09-05 Selection and presentation of representative frames for video previews WO2016038522A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15839919.6A EP3192273A4 (fr) 2014-09-08 2015-09-05 Selecting and presenting representative frames for video previews
CN201580034616.3A CN107077595A (zh) 2014-09-08 2015-09-05 Selecting and presenting representative frames for video previews

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462047639P 2014-09-08 2014-09-08
US62/047,639 2014-09-08
US201562120107P 2015-02-24 2015-02-24
US62/120,107 2015-02-24

Publications (1)

Publication Number Publication Date
WO2016038522A1 true WO2016038522A1 (fr) 2016-03-17

Family

ID=55437782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/056783 WO2016038522A1 (fr) 2014-09-08 2015-09-05 Selecting and presenting representative frames for video previews

Country Status (4)

Country Link
US (3) US9953222B2 (fr)
EP (1) EP3192273A4 (fr)
CN (1) CN107077595A (fr)
WO (1) WO2016038522A1 (fr)


Families Citing this family (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8923607B1 (en) * 2010-12-08 2014-12-30 Google Inc. Learning sports highlights using event detection
WO2016038522A1 (fr) 2014-09-08 2016-03-17 Google Inc. Selecting and presenting representative frames for video previews
KR102306538B1 (ko) * 2015-01-20 2021-09-29 삼성전자주식회사 Content editing apparatus and method
US10440443B2 (en) * 2015-02-04 2019-10-08 Mobitv, Inc. Intermediate key frame selection and animation
US10440076B2 (en) * 2015-03-10 2019-10-08 Mobitv, Inc. Media seek mechanisms
US9449248B1 (en) 2015-03-12 2016-09-20 Adobe Systems Incorporated Generation of salient contours using live video
US9659218B1 (en) * 2015-04-29 2017-05-23 Google Inc. Predicting video start times for maximizing user engagement
US20160378863A1 (en) * 2015-06-24 2016-12-29 Google Inc. Selecting representative video frames for videos
US11158344B1 (en) * 2015-09-30 2021-10-26 Amazon Technologies, Inc. Video ingestion and clip creation
US10230866B1 (en) 2015-09-30 2019-03-12 Amazon Technologies, Inc. Video ingestion and clip creation
US10204273B2 (en) * 2015-10-20 2019-02-12 Gopro, Inc. System and method of providing recommendations of moments of interest within video clips post capture
US10229324B2 (en) * 2015-12-24 2019-03-12 Intel Corporation Video summarization using semantic information
JP6731178B2 (ja) * 2016-03-07 2020-07-29 富士ゼロックス株式会社 Video search device and program
US9866887B2 (en) * 2016-03-08 2018-01-09 Flipboard, Inc. Auto video preview within a digital magazine
US10049279B2 (en) 2016-03-11 2018-08-14 Qualcomm Incorporated Recurrent networks with motion-based attention for video understanding
JP2017204753A (ja) * 2016-05-11 2017-11-16 富士通株式会社 Frame extraction method, video playback control method, program, frame extraction device, and video playback control device
US20170337273A1 (en) * 2016-05-17 2017-11-23 Opentv, Inc Media file summarizer
US11523188B2 (en) * 2016-06-30 2022-12-06 Disney Enterprises, Inc. Systems and methods for intelligent media content segmentation and analysis
US10347294B2 (en) * 2016-06-30 2019-07-09 Google Llc Generating moving thumbnails for videos
US10645142B2 (en) * 2016-09-20 2020-05-05 Facebook, Inc. Video keyframes display on online social networks
KR20180036153A (ko) * 2016-09-30 2018-04-09 주식회사 요쿠스 Video editing system and method
KR20180058019A (ko) * 2016-11-23 2018-05-31 한화에어로스페이스 주식회사 Image search apparatus, data storage method, and data storage apparatus
US10482126B2 (en) * 2016-11-30 2019-11-19 Google Llc Determination of similarity between videos using shot duration correlation
CN106528884B (zh) * 2016-12-15 2019-01-11 腾讯科技(深圳)有限公司 Information display picture generation method and apparatus
US10631028B2 (en) 2016-12-19 2020-04-21 Sony Interactive Entertainment LLC Delivery of third party content on a first party portal
US10430661B2 (en) * 2016-12-20 2019-10-01 Adobe Inc. Generating a compact video feature representation in a digital medium environment
US10366132B2 (en) 2016-12-28 2019-07-30 Sony Interactive Entertainment LLC Delivering customized content using a first party portal service
US10419384B2 (en) * 2017-01-06 2019-09-17 Sony Interactive Entertainment LLC Social network-defined video events
US10671852B1 (en) 2017-03-01 2020-06-02 Matroid, Inc. Machine learning in video classification
US10268897B2 (en) 2017-03-24 2019-04-23 International Business Machines Corporation Determining most representative still image of a video for specific user
US10409859B2 (en) * 2017-05-15 2019-09-10 Facebook, Inc. Video heat maps personalized for online system users
IT201700053345A1 (it) * 2017-05-17 2018-11-17 Metaliquid S R L Method and apparatus for analyzing video content in digital format
US11822591B2 (en) * 2017-09-06 2023-11-21 International Business Machines Corporation Query-based granularity selection for partitioning recordings
CN107613373B (zh) * 2017-09-12 2019-07-30 中广热点云科技有限公司 Method for continuously watching a television program across multiple screens
CN107872724A (zh) * 2017-09-26 2018-04-03 五八有限公司 Preview video generation method and apparatus
CN109756767B (zh) * 2017-11-06 2021-12-14 腾讯科技(深圳)有限公司 Preview data playback method and apparatus, and storage medium
US10521705B2 (en) * 2017-11-14 2019-12-31 Adobe Inc. Automatically selecting images using multicontext aware ratings
CN107832725A (zh) * 2017-11-17 2018-03-23 北京奇虎科技有限公司 Video cover extraction method and apparatus based on evaluation metrics
CN107958030B (zh) * 2017-11-17 2021-08-24 北京奇虎科技有限公司 Video cover recommendation model optimization method and apparatus
CN107918656A (zh) * 2017-11-17 2018-04-17 北京奇虎科技有限公司 Video cover extraction method and apparatus based on video titles
US10445586B2 (en) 2017-12-12 2019-10-15 Microsoft Technology Licensing, Llc Deep learning on image frames to generate a summary
US20190199763A1 (en) * 2017-12-22 2019-06-27 mindHIVE Inc. Systems and methods for previewing content
US10932006B2 (en) * 2017-12-22 2021-02-23 Facebook, Inc. Systems and methods for previewing content
CN108377417B (zh) * 2018-01-17 2019-11-26 百度在线网络技术(北京)有限公司 Video review method and apparatus, computer device, and storage medium
US10474903B2 (en) * 2018-01-25 2019-11-12 Adobe Inc. Video segmentation using predictive models trained to provide aesthetic scores
CN108307229B (zh) * 2018-02-02 2023-12-22 新华智云科技有限公司 Method and device for processing audio and video data
US10679069B2 (en) 2018-03-27 2020-06-09 International Business Machines Corporation Automatic video summary generation
US10733984B2 (en) * 2018-05-07 2020-08-04 Google Llc Multi-modal interface in a voice-activated network
US11430312B2 (en) * 2018-07-05 2022-08-30 Movidius Limited Video surveillance with neural networks
CN109040823B (zh) * 2018-08-20 2021-06-04 青岛海信传媒网络技术有限公司 Bookmark display method and apparatus
CN110868630A (zh) * 2018-08-27 2020-03-06 北京优酷科技有限公司 Trailer generation method and apparatus
EP3652641B1 (fr) 2018-09-18 2022-03-02 Google LLC Methods and systems for processing imagery
CN109740499B (zh) * 2018-12-28 2021-06-11 北京旷视科技有限公司 Video segmentation method, video action recognition method, apparatus, device, and medium
US11695812B2 (en) * 2019-01-14 2023-07-04 Dolby Laboratories Licensing Corporation Sharing physical writing surfaces in videoconferencing
US11080532B2 (en) * 2019-01-16 2021-08-03 Mediatek Inc. Highlight processing method using human pose based triggering scheme and associated system
KR102613328B1 (ko) 2019-01-17 2023-12-14 삼성전자주식회사 Display apparatus and control method thereof
CN111836118B (zh) * 2019-04-19 2022-09-06 百度在线网络技术(北京)有限公司 Video processing method and apparatus, server, and storage medium
US20200380030A1 (en) * 2019-05-31 2020-12-03 Adobe Inc. In-application video navigation system
CN110347872B (zh) * 2019-07-04 2023-10-24 腾讯科技(深圳)有限公司 Video cover image extraction method and apparatus, storage medium, and electronic device
CN110381339B (zh) * 2019-08-07 2021-08-27 腾讯科技(深圳)有限公司 Picture transmission method and apparatus
CN110650379B (zh) * 2019-09-26 2022-04-01 北京达佳互联信息技术有限公司 Video summary generation method and apparatus, electronic device, and storage medium
CN110704681B (zh) * 2019-09-26 2023-03-24 三星电子(中国)研发中心 Method and system for generating video
US10998007B2 (en) * 2019-09-30 2021-05-04 Adobe Inc. Providing context aware video searching
US11500927B2 (en) * 2019-10-03 2022-11-15 Adobe Inc. Adaptive search results for multimedia search queries
CN110909205B (zh) * 2019-11-22 2023-04-07 北京金山云网络技术有限公司 Video cover determination method and apparatus, electronic device, and readable storage medium
CN110856037B (zh) * 2019-11-22 2021-06-22 北京金山云网络技术有限公司 Video cover determination method and apparatus, electronic device, and readable storage medium
KR102285039B1 (ko) * 2019-12-12 2021-08-03 한국과학기술원 Shot boundary detection method and apparatus using multi-class classification
CN113132752B (zh) * 2019-12-30 2023-02-24 阿里巴巴集团控股有限公司 Video processing method and apparatus
EP3852059A1 (fr) 2020-01-15 2021-07-21 General Electric Company System and method for assessing the health of an asset
CN111277892B (zh) * 2020-01-20 2022-03-22 北京百度网讯科技有限公司 Method, apparatus, server, and medium for selecting video clips
CN111464833B (zh) * 2020-03-23 2023-08-04 腾讯科技(深圳)有限公司 Target image generation method and apparatus, medium, and electronic device
CN113438500B (zh) * 2020-03-23 2023-03-24 阿里巴巴集团控股有限公司 Video processing method and apparatus, electronic device, and computer storage medium
CN113453055B (zh) * 2020-03-25 2022-12-27 华为技术有限公司 Method and apparatus for generating a video thumbnail, and electronic device
TWI741550B (zh) * 2020-03-31 2021-10-01 國立雲林科技大學 Method for generating bookmark frames, audio/video playback device that automatically generates bookmarks, and user interface thereof
CN111836072B (zh) * 2020-05-21 2022-09-13 北京嘀嘀无限科技发展有限公司 Video processing method, apparatus, device, and storage medium
CN111818363A (zh) * 2020-07-10 2020-10-23 携程计算机技术(上海)有限公司 Short video extraction method, system, device, and storage medium
US11636677B2 (en) * 2021-01-08 2023-04-25 Huawei Technologies Co., Ltd. Systems, devices and methods for distributed hierarchical video analysis
US11711579B1 (en) * 2021-01-25 2023-07-25 Amazon Technologies, Inc. Navigation integrated content stream
US11893792B2 (en) * 2021-03-25 2024-02-06 Adobe Inc. Integrating video content into online product listings to demonstrate product features
US11678030B2 (en) * 2021-07-16 2023-06-13 Rovi Guides, Inc. Personalized screencaps for trickplay slider
WO2023130326A1 (fr) * 2022-01-06 2023-07-13 Huawei Technologies Co., Ltd. Procédés et dispositifs permettant de générer un segment vidéo personnalisé sur la base de caractéristiques de contenu
CN114445754A (zh) * 2022-01-29 2022-05-06 北京有竹居网络技术有限公司 Video processing method and apparatus, readable medium, and electronic device
CN115278355B (zh) * 2022-06-20 2024-02-13 北京字跳网络技术有限公司 Video editing method, apparatus, device, computer-readable storage medium, and product
US20240012555A1 (en) * 2022-07-07 2024-01-11 Google Llc Identifying and navigating to a visual item on a web page


Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6097853A (en) * 1996-09-11 2000-08-01 Da Vinci Systems, Inc. User definable windows for selecting image processing regions
US6721454B1 (en) * 1998-10-09 2004-04-13 Sharp Laboratories Of America, Inc. Method for automatic extraction of semantically significant events from video
KR100319158B1 (ko) * 1999-08-26 2001-12-29 구자홍 Method for generating event-interval-based video data and method for searching video
US6697523B1 (en) * 2000-08-09 2004-02-24 Mitsubishi Electric Research Laboratories, Inc. Method for summarizing a video using motion and color descriptors
US7325199B1 (en) * 2000-10-04 2008-01-29 Apple Inc. Integrated time line for editing
JP4595263B2 (ja) * 2001-07-31 2010-12-08 アイシン精機株式会社 Valve opening/closing timing control device
US8311344B2 (en) * 2008-02-15 2012-11-13 Digitalsmiths, Inc. Systems and methods for semantically classifying shots in video
KR20110023878A (ko) 2008-06-09 2011-03-08 코닌클리케 필립스 일렉트로닉스 엔.브이. Method and apparatus for generating a summary of an audio/visual data stream
US8213725B2 (en) 2009-03-20 2012-07-03 Eastman Kodak Company Semantic event detection using cross-domain knowledge
US20110047163A1 (en) 2009-08-24 2011-02-24 Google Inc. Relevance-Based Image Selection
US20110262103A1 (en) * 2009-09-14 2011-10-27 Kumar Ramachandran Systems and methods for updating video content with linked tagging information
CN101778257B (zh) * 2010-03-05 2011-10-26 北京邮电大学 Method for generating video summary segments for digital video-on-demand
US20120206567A1 (en) * 2010-09-13 2012-08-16 Trident Microsystems (Far East) Ltd. Subtitle detection system and method to television video
EP2638509A4 (fr) 2010-11-11 2015-06-03 Google Inc. Learning tags for video annotation using latent subtags
US9355635B2 (en) * 2010-11-15 2016-05-31 Futurewei Technologies, Inc. Method and system for video summarization
KR101512584B1 (ko) * 2010-12-09 2015-04-15 노키아 코포레이션 Limited-context-based identification of key frames from a video sequence
US10134440B2 (en) * 2011-05-03 2018-11-20 Kodak Alaris Inc. Video summarization using audio and visual cues
US8473981B1 (en) * 2011-06-30 2013-06-25 Google Inc. Augmenting metadata of digital media objects using per object classifiers
US8867891B2 (en) * 2011-10-10 2014-10-21 Intellectual Ventures Fund 83 Llc Video concept classification using audio-visual grouplets
CN102663015B (zh) * 2012-03-21 2015-05-06 上海大学 Video semantic annotation method based on a bag-of-features model and supervised learning
US20130300939A1 (en) * 2012-05-11 2013-11-14 Cisco Technology, Inc. System and method for joint speaker and scene recognition in a video/audio processing environment
KR101976178B1 (ko) * 2012-06-05 2019-05-08 엘지전자 주식회사 Mobile terminal and method for controlling the mobile terminal
US8989503B2 (en) * 2012-08-03 2015-03-24 Kodak Alaris Inc. Identifying scene boundaries using group sparsity analysis
US9274678B2 (en) * 2012-09-13 2016-03-01 Google Inc. Identifying a thumbnail image to represent a video
CN103761284B (zh) * 2014-01-13 2018-08-14 中国农业大学 Video retrieval method and system
CN103905824A (zh) * 2014-03-26 2014-07-02 深圳先进技术研究院 Camera system and method for synchronized video semantic retrieval and compression
WO2016038522A1 (fr) 2014-09-08 2016-03-17 Google Inc. Selecting and presenting representative frames for video previews
US10229324B2 (en) * 2015-12-24 2019-03-12 Intel Corporation Video summarization using semantic information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7627823B2 (en) * 1998-12-28 2009-12-01 Sony Corporation Video information editing method and editing device
US6535639B1 (en) * 1999-03-12 2003-03-18 Fuji Xerox Co., Ltd. Automatic video summarization using a measure of shot importance and a frame-packing method
US20080155627A1 (en) * 2006-12-04 2008-06-26 O'connor Daniel Systems and methods of searching for and presenting video and audio
US20110267544A1 (en) * 2010-04-28 2011-11-03 Microsoft Corporation Near-lossless video summarization
US20130114902A1 (en) * 2011-11-04 2013-05-09 Google Inc. High-Confidence Labeling of Video Volumes in a Video Sharing Service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3192273A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076380A1 (fr) * 2016-10-31 2018-05-03 华为技术有限公司 Electronic device and method for generating video thumbnail in an electronic device
US10860857B2 (en) 2016-10-31 2020-12-08 Huawei Technologies Co., Ltd. Method for generating video thumbnail on electronic device, and electronic device
WO2019198951A1 (fr) * 2018-04-10 2019-10-17 삼성전자 주식회사 Electronic device and operation method thereof
KR20190118415A (ko) * 2018-04-10 2019-10-18 삼성전자주식회사 Electronic device and operation method thereof
KR102464907B1 (ko) * 2018-04-10 2022-11-09 삼성전자주식회사 Electronic device and operation method thereof
US11627383B2 (en) 2018-04-10 2023-04-11 Samsung Electronics Co., Ltd. Electronic device and operation method thereof
EP3798866A1 (fr) * 2019-09-24 2021-03-31 Facebook Inc. Generating and selecting personalized thumbnails for digital content using computer vision and machine learning

Also Published As

Publication number Publication date
EP3192273A1 (fr) 2017-07-19
US12014542B2 (en) 2024-06-18
US10867183B2 (en) 2020-12-15
CN107077595A (zh) 2017-08-18
US20160070962A1 (en) 2016-03-10
EP3192273A4 (fr) 2018-05-23
US9953222B2 (en) 2018-04-24
US20180239964A1 (en) 2018-08-23
US20210166035A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
US12014542B2 (en) Selecting and presenting representative frames for video previews
US11693902B2 (en) Relevance-based image selection
US20210166072A1 (en) Learning highlights using event detection
US8804999B2 (en) Video recommendation system and method thereof
US8983192B2 (en) High-confidence labeling of video volumes in a video sharing service
US9715731B2 (en) Selecting a high valence representative image
US9087242B2 (en) Video synthesis using video volumes
US9176987B1 (en) Automatic face annotation method and system
JP2009095013A (ja) Video summarization system and computer program for video summarization
WO2012141655A1 (fr) Annotation de produit vidéo avec exploration d'informations web
US20230140369A1 (en) Customizable framework to extract moments of interest
Fei et al. Creating memorable video summaries that satisfy the user’s intention for taking the videos
Ren et al. Activity-driven content adaptation for effective video summarization
JP4995770B2 (ja) Image dictionary generation device, image dictionary generation method, and image dictionary generation program
Liu et al. Within and between shot information utilisation in video key frame extraction
US8880534B1 (en) Video classification boosting
US20170177577A1 (en) Biasing scrubber for digital content
Chen et al. A simplified approach to rushes summarization

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15839919

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015839919

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE