
Video display method, video editing method and device

Info

Publication number: CN113542818A (application CN202110807225.8A; granted publication CN113542818B)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: information, video, sub-video material, editing
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 宋旸, 白刚, 黄鑫, 徐祯辉, 肖洋, 邢欣
Current assignee: Douyin Vision Co Ltd; Douyin Vision Beijing Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd; priority to CN202110807225.8A

Classifications

    • H04N 21/25891: Management of end-user data being end-user preferences
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4668: Learning process for intelligent management, e.g. learning user preferences for recommending content
    • H04N 21/472: End-user interface for requesting content, additional data or services, or for interacting with content
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The present disclosure provides a video display method, a video editing method and a video editing apparatus. The video editing method includes: acquiring a video material; performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material, where the sub-video materials correspond to video scenes; acquiring information editing information of the sub-video material; and generating push video content based on the sub-video material and the corresponding information editing information.

Description

Video display method, video editing method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video display method, a video editing method, and an apparatus.
Background
In the related art, information is pushed on a per-video basis. Typically, professional staff shoot a video, add information to the shot video, and send the video with the added information to each user terminal, thereby pushing the information.
However, in the related art, both the shooting of the video and the addition of information to it are generally performed manually, so this way of processing video is inefficient.
Disclosure of Invention
The embodiment of the disclosure at least provides a video display method, a video editing method and a video editing device.
In a first aspect, an embodiment of the present disclosure provides a video display method, including:
acquiring pushed video content, wherein the pushed video content comprises at least one sub-video material and information editing information corresponding to the sub-video material;
playing the pushed video content in a page, and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information;
and responding to the trigger operation aiming at the pushed video content, and jumping from the page to an information page corresponding to the pushed video content.
In one possible embodiment, the information editing information includes at least one of the following information:
bullet-screen (barrage) information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, the information page corresponding to the pushed video content includes: a source page corresponding to the pushed video content, or a download page of a target application program corresponding to the pushed video content.
In a second aspect, an embodiment of the present disclosure provides a video editing method, including:
acquiring a video material;
carrying out scene recognition on the obtained video material, and splitting the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
acquiring information editing information of the sub-video material;
and generating push video content based on the sub-video materials and the corresponding information editing information.
In one possible embodiment, the acquiring the information editing information of the sub-video material includes:
determining information editing information of the sub-video materials according to material information corresponding to the sub-video materials and/or different user attribute information; the material information comprises at least one of a material type, a scene type and target object information in the material.
In one possible embodiment, the determining information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the different user attribute information includes:
and determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the corresponding relation between the different user attribute information and the information editing information.
In a possible embodiment, the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material includes:
sampling the video material to obtain a plurality of sampled video frames;
determining color information of each pixel point in each sampling video frame aiming at each sampling video frame;
calculating the average value of the color information in the sampling video frame to obtain a color average value;
and determining the segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible embodiment, the color information of the pixel point includes first color information and/or second color information;
the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively; the second color information comprises hue, saturation and brightness.
In one possible embodiment, determining the slicing time point of the video material based on the color mean of each sampled video frame includes:
determining a segmented video frame based on a difference value between color mean values of adjacent sampled video frames;
and taking the time point of the segmentation video frame corresponding to the video material as the segmentation time point of the video material.
In a possible embodiment, the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material includes:
acquiring interactive information aiming at the video material information;
determining at least one sub-video material from the video materials based on the interaction information.
In one possible embodiment, the determining at least one sub-video material from the video materials based on the interaction information includes:
determining an interaction timestamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction timestamp of the interaction information; and/or,
and detecting target interaction information containing preset target keywords, and determining at least one sub-video material from the video materials based on an interaction timestamp of the target interaction information.
In a possible embodiment, the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material includes:
acquiring interactive data corresponding to the video material at each of a plurality of playback progress points;
determining at least one pair of target timestamps based on the interactive data corresponding to the plurality of playback progress points;
and splitting the video material according to the at least one pair of target timestamps to obtain at least one sub-video material.
In one possible implementation, the generating the push video content based on the sub-video material and the corresponding information editing information includes:
acquiring a display template corresponding to the sub-video material;
respectively determining the display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video material and the information editing information in the display template according to the determined display position information to generate the push video content.
In one possible embodiment, the obtaining a presentation template corresponding to the sub-video material includes:
responding to a template selection instruction, and acquiring a display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In one possible embodiment, determining the display position information of the sub-video material and the information editing information in the display template according to the display template respectively includes:
according to the sub-video material and the corresponding information editing information, determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
In one possible implementation, the generating push video content based on the sub-video material and the corresponding information editing information includes:
determining attribute information of the sub-video materials;
screening out a target sub-video material from the at least one sub-video material based on the attribute information of the sub-video material;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
In one possible embodiment, the attribute information of the sub-video material includes at least one of the following information:
the playing time, the watching times and the number of barrages.
In a third aspect, an embodiment of the present disclosure provides a video display apparatus, including:
the first acquisition module is used for acquiring pushed video content, and the pushed video content comprises sub video materials and corresponding information editing information;
the display module is used for playing the pushed video content in a page and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information;
and the response module is used for responding to the triggering operation aiming at the pushed video content and jumping from the page to an information page corresponding to the pushed video content.
In one possible embodiment, the information editing information includes at least one of the following information:
bullet-screen (barrage) information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, the information page corresponding to the pushed video content includes: a source page corresponding to the pushed video content, or a download page of a target application program corresponding to the pushed video content.
In a fourth aspect, an embodiment of the present disclosure provides a video editing apparatus, including:
the second acquisition module is used for acquiring the video material;
the determining module is used for carrying out scene identification on the obtained video material and splitting the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
the third acquisition module is used for acquiring the information editing information of the sub-video materials;
and the generating module is used for generating push video content based on the sub-video material and the corresponding information editing information.
In a possible implementation manner, the third obtaining module, when obtaining the information editing information of the sub video material, is configured to:
determining information editing information of the sub-video materials according to material information corresponding to the sub-video materials and/or different user attribute information; the material information comprises at least one of a material type, a scene type and target object information in the material.
In a possible implementation manner, when determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the different user attribute information, the third obtaining module is configured to:
and determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the corresponding relation between the different user attribute information and the information editing information.
In a possible embodiment, the determining module, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
sampling the video material to obtain a plurality of sampled video frames;
determining color information of each pixel point in each sampling video frame aiming at each sampling video frame;
calculating the average value of the color information in the sampling video frame to obtain a color average value;
and determining the segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible embodiment, the color information of the pixel point includes first color information and/or second color information;
the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively; the second color information comprises hue, saturation and brightness.
In one possible embodiment, the determining module, when determining the slicing time point of the video material based on the color mean of each sampled video frame, is configured to:
determining a segmented video frame based on a difference value between color mean values of adjacent sampled video frames;
and taking the time point of the segmentation video frame corresponding to the video material as the segmentation time point of the video material.
In a possible embodiment, the determining module, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
acquiring interactive information aiming at the video material information;
determining at least one sub-video material from the video materials based on the interaction information.
In a possible embodiment, the determining module, when determining at least one sub-video material from the video materials based on the interaction information, is configured to:
determining an interaction timestamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction timestamp of the interaction information; and/or,
and detecting target interaction information containing preset target keywords, and determining at least one sub-video material from the video materials based on an interaction timestamp of the target interaction information.
In a possible embodiment, the determining module, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
acquiring interactive data corresponding to the video material at each of a plurality of playback progress points;
determining at least one pair of target timestamps based on the interactive data corresponding to the plurality of playback progress points;
and splitting the video material according to the at least one pair of target timestamps to obtain at least one sub-video material.
In one possible implementation, the generating module, when generating the push video content based on the sub video material and the corresponding information editing information, is configured to:
acquiring a display template corresponding to the sub-video material;
respectively determining the display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video material and the information editing information in the display template according to the determined display position information to generate the push video content.
In one possible embodiment, the generating module, when obtaining the presentation template corresponding to the sub-video material, is configured to:
responding to a template selection instruction, and acquiring a display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, the generating module, when determining, according to the presentation template, presentation position information of the sub video material and the information editing information in the presentation template, is configured to:
according to the sub-video material and the corresponding information editing information, determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
In one possible implementation, the generating module, when generating the push video content based on the sub video material and the corresponding information editing information, is configured to:
determining attribute information of the sub-video materials;
screening out a target sub-video material from the at least one sub-video material based on the attribute information of the sub-video material;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
In one possible embodiment, the attribute information of the sub-video material includes at least one of the following information:
the playing time, the watching times and the number of barrages.
In a fifth aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect (or any possible implementation of the first aspect) or of the second aspect.
In a sixth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the first aspect (or any possible implementation of the first aspect) or of the second aspect.
The video display method, the video editing method and the apparatuses provided by the embodiments of the present disclosure can perform scene recognition on a video material, divide the video material into at least one sub-video material corresponding to different video scenes, and then automatically generate pushed video content based on the sub-video materials and the corresponding information editing information, which is more efficient than manual video processing.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art may derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a video editing method provided by an embodiment of the present disclosure;
fig. 2 shows a flowchart of a method for determining a sub-video material according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart illustrating a video presentation method provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a page showing pushed video content provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating an architecture of a video display apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating an architecture of a video editing apparatus provided in an embodiment of the present disclosure;
FIG. 7 shows a schematic structural diagram of a computer device 700 provided by an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of a computer device 800 provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In the related art, clipping a video and adding information editing information to it are completed manually, and this way of processing video is inefficient.
Based on this, the present disclosure provides a video display method, a video editing method and corresponding apparatuses, which can perform scene recognition on a video material, divide the video material into at least one sub-video material corresponding to different video scenes, and then automatically generate pushed video content based on the sub-video materials and the corresponding information editing information.
It should be noted that the above-mentioned drawbacks were identified by the inventor through practice and careful study; therefore, the process of discovering the above problems, and the solutions that the present disclosure proposes for them, should both be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a video editing method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the video editing method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the video editing method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a video editing method provided in an embodiment of the present disclosure is shown, where the method includes steps 101 to 104, where:
and step 101, acquiring a video material.
102, carrying out scene recognition on an obtained video material, and splitting the video material into at least one sub-video material; and the sub-video material and the video scene have a corresponding relation.
Step 103, obtaining the information editing information of the sub video material.
And 104, generating push video content based on the sub video material and the corresponding information editing information.
The following is a detailed description of the above steps 101 to 104.
For step 101:
the acquiring of the video material may be acquiring the video material input by the user, or acquiring the video material pre-stored in a local database. In one possible implementation, the video material may also be obtained from a cloud server.
With respect to step 102:
in a specific implementation, when performing scene recognition on an acquired video material and splitting the video material into at least one sub-video material, the method shown in fig. 2 may be referred to, and includes the following steps:
step 201, sampling the video material to obtain a plurality of sampled video frames.
The video material may include a plurality of video frames, and in order to improve processing efficiency, the plurality of video frames included in the video material may be sampled, for example, the plurality of sampled video frames may be obtained by sampling at preset time intervals, where the length of the preset time intervals may be dynamically adjusted according to different video materials.
Step 202, determining color information of each pixel point in each sampled video frame according to each sampled video frame.
The color information of each pixel point in the sampling video frame comprises first color information and/or second color information, the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively, and the second color information comprises hue, saturation and brightness.
Illustratively, if a sampled video frame includes M × N pixel points and the color information of each pixel point includes the values of that pixel point on the red, green and blue channels, then for each pixel point in the sampled video frame, its values on the red, green and blue channels need to be determined.
And 203, calculating the average value of the color information in the sampling video frame to obtain a color average value.
The color information of each pixel point in a sampled video frame comprises a plurality of values. When calculating the color mean, for each pixel point, the plurality of values of its color information may be averaged to obtain the pixel color mean of that pixel point; then, for each sampled video frame, the pixel color means of the plurality of pixel points in the frame are averaged to obtain the color mean of that sampled video frame.
Illustratively, if a sampled video frame includes 1024 × 1024 pixel points and the color information of each pixel point comprises hue, saturation and brightness, then for each pixel point the hue, saturation and brightness are summed and divided by 3 to obtain the pixel color mean of that pixel point; the pixel color means of the 1024 × 1024 pixel points are then summed and divided by 1024 × 1024 to obtain the color mean of the sampled video frame.
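Concretely, the two-level averaging described above (a per-pixel mean over the color values, then a per-frame mean over the pixel color means) can be written in a few lines. The following is a minimal sketch, assuming each sampled frame arrives as a NumPy array; the disclosure does not prescribe any particular representation:

```python
import numpy as np

def frame_color_mean(frame: np.ndarray) -> float:
    """Color mean of one sampled video frame.

    `frame` is an H x W x C array whose last axis holds the color
    information of each pixel point (e.g. R, G, B values, or hue,
    saturation, brightness).
    """
    # Average the C color values of each pixel point -> pixel color mean.
    pixel_means = frame.mean(axis=2)   # shape (H, W)
    # Average the pixel color means over all H * W pixel points.
    return float(pixel_means.mean())
```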
And 204, determining the segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
The color mean values of the sampled video frames in the same scene are similar, so that the sampled video frames in the same scene in the video material can be identified based on the color mean values of the sampled video frames.
In one possible implementation, when determining the segmentation time point of the video material, the segmentation video frame may be determined based on a difference value between color mean values of adjacent sampled video frames, and then a corresponding time point of the segmentation video frame in the video material may be used as the segmentation time point of the video material.
In a specific implementation, if the difference between the color means of any two adjacent sampled video frames is greater than a preset difference value, the one of the two frames whose time point in the video material is earlier may be used as the segmentation video frame; that is, the video frame that appears first in the video material is used as the segmentation video frame.
For example, suppose video frame A and video frame B are two adjacent sampled video frames and video frame A appears in the video material earlier than video frame B. If the difference between the color means of video frame A and video frame B is greater than the preset difference value, the time point of video frame A in the video material may be used as a segmentation time point of the video material.
Here, it should be noted that the same video material may include at least one scene; for example, a video material may contain only an office scene, or may simultaneously contain an office scene, a restaurant scene, an outdoor scene, and so on. Accordingly, each video material has at least one corresponding segmentation time point and at least one corresponding sub-video material.
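Putting steps 201 to 204 together, the following is a minimal end-to-end sketch of the splitting pipeline. OpenCV is assumed only as a convenient decoder, and the one-second sampling interval and the difference threshold are illustrative stand-ins for the preset values mentioned above, not values from the disclosure:

```python
import cv2  # assumed decoder; the disclosure does not name one

def segmentation_points(path: str, interval_s: float = 1.0,
                        max_diff: float = 12.0) -> list[float]:
    """Return candidate segmentation time points (in seconds).

    Step 201: sample one frame per `interval_s`.
    Steps 202-203: compute each sampled frame's color mean.
    Step 204: mark a split at the earlier frame of any adjacent pair
    whose color means differ by more than `max_diff` (illustrative).
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, round(fps * interval_s))
    means, times = [], []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            means.append(float(frame.mean()))  # mean over pixels and channels
            times.append(index / fps)
        index += 1
    cap.release()
    return [times[i] for i in range(len(means) - 1)
            if abs(means[i] - means[i + 1]) > max_diff]
```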
In another possible implementation, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, the following steps may be performed:
A. acquiring interactive information for the video material information.
B. Determining at least one sub-video material from the video materials based on the interaction information.
Here, the interactive information may exemplarily include at least one of a bullet screen, a comment, a gift, and the like. In determining at least one sub-video material from the video materials based on the interaction information, the at least one sub-video material may be determined by any one or more of the following methods:
method B1, determining an interaction timestamp of the interaction information, and determining at least one sub-video material from the video materials based on the interaction timestamp of the interaction information.
And B2, detecting target interaction information containing preset target keywords, and determining at least one sub-video material from the video materials based on the interaction timestamp of the target interaction information.
Here, the interaction timestamp is the time at which another user sent the interaction information while the video material information was being played, and it may be a timestamp relative to the video material information; for example, bullet-screen information received at the 17th second of playback of the video material information.
In a possible implementation of method B1, when determining at least one sub-video material from the video material based on the interaction timestamps corresponding to the interaction information, the video material may first be divided into a plurality of playing time intervals, for example one second per interval. Then, based on the interaction timestamps, heat information corresponding to each playing time interval is determined, where the heat information indicates how much attention other users paid to the content played in that interval. Finally, in the video corresponding to the video material information, the video corresponding to N consecutive playing time intervals whose heat information meets a preset condition is used as a sub-video material, where N is a positive integer.
Here, the heat information may simply be the number of pieces of interaction information. Alternatively, different pieces of interaction information may be assigned different heat values, with interaction information containing a preset keyword assigned a higher heat value; the heat information of each playing time interval is then obtained by summing the heat values of the interaction information falling in that interval.
In a possible implementation of method B2, determining at least one sub-video material from the video material based on the interaction timestamps of the target interaction information may be understood as follows: based on those timestamps, determine a time interval in which the target interaction information occurs at a frequency higher than a preset frequency, and then use the sub-video corresponding to that time interval as a sub-video material.
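The following is a minimal sketch of both selection methods above. The one-second playing time interval, the heat threshold, the run length and all names are illustrative assumptions; `interactions` is taken to be a list of (timestamp_in_seconds, text) pairs:

```python
from collections import Counter

def hot_spans(interactions, n=5, min_heat=10, keywords=None):
    """Return (start_s, end_s) spans usable as sub-video materials.

    Method B1: bucket interaction timestamps into one-second playing
    time intervals and keep runs of at least `n` consecutive intervals
    whose heat (here simply the interaction count) reaches `min_heat`.
    Method B2: if `keywords` is given, only target interaction
    information containing a preset target keyword is counted.
    """
    if keywords is not None:
        interactions = [(t, text) for t, text in interactions
                        if any(k in text for k in keywords)]
    heat = Counter(int(t) for t, _ in interactions)
    hot = sorted(sec for sec, count in heat.items() if count >= min_heat)

    spans, run = [], []
    for sec in hot:
        if run and sec != run[-1] + 1:   # run of consecutive seconds ended
            if len(run) >= n:
                spans.append((run[0], run[-1] + 1))
            run = []
        run.append(sec)
    if len(run) >= n:
        spans.append((run[0], run[-1] + 1))
    return spans
```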
Alternatively, in another possible implementation, when performing scene recognition on the acquired video material and splitting it into at least one sub-video material, interactive data corresponding to the video material at each of a plurality of playback progress points may first be acquired; at least one pair of target timestamps is then determined based on that interactive data, and the video material is split according to the at least one pair of target timestamps to obtain at least one sub-video material.
Here, the interactive data corresponding to the video material at the plurality of playback progress points may be understood as the number of viewers watching the video material at each of those points. Determining at least one pair of target timestamps may be understood as finding, from the viewer counts at the plurality of progress points, the playing intervals in which the viewer count is greater than a preset number; the timestamps bounding such an interval form a pair of target timestamps. Splitting the video material according to the at least one pair of target timestamps may be understood as using the sub-video between each pair of target timestamps as a sub-video material.
In one possible implementation, if the target timestamps of two sub-video materials are adjacent, the two sub-video materials may be merged into one sub-video material.
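A minimal sketch of this viewer-count variant, assuming the interactive data arrives as (playback progress point in seconds, viewer count) pairs; the viewer threshold is an illustrative assumption:

```python
def target_timestamp_pairs(viewer_counts, min_viewers=1000):
    """Return merged (start_s, end_s) target timestamp pairs.

    `viewer_counts` is a list of (second, viewers) sorted by second.
    An interval where the viewer count exceeds `min_viewers` yields a
    pair of target timestamps; pairs that touch are merged, mirroring
    the merging of adjacent sub-video materials described above.
    """
    pairs, start = [], None
    for sec, viewers in viewer_counts:
        if viewers > min_viewers and start is None:
            start = sec                        # interval opens
        elif viewers <= min_viewers and start is not None:
            pairs.append((start, sec))         # interval closes
            start = None
    if start is not None:                      # still open at the end
        pairs.append((start, viewer_counts[-1][0]))

    merged = []
    for s, e in pairs:
        if merged and s - merged[-1][1] <= 1:  # adjacent target timestamps
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```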
For step 103:
wherein, the information editing information of the sub-video material may include at least one of the following information:
bullet-screen (barrage) information, source information of the sub-video material, content description information of the video material, and search information for indicating a search.
For each sub-video material, when the information editing information of the sub-video material is obtained, the information editing information corresponding to the sub-video material may be determined according to the material information corresponding to the sub-video material and/or the attribute information of different users.
And the material information corresponding to the sub-video material comprises at least one of a material type, a scene type and target object information in the material.
The material type is used for representing the attribute type of the sub-video material, and may include, for example, a movie drama type, a variety type, a reality show type, a news type, and the like; the scene type is used for representing a scene corresponding to the sub-video material, and may include a restaurant, an office, a park, a supermarket, and the like, for example; the target object information in the sub-video material may include clothing information, furniture information, flower information, and the like.
The user attribute information can comprise information such as the age, the sex and the occupation of the user, and push video content suitable for different crowds can be generated based on different user attribute information, so that targeted pushing of information editing information can be performed.
In a possible implementation manner, when determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or different user attribute information, the information editing information may be determined according to preset correspondences between information editing information and, respectively, the material information and/or the different user attribute information.
When the information editing information is determined according to the material information corresponding to the sub-video material, the information editing information corresponding to the sub-video material can be searched based on the preset mapping relationship between the material information and the information editing information after the material information corresponding to the sub-video material is determined.
For example, if the material type of the sub-video material is a movie or drama type, the information editing information corresponding to the sub-video material may be content description information of the video material; if the material type of the sub-video material is a variety type, the corresponding information editing information may be bullet-screen information.
When determining the information editing information of the sub-video material according to different user attribute information, mappings between different pieces of information editing information and user attribute information can be established in advance; based on these mappings, the information editing information corresponding to given user attribute information can be looked up and used as the information editing information of the sub-video material.
For example, if the user attribute information is female, aged 20 to 30, the corresponding information editing information may include personalized stickers aimed at women, such as cosmetics, bags, and the like.
Here, it should be noted that the same sub-video material may correspond to multiple pieces of information editing information, and different push videos for that sub-video material may be generated based on the different information editing information; conversely, different sub-video materials may also correspond to the same information editing information.
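Both lookups reduce to preset correspondence tables. A minimal sketch follows, with purely illustrative table contents drawn from the examples above; the keys, values and function name are all assumptions:

```python
# Preset correspondence: material information -> information editing info.
MATERIAL_TYPE_TO_EDIT_INFO = {
    "movie": ["content_description"],  # movie/drama -> description info
    "variety": ["bullet_screen"],      # variety show -> barrage info
}

# Preset correspondence: user attribute information -> editing info.
USER_ATTRS_TO_EDIT_INFO = {
    ("female", "20-30"): ["cosmetics_sticker", "bag_sticker"],
}

def edit_info_for(material_type=None, user_attrs=None):
    """Information editing information for one sub-video material,
    determined from material info and/or user attribute info."""
    info = []
    if material_type is not None:
        info += MATERIAL_TYPE_TO_EDIT_INFO.get(material_type, [])
    if user_attrs is not None:
        info += USER_ATTRS_TO_EDIT_INFO.get(tuple(user_attrs), [])
    return info
```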
In one possible implementation, the information editing information corresponding to the sub-video material may be input by the user; after the user inputs the information editing information, the sub-video material may be processed automatically according to it.
With respect to step 104:
in a possible implementation manner, when the pushed video content is generated based on the sub-video material and the corresponding information editing information, the display template corresponding to the sub-video material may be obtained first, then the display position information of the sub-video material and the information editing information in the display template is respectively determined according to the display template, and then the sub-video material and the information editing information are added to the display template according to the determined display position information to generate the pushed video content.
The obtaining of the display template corresponding to the sub-video material may be obtaining the display template corresponding to a template selection instruction input by a user after receiving the template selection instruction.
When the display position information of the sub-video material and the information editing information in the display template is determined according to the display template, the size information of a first display area corresponding to the sub-video material and the size information of a second display area corresponding to the information editing information may first be determined according to the sub-video material and the corresponding information editing information; the display position information of the sub-video material and the information editing information in the display template is then determined according to the size information of the first display area and the size information of the second display area.
Different information editing information requires display areas of different sizes; for example, if the information editing information is content description information of the video material, then the more description content there is, the larger the display area required.
The size of the display area required may also differ across sub-video materials; for example, a landscape (horizontally shot) sub-video material and a portrait (vertically shot) sub-video material may require display areas of different sizes.
The display template may be provided with a plurality of preset position areas for displaying video material or information editing information. When the display position information of the sub-video material and the information editing information in the display template is determined according to the size information of the first display area and the size information of the second display area, the display position of each display area in the template can be determined from the size information of the preset position areas together with the size information of the first and second display areas.
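As an illustration of this placement step, the sketch below assigns the first display area (sub-video material) and the second display area (information editing information) to the first preset position areas of the template large enough to hold them. The Region type and the fitting rule are assumptions; the disclosure only requires that the positions be derived from the size information:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

def place_in_template(preset_areas: list[Region],
                      video_size: tuple[int, int],
                      edit_info_size: tuple[int, int]
                      ) -> tuple[Optional[Region], Optional[Region]]:
    """Pick preset position areas for the sub-video material and the
    information editing information based on their size information."""
    def fits(area: Region, size: tuple[int, int]) -> bool:
        return area.w >= size[0] and area.h >= size[1]

    video_area = next((a for a in preset_areas if fits(a, video_size)), None)
    remaining = [a for a in preset_areas if a is not video_area]
    edit_area = next((a for a in remaining if fits(a, edit_info_size)), None)
    return video_area, edit_area
```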
In another possible implementation manner, the display position information of the sub-video material and the information editing information in the display template may also be manually input, and the display position of the sub-video material and the information editing information displayed in the display template may be adjusted based on a manually input display position adjustment instruction.
In another possible implementation manner, each information editing information and the sub-video material may have a default display position in the display template, and when the push video content is generated, the push video content may be directly displayed at the default display position in a corresponding manner.
In another possible implementation manner, when obtaining the display template corresponding to the sub-video material, a display template matched with the material information corresponding to the sub-video material may be obtained. Specifically, correspondences between different material information and different display templates may be preset; after the material information corresponding to the sub-video material is determined, the display template corresponding to that material information can be obtained according to the correspondences.
Here, the display positions of different types of information editing information in the display template may be preset. Alternatively, where different sub-video materials with the same material information correspond to the same information editing information, both the information editing information displayed at each position in the template and the display positions of the sub-video materials may be preset, and when the push video content is generated, the sub-video material is added directly at the corresponding preset display position in the template.
In one possible implementation, when generating the pushed video content based on the sub-video materials and the corresponding information editing information, the attribute information of the sub-video materials may be determined, then the target sub-video material may be screened from at least one sub-video material based on the attribute information of the sub-video materials, and then the pushed video content may be generated based on the target sub-video material and the information editing information corresponding to the target sub-video material.
Wherein the attribute information of the sub-video material may include at least one of the following information:
the playing time, the watching times and the number of barrages.
Here, the play time length refers to the total time length for which the sub-video material has been played across a plurality of users.
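A minimal sketch of the screening step, with illustrative thresholds; the disclosure names the attributes but not the screening rule, so the rule and all field names below are assumptions:

```python
def screen_target_materials(materials, min_play_s=60.0,
                            min_views=100, min_danmaku=20):
    """Screen target sub-video materials by attribute information:
    total play duration, number of views, and number of bullet-screen
    comments. Each material is assumed to be a dict such as
    {"id": "...", "play_seconds": 120.0, "views": 340, "danmaku": 55}.
    """
    return [m for m in materials
            if m["play_seconds"] >= min_play_s
            and m["views"] >= min_views
            and m["danmaku"] >= min_danmaku]
```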
After the pushed video content corresponding to the video material is generated, it may be pushed to each user terminal directly, or pushed to each user terminal through a server, to be displayed on each user terminal.
Based on the same concept, an embodiment of the present disclosure further provides a video display method, which is shown in fig. 3, and is a flow diagram of the video display method provided by the present disclosure, including the following steps:
step 301, obtaining a push video content, where the push video content includes a sub video material and corresponding information editing information.
Step 302, playing the pushed video content in a page, and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information.
Step 303, in response to the trigger operation for the pushed video content, skipping from the page to an information page corresponding to the pushed video content.
Wherein the trigger operation may be any one of the following operations:
single click, double click, long press, and heavy press.
The edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
The display time may refer to the time at which display starts and the time at which it ends, or to the display duration. The display position may be a position within the pushed video content, or a position within the page; each piece of information editing information may have its own display position, and the display positions of different information editing information may differ. The display form may be static display, dynamic display, and the like. The display effect refers to the effect of the superimposed display and may illustratively include an appearance effect, a transformation effect, a disappearance effect, and the like.
Displaying the information editing information superimposed on the pushed video content based on the corresponding editing attributes may, for example, mean displaying the information editing information at the display position, in the display form and with the display effect, throughout the display time.
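Read this way, the editing attributes form a small overlay schedule that the player consults on each frame. A minimal sketch, with all field names assumed:

```python
from dataclasses import dataclass

@dataclass
class EditAttribute:
    """Editing attribute of one piece of information editing information."""
    x: float                # display position, fraction of frame width
    y: float                # display position, fraction of frame height
    start_s: float          # display time: when the overlay appears
    end_s: float            # display time: when the overlay disappears
    form: str = "static"    # display form: "static" or "dynamic"
    effect: str = "fade"    # display effect for appearance/disappearance

def overlays_at(t: float, items):
    """Information editing information to superimpose at playback time t.

    `items` is a list of (text, EditAttribute) pairs; each entry is
    shown at its display position, in its display form and with its
    display effect, for the whole of its display time.
    """
    return [(text, attr) for text, attr in items
            if attr.start_s <= t < attr.end_s]
```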
In one possible application scenario, the pushed video content may originate from a target application, and the information page corresponding to the pushed video content may be a download page of the target application or a source page corresponding to the pushed video content. After a trigger operation on the pushed video content is detected, the page showing the pushed video content jumps to the download page of the target application.
Illustratively, a page presenting push video content may be as shown in fig. 4, including source information for video material, content description information for video material, and search information for indicating a search.
In practical applications, the information editing information may also be related to the target application software corresponding to the information editing information, for example, the information editing information may include a logo of the target application software.
According to the video display method and the video editing method described above, scene recognition can be performed on a video material, the video material can be divided into at least one sub-video material corresponding to different video scenes, and pushed video content can then be generated automatically based on the sub-video materials and the corresponding information editing information.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, a video display apparatus corresponding to the video display method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the video display method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, there is shown a schematic structural diagram of a video display apparatus according to an embodiment of the present disclosure. The apparatus includes: a first obtaining module 501, a display module 502 and a response module 503; wherein:
a first obtaining module 501, configured to obtain a pushed video content, where the pushed video content includes a sub-video material and corresponding information editing information;
a display module 502, configured to play the pushed video content in a page, and display the information editing information in an overlapping manner on the pushed video content based on an editing attribute corresponding to the information editing information;
a response module 503, configured to jump from the page to an information page corresponding to the pushed video content in response to a trigger operation for the pushed video content.
In one possible embodiment, the information editing information includes at least one of the following information:
barrage information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, the information page corresponding to the pushed video content includes: a source page corresponding to the pushed video content, or a download page of a target application program corresponding to the pushed video content.
Based on the same inventive concept, a video editing apparatus corresponding to the video editing method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the video editing method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 6, a schematic diagram of an architecture of a video editing apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: a second obtaining module 601, a determining module 602, a third obtaining module 603, and a generating module 604; wherein:
a second obtaining module 601, configured to obtain a video material;
a determining module 602, configured to perform scene identification on an obtained video material, and split the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
a third obtaining module 603, configured to obtain information editing information of the sub-video material;
a generating module 604, configured to generate push video content based on the sub-video material and the corresponding information editing information.
In a possible implementation manner, the third obtaining module 603, when obtaining the information editing information of the sub-video material, is configured to:
determining information editing information of the sub-video materials according to material information corresponding to the sub-video materials and/or different user attribute information; the material information comprises at least one of a material type, a scene type and target object information in the material.
In a possible implementation manner, the third obtaining module 603, when determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the different user attribute information, is configured to:
determining the information editing information of the sub-video material according to a correspondence between the information editing information and the material information corresponding to the sub-video material and/or the different user attribute information.
In a possible implementation manner, the determining module 602, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
sampling the video material to obtain a plurality of sampled video frames;
for each sampled video frame, determining color information of each pixel point in the sampled video frame;
calculating an average of the color information in the sampled video frame to obtain a color mean;
and determining a segmentation time point of the video material based on the color mean of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible embodiment, the color information of the pixel point includes first color information and/or second color information;
the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively; the second color information comprises hue, saturation and brightness.
In one possible implementation, the determining module 602, when determining the segmentation time point of the video material based on the color mean of each sampled video frame, is configured to:
determining a segmentation video frame based on the difference between the color means of adjacent sampled video frames;
and taking the time point of the segmentation video frame within the video material as the segmentation time point of the video material.
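Illustratively, the sampling, per-frame color-mean computation, and adjacent-difference segmentation described above could be sketched as follows in Python with OpenCV. This is a minimal illustration under assumed parameter values (sampling rate, distance threshold), not the disclosed implementation.

```python
import cv2
import numpy as np

def split_points(path: str, sample_fps: float = 1.0, threshold: float = 30.0):
    """Return candidate segmentation time points (seconds) of a video material."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(round(fps / sample_fps)), 1)  # sample one frame every `step` frames
    means, times = [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            # First color information: per-channel (B, G, R) mean over all pixels.
            # For the second color information (hue, saturation, brightness) one
            # could first convert with cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).
            means.append(frame.reshape(-1, 3).mean(axis=0))
            times.append(idx / fps)
        idx += 1
    cap.release()
    # A large jump between color means of adjacent sampled frames marks a cut.
    return [times[i] for i in range(1, len(means))
            if np.linalg.norm(means[i] - means[i - 1]) > threshold]
```

The video material can then be segmented at the returned time points to obtain the sub-video materials.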
In a possible implementation manner, the determining module 602, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
acquiring interaction information for the video material;
determining at least one sub-video material from the video materials based on the interaction information.
In a possible implementation, the determining module 602, when determining at least one sub-video material from the video materials based on the interaction information, is configured to:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video materials based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video materials based on an interaction timestamp of the target interaction information.
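Illustratively, splitting by interaction information might look like the Python sketch below, which expands interaction timestamps (optionally filtered by a target keyword) into merged sub-clip windows; the window lengths, keyword handling and data layout are assumptions for illustration only.

```python
from typing import List, Tuple

def windows_from_interactions(timestamps: List[float],
                              texts: List[str],
                              keyword: str = "",
                              pre_s: float = 5.0,
                              post_s: float = 10.0) -> List[Tuple[float, float]]:
    # Keep only interactions whose text contains the preset target keyword;
    # with an empty keyword, every interaction timestamp qualifies.
    hits = [t for t, txt in zip(timestamps, texts) if keyword in txt]
    # Expand each hit into a [t - pre_s, t + post_s] candidate window.
    windows = sorted((max(t - pre_s, 0.0), t + post_s) for t in hits)
    merged: List[Tuple[float, float]] = []
    for start, end in windows:
        if merged and start <= merged[-1][1]:
            # Overlapping windows are merged into one sub-video material.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```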
In a possible implementation manner, the determining module 602, when performing scene recognition on the obtained video material and splitting the video material into at least one sub-video material, is configured to:
acquiring interaction data respectively corresponding to the video material at a plurality of playback progress points;
determining at least one pair of target timestamps based on the interaction data respectively corresponding to the plurality of playback progress points;
and splitting the video material according to the at least one pair of target timestamps to obtain at least one sub-video material.
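Illustratively, one way to realize this (a sketch under assumed bucket length and threshold values, not the disclosed implementation) is to treat the interaction data as a per-progress histogram and emit one (start, end) timestamp pair per contiguous run of high-interaction buckets:

```python
from typing import List, Tuple

def target_timestamp_pairs(counts: List[int],
                           bucket_s: float = 10.0,
                           threshold: int = 50) -> List[Tuple[float, float]]:
    """counts[i] = interaction volume in the i-th playback-progress bucket."""
    pairs, start = [], None
    for i, c in enumerate(counts):
        if c >= threshold and start is None:
            start = i * bucket_s                  # a run of hot buckets begins
        elif c < threshold and start is not None:
            pairs.append((start, i * bucket_s))   # run ends -> one (start, end)
            start = None
    if start is not None:
        pairs.append((start, len(counts) * bucket_s))
    return pairs
```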
In one possible implementation, the generating module 604, when generating the push video content based on the sub video material and the corresponding information editing information, is configured to:
acquiring a display template corresponding to the sub-video material;
respectively determining the display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video material and the information editing information in the display template according to the determined display position information to generate the push video content.
In one possible implementation, the generating module 604, when obtaining the presentation template corresponding to the sub-video material, is configured to:
in response to a template selection instruction, acquiring a display template corresponding to the template selection instruction; or,
acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, the generating module 604, when determining the display position information of the sub video material and the information editing information in the display template according to the display template, is configured to:
according to the sub-video material and the corresponding information editing information, determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
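Illustratively, determining display positions from the two areas' sizes could look like the following Python sketch; the simple vertical layout and the Box/layout names are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

def layout(template_w: int, template_h: int,
           video_size: Tuple[int, int],
           info_size: Tuple[int, int]) -> Tuple[Box, Box]:
    video_w, video_h = video_size
    info_w, info_h = info_size
    assert video_h + info_h <= template_h, "the two areas must fit the template"
    # First display area: the sub-video material, centered near the top.
    video_box = Box((template_w - video_w) // 2, 0, video_w, video_h)
    # Second display area: the information editing information, directly below.
    info_box = Box((template_w - info_w) // 2, video_h, info_w, info_h)
    return video_box, info_box
```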
In one possible implementation, the generating module 604, when generating the push video content based on the sub video material and the corresponding information editing information, is configured to:
determining attribute information of the sub-video materials;
screening out a target sub-video material from the at least one sub-video material based on the attribute information of the sub-video material;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
In one possible embodiment, the attribute information of the sub-video material includes at least one of the following information:
playing duration, number of views, and number of barrages.
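Illustratively, the screening of target sub-video materials by these attributes might be sketched as a weighted score; the weights and field names below are assumptions, not values from this disclosure.

```python
from typing import Dict, List

def pick_target_materials(materials: List[Dict], top_k: int = 1) -> List[Dict]:
    def score(m: Dict) -> float:
        # Linear weighting of playing duration, view count and barrage count;
        # the weights are purely illustrative.
        return (0.2 * m.get("play_seconds", 0.0)
                + 0.5 * m.get("views", 0)
                + 0.3 * m.get("barrages", 0))
    # Keep the highest-scoring sub-video materials as targets.
    return sorted(materials, key=score, reverse=True)[:top_k]
```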
Based on the above apparatus, scene identification can be performed on a video material, the video material can be divided into at least one sub-video material corresponding to different video scenes, and push video content can then be generated automatically from the sub-video materials and the corresponding information editing information. On the one hand, this saves the labor cost of processing videos manually and improves the efficiency of video processing; on the other hand, since each sub-video material corresponding to a different video scene can express a relatively complete, independent plot, adding information editing information to each sub-video material turns the video material into creative material suited to different scenes, improving the display effect of the video material content and the corresponding information editing information.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 7, a schematic structural diagram of a computer device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes an internal memory 7021 and an external memory 7022. The internal memory 7021 temporarily stores operation data for the processor 701 and data exchanged with the external memory 7022, such as a hard disk; the processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the computer device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
acquiring pushed video content, wherein the pushed video content comprises at least one sub-video material and information editing information corresponding to the sub-video material;
playing the pushed video content in a page, and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information;
and responding to the trigger operation aiming at the pushed video content, and jumping from the page to an information page corresponding to the pushed video content.
In one possible embodiment, the information editing information includes at least one of the following information in the instructions executed by the processor 701:
barrage information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, in the instructions executed by the processor 701, the information page corresponding to the pushed video content includes: a source page corresponding to the pushed video content, or a download page of a target application program corresponding to the pushed video content.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 8, a schematic structural diagram of a computer device 800 provided in the embodiment of the present disclosure includes a processor 801, a memory 802, and a bus 803. The memory 802 is used for storing execution instructions and includes an internal memory 8021 and an external memory 8022. The internal memory 8021 temporarily stores operation data for the processor 801 and data exchanged with the external memory 8022, such as a hard disk; the processor 801 exchanges data with the external memory 8022 through the internal memory 8021. When the computer device 800 runs, the processor 801 communicates with the memory 802 through the bus 803, so that the processor 801 executes the following instructions:
acquiring a video material;
carrying out scene recognition on the obtained video material, and splitting the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
acquiring information editing information of the sub-video material;
and generating push video content based on the sub-video materials and the corresponding information editing information.
In one possible embodiment, the instructions executed by the processor 801 for obtaining the information editing information of the sub-video material include:
determining information editing information of the sub-video materials according to material information corresponding to the sub-video materials and/or different user attribute information; the material information comprises at least one of a material type, a scene type and target object information in the material.
In one possible embodiment, the instructions executed by the processor 801 for determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the different user attribute information include:
determining the information editing information of the sub-video material according to a correspondence between the information editing information and the material information corresponding to the sub-video material and/or the different user attribute information.
In a possible implementation, the instructions executed by the processor 801 for performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material include:
sampling the video material to obtain a plurality of sampled video frames;
for each sampled video frame, determining color information of each pixel point in the sampled video frame;
calculating an average of the color information in the sampled video frame to obtain a color mean;
and determining a segmentation time point of the video material based on the color mean of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible implementation manner, in the instructions executed by the processor 801, the color information of the pixel point includes first color information and/or second color information;
the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively; the second color information comprises hue, saturation and brightness.
In one possible embodiment, the instructions executed by the processor 801 for determining the segmentation time point of the video material based on the color mean of each sampled video frame include:
determining a segmentation video frame based on the difference between the color means of adjacent sampled video frames;
and taking the time point of the segmentation video frame within the video material as the segmentation time point of the video material.
In a possible implementation, the instructions executed by the processor 801 for performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material include:
acquiring interaction information for the video material;
determining at least one sub-video material from the video materials based on the interaction information.
In one possible implementation, the processor 801 executing the instructions for determining at least one sub-video material from the video materials based on the interaction information includes:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video materials based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video materials based on an interaction timestamp of the target interaction information.
In a possible implementation, the instructions executed by the processor 801 for performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material include:
acquiring interaction data respectively corresponding to the video material at a plurality of playback progress points;
determining at least one pair of target timestamps based on the interaction data respectively corresponding to the plurality of playback progress points;
and splitting the video material according to the at least one pair of target timestamps to obtain at least one sub-video material.
In one possible implementation, the instructions executed by the processor 801 for generating the push video content based on the sub-video materials and the corresponding information editing information include:
acquiring a display template corresponding to the sub-video material;
respectively determining the display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video material and the information editing information in the display template according to the determined display position information to generate the push video content.
In one possible implementation, the instructions executed by the processor 801 for obtaining the presentation template corresponding to the sub-video material include:
in response to a template selection instruction, acquiring a display template corresponding to the template selection instruction; or,
acquiring a display template matched with the material information corresponding to the sub-video material.
In one possible embodiment, the instructions executed by the processor 801 for determining the display position information of the sub-video material and the information editing information in the display template according to the display template respectively include:
according to the sub-video material and the corresponding information editing information, determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
In one possible implementation, the processor 801 executes instructions for generating push video content based on the sub-video material and the corresponding information editing information, including:
determining attribute information of the sub-video materials;
screening out a target sub-video material from the at least one sub-video material based on the attribute information of the sub-video material;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
In one possible implementation, the processor 801 executes instructions in which the attribute information of the sub-video material includes at least one of the following information:
playing duration, number of views, and number of barrages.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the video display method and the video editing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the video display method and the video editing method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the video display method and the video editing method described in the above method embodiments, as detailed in those embodiments, and are not described herein again.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments or readily conceive of changes to them, or make equivalent replacements of some of their technical features; such modifications, changes or replacements do not depart from the spirit and scope of the embodiments of the present disclosure and should be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (21)

1. A method for video presentation, comprising:
acquiring pushed video content, wherein the pushed video content comprises at least one sub-video material and information editing information corresponding to the sub-video material;
playing the pushed video content in a page, and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information;
and responding to the trigger operation aiming at the pushed video content, and jumping from the page to an information page corresponding to the pushed video content.
2. The method of claim 1, wherein the information editing information comprises at least one of the following information:
barrage information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the edit attribute includes at least one of the following information:
display position, display time, display form and display effect.
3. The method of claim 1, wherein the information page corresponding to the pushed video content comprises: a source page corresponding to the pushed video content, or a download page of a target application program corresponding to the pushed video content.
4. A video editing method, comprising:
acquiring a video material;
carrying out scene recognition on the obtained video material, and splitting the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
acquiring information editing information of the sub-video material;
and generating push video content based on the sub-video materials and the corresponding information editing information.
5. The method of claim 4, wherein said obtaining information editing information of said sub-video material comprises:
determining information editing information of the sub-video materials according to material information corresponding to the sub-video materials and/or different user attribute information; the material information comprises at least one of a material type, a scene type and target object information in the material.
6. The method according to claim 5, wherein determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the different user attribute information comprises:
determining the information editing information of the sub-video material according to a correspondence between the information editing information and the material information corresponding to the sub-video material and/or the different user attribute information.
7. The method of claim 4, wherein the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material comprises:
sampling the video material to obtain a plurality of sampled video frames;
for each sampled video frame, determining color information of each pixel point in the sampled video frame;
calculating an average of the color information in the sampled video frame to obtain a color mean;
and determining a segmentation time point of the video material based on the color mean of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
8. The method according to claim 7, wherein the color information of the pixel point comprises first color information and/or second color information;
the first color information comprises values of the pixel points on a red channel, a green channel and a blue channel respectively; the second color information comprises hue, saturation and brightness.
9. The method of claim 7, wherein determining the segmentation time point of the video material based on the color mean of each sampled video frame comprises:
determining a segmentation video frame based on the difference between the color means of adjacent sampled video frames;
and taking the time point of the segmentation video frame within the video material as the segmentation time point of the video material.
10. The method of claim 4, wherein the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material comprises:
acquiring interaction information for the video material;
determining at least one sub-video material from the video materials based on the interaction information.
11. The method of claim 10, wherein determining at least one sub-video material from the video materials based on the interaction information comprises:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video materials based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video materials based on an interaction timestamp of the target interaction information.
12. The method of claim 4, wherein the performing scene recognition on the obtained video material, and splitting the video material into at least one sub-video material comprises:
acquiring interaction data respectively corresponding to the video material at a plurality of playback progress points;
determining at least one pair of target timestamps based on the interaction data respectively corresponding to the plurality of playback progress points;
and splitting the video material according to the at least one pair of target timestamps to obtain at least one sub-video material.
13. The method of claim 4, wherein generating the push video content based on the sub-video material and the corresponding information editing information comprises:
acquiring a display template corresponding to the sub-video material;
respectively determining the display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video material and the information editing information in the display template according to the determined display position information to generate the push video content.
14. The method of claim 13, wherein obtaining a presentation template corresponding to the sub-video material comprises:
in response to a template selection instruction, acquiring a display template corresponding to the template selection instruction; or,
acquiring a display template matched with the material information corresponding to the sub-video material.
15. The method of claim 13, wherein determining the display position information of the sub-video material and the information editing information in the display template according to the display template comprises:
according to the sub-video material and the corresponding information editing information, determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
16. The method of claim 4, wherein generating push video content based on the sub-video material and the corresponding information editing information comprises:
determining attribute information of the sub-video materials;
screening out a target sub-video material from the at least one sub-video material based on the attribute information of the sub-video material;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
17. The method of claim 16, wherein the attribute information of the sub-video material comprises at least one of the following information:
playing duration, number of views, and number of barrages.
18. A video presentation apparatus, comprising:
the first acquisition module is used for acquiring pushed video content, and the pushed video content comprises sub video materials and corresponding information editing information;
the display module is used for playing the pushed video content in a page and displaying the information editing information on the pushed video content in an overlapping manner based on the editing attribute corresponding to the information editing information;
and the response module is used for responding to the triggering operation aiming at the pushed video content and jumping from the page to an information page corresponding to the pushed video content.
19. A video editing apparatus, comprising:
the second acquisition module is used for acquiring the video material;
the determining module is used for carrying out scene identification on the obtained video material and splitting the video material into at least one sub-video material; the sub-video materials and the video scenes have corresponding relations;
the third acquisition module is used for acquiring the information editing information of the sub-video materials;
and the generating module is used for generating push video content based on the sub-video material and the corresponding information editing information.
20. A computer device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when a computer device is running, the machine readable instructions when executed by the processor performing the steps of the video presentation method according to any one of claims 1 to 3 or the steps of the video editing method according to any one of claims 4 to 17.
21. A computer-readable storage medium, having stored thereon a computer program for performing the steps of the video presentation method according to any one of claims 1 to 3 or the steps of the video editing method according to any one of claims 4 to 17 when executed by a processor.
CN202110807225.8A 2021-07-16 2021-07-16 Video display method, video editing method and device Active CN113542818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110807225.8A CN113542818B (en) 2021-07-16 2021-07-16 Video display method, video editing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110807225.8A CN113542818B (en) 2021-07-16 2021-07-16 Video display method, video editing method and device

Publications (2)

Publication Number Publication Date
CN113542818A true CN113542818A (en) 2021-10-22
CN113542818B CN113542818B (en) 2023-04-25

Family

ID=78099813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110807225.8A Active CN113542818B (en) 2021-07-16 2021-07-16 Video display method, video editing method and device

Country Status (1)

Country Link
CN (1) CN113542818B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100293190A1 (en) * 2009-05-13 2010-11-18 Kaiser David H Playing and editing linked and annotated audiovisual works
CN103763626A (en) * 2013-12-19 2014-04-30 华为软件技术有限公司 Method, device and system for pushing information
CN105448214A (en) * 2015-09-15 2016-03-30 北京合盒互动科技有限公司 Advertisement display method and device of controllable electronic screen
JP2019047391A (en) * 2017-09-05 2019-03-22 株式会社Jvcケンウッド Device, method and program for distributing content information with caption
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 A kind of video clipping method and electronic equipment
CN109951741A (en) * 2017-12-21 2019-06-28 阿里巴巴集团控股有限公司 Data object information methods of exhibiting, device and electronic equipment
CN110147711A (en) * 2019-02-27 2019-08-20 腾讯科技(深圳)有限公司 Video scene recognition methods, device, storage medium and electronic device
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template
US20210073551A1 (en) * 2019-09-10 2021-03-11 Ruiwen Li Method and system for video segmentation
CN111177470A (en) * 2019-12-30 2020-05-19 深圳Tcl新技术有限公司 Video processing method, video searching method and terminal equipment
CN112261472A (en) * 2020-10-19 2021-01-22 上海博泰悦臻电子设备制造有限公司 Short video generation method and related equipment
CN112689189A (en) * 2020-12-21 2021-04-20 北京字节跳动网络技术有限公司 Video display and generation method and device

Also Published As

Publication number Publication date
CN113542818B (en) 2023-04-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.