CN113542818B - Video display method, video editing method and device - Google Patents

Video display method, video editing method and device

Info

Publication number
CN113542818B
CN113542818B · Application CN202110807225.8A
Authority
CN
China
Prior art keywords
information
video
sub
video material
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110807225.8A
Other languages
Chinese (zh)
Other versions
CN113542818A (en)
Inventor
宋旸
白刚
黄鑫
徐祯辉
肖洋
邢欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202110807225.8A priority Critical patent/CN113542818B/en
Publication of CN113542818A publication Critical patent/CN113542818A/en
Application granted granted Critical
Publication of CN113542818B publication Critical patent/CN113542818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The disclosure provides a video display method, a video editing method, and a device, comprising: acquiring video material; performing scene recognition on the acquired video material and splitting it into at least one sub-video material, wherein each sub-video material corresponds to a video scene; acquiring information editing information for the sub-video material; and generating push video content based on the sub-video material and the corresponding information editing information.

Description

Video display method, video editing method and device
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a video display method, a video editing method and a video editing device.
Background
In the related art, one pushing manner is to push information on a per-video basis. This generally involves professional staff shooting videos, adding information to the shot videos, and sending the videos with the added information to each user side to push the information.
However, in the related art, shooting a video and adding information to it are generally done manually, and this manner of processing video is inefficient.
Disclosure of Invention
The embodiment of the disclosure at least provides a video display method, a video editing method and a video editing device.
In a first aspect, an embodiment of the present disclosure provides a video display method, including:
obtaining push video content, wherein the push video content comprises at least one sub video material and information editing information corresponding to the sub video material;
playing the push video content in a page, and superposing and displaying the information editing information on the push video content based on the editing attribute corresponding to the information editing information;
and in response to a triggering operation on the push video content, jumping from the page to an information page corresponding to the push video content.
In a possible embodiment, the information editing information includes at least one of the following information:
bullet-screen information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the editing attribute includes at least one of the following information:
display position, display time, display form and display effect.
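As an illustrative aside (not part of the disclosure), the editing attributes listed above could be modeled as a small data structure together with a visibility rule; the field names, units, and the half-open display window below are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class EditingAttributes:
    """Hypothetical container for the editing attributes named in the claim."""
    position: tuple      # display position, e.g. (x, y) in player pixels
    start_s: float       # display time: overlay appears at this playback second
    end_s: float         # ...and disappears at this second
    form: str            # display form, e.g. "bullet_screen" or "caption"
    effect: str          # display effect, e.g. "scroll" or "fade"

def is_visible(attrs: EditingAttributes, playback_s: float) -> bool:
    """An overlay is drawn only while playback is inside its display window."""
    return attrs.start_s <= playback_s < attrs.end_s

bullet = EditingAttributes(position=(0, 40), start_s=2.0, end_s=6.0,
                           form="bullet_screen", effect="scroll")
print(is_visible(bullet, 3.5))  # True: 3.5 s falls inside [2.0, 6.0)
print(is_visible(bullet, 8.0))  # False: past the display window
```

A player would evaluate `is_visible` per rendered frame and draw the overlay at `position` with the named form and effect.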
In a possible implementation manner, the information page corresponding to the push video content includes: the source page corresponding to the push video content, or the download page of the target application program corresponding to the push video content.
In a second aspect, an embodiment of the present disclosure provides a video editing method, including:
acquiring video materials;
performing scene recognition on the acquired video material and splitting it into at least one sub-video material, wherein each sub-video material corresponds to a video scene;
acquiring information editing information of the sub video material;
and generating push video content based on the sub video material and the corresponding information editing information.
In a possible implementation manner, the obtaining information editing information of the sub-video material includes:
determining information editing information of the sub-video materials according to the material information corresponding to the sub-video materials and/or different user attribute information; wherein the material information includes at least one of a material type, a scene type, and target object information in the material.
In a possible implementation manner, the determining information editing information of the sub video material according to material information corresponding to the sub video material and/or different user attribute information includes:
and determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the corresponding relation between different user attribute information and the information editing information.
In a possible implementation manner, the scene recognition is performed on the acquired video material, and the video material is split into at least one sub-video material, which includes:
sampling the video material to obtain a plurality of sampled video frames;
for each sampling video frame, determining color information of each pixel point in the sampling video frame;
calculating the average value of the color information in the sampling video frame to obtain a color average value;
and determining a segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible implementation manner, the color information of the pixel point includes first color information and/or second color information;
wherein the first color information comprises the values of the pixel point on the red, green, and blue channels, and the second color information comprises hue, saturation, and brightness.
In a possible implementation manner, determining a slicing time point of the video material based on a color average value of each sampled video frame includes:
determining a sliced video frame based on a difference value between color means of adjacent sampled video frames;
and taking the time point in the video material corresponding to the sliced video frame as the slicing time point of the video material.
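The steps above can be sketched in a minimal form. The per-channel averaging, the summed-difference comparison, and the threshold value are illustrative assumptions, not values from the disclosure:

```python
# Minimal sketch of color-mean based splitting: average the color information
# of each sampled frame, then place a slicing time point wherever adjacent
# frames' means differ by more than a threshold.

def frame_color_mean(frame):
    """Average the color information over all pixels of one sampled frame.
    Here each pixel is an (R, G, B) tuple; the mean is taken per channel."""
    n = len(frame)
    return tuple(sum(px[c] for px in frame) / n for c in range(3))

def split_points(sampled_frames, timestamps, threshold=60.0):
    """Mark a slicing time point wherever adjacent sampled frames'
    color means differ by more than `threshold` (summed over channels)."""
    means = [frame_color_mean(f) for f in sampled_frames]
    cuts = []
    for i in range(1, len(means)):
        diff = sum(abs(a - b) for a, b in zip(means[i - 1], means[i]))
        if diff > threshold:
            cuts.append(timestamps[i])  # time point of the sliced video frame
    return cuts

# Two dark "frames" followed by two bright ones -> one cut at the transition.
dark = [(10, 10, 10)] * 4
bright = [(200, 200, 200)] * 4
frames = [dark, dark, bright, bright]
print(split_points(frames, timestamps=[0.0, 1.0, 2.0, 3.0]))  # [2.0]
```

The video material would then be cut at each returned time point to yield the sub-video materials.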
In a possible implementation manner, the scene recognition is performed on the acquired video material, and the video material is split into at least one sub-video material, which includes:
acquiring interaction information for the video material;
at least one sub-video material is determined from the video material based on the interaction information.
In a possible implementation manner, the determining at least one sub-video material from the video materials based on the interaction information includes:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction time stamp; and/or,
detecting target interaction information containing preset target keywords, and determining at least one sub-video material from the video material based on the interaction time stamp of the target interaction information.
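A hedged sketch of the keyword-based variant: take a fixed-length clip around each interaction (e.g. bullet-screen comment) whose text contains a target keyword. The window size and keyword list are assumptions, not values from the disclosure:

```python
# Derive sub-video segments from interaction time stamps: for each comment
# containing a preset target keyword, take a clip of `window_s` seconds
# centred on the comment's interaction time stamp.

def segments_from_interactions(comments, keywords=("highlight",), window_s=5.0):
    """comments: list of (timestamp_seconds, text) pairs.
    Returns (start, end) pairs, clamped so a clip never starts before 0."""
    segments = []
    for ts, text in comments:
        if any(k in text for k in keywords):
            segments.append((max(0.0, ts - window_s / 2), ts + window_s / 2))
    return segments

comments = [(12.0, "great highlight here"), (30.0, "nice song")]
print(segments_from_interactions(comments))  # [(9.5, 14.5)]
```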
In a possible implementation manner, the scene recognition is performed on the acquired video material, and the video material is split into at least one sub-video material, which includes:
acquiring interaction data corresponding to the video material at each of a plurality of playing progress points;
determining at least one pair of target time stamps based on the interaction data corresponding to the plurality of playing progress points;
and splitting the video material according to the at least one pair of target time stamps to obtain at least one sub-video material.
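One way to read "pairs of target time stamps" is as the boundaries of runs of high interaction density. The sketch below is an assumption about how such pairs might be derived; the counts and threshold are illustrative:

```python
# Playing-progress points whose interaction count exceeds a threshold form
# runs, and each consecutive run yields one (start, end) pair of target
# time stamps bounding a candidate sub-video material.

def target_timestamp_pairs(progress_counts, threshold=10):
    """progress_counts: list of (progress_second, interaction_count),
    ordered by progress. Returns (start, end) pairs for dense runs."""
    pairs, start = [], None
    for second, count in progress_counts:
        if count >= threshold and start is None:
            start = second                    # run of dense interaction begins
        elif count < threshold and start is not None:
            pairs.append((start, second))     # run ends; emit one pair
            start = None
    if start is not None:                     # run still open at the end
        pairs.append((start, progress_counts[-1][0]))
    return pairs

data = [(0, 2), (10, 15), (20, 20), (30, 3), (40, 12), (50, 1)]
print(target_timestamp_pairs(data))  # [(10, 30), (40, 50)]
```

Splitting the video material at each returned pair then yields the sub-video materials.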
In a possible implementation manner, the generating the push video content based on the sub video material and the corresponding information editing information includes:
acquiring a display template corresponding to the sub video material;
respectively determining display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video materials and the information editing information into the display template according to the determined display position information, and generating the push video content.
In a possible implementation manner, the obtaining a presentation template corresponding to the sub-video material includes:
responding to a template selection instruction, and acquiring the display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, according to the display template, display position information of the sub-video material and the information editing information in the display template is respectively determined, and the method includes:
determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information, according to the sub-video material and the corresponding information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
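The layout step above can be illustrated with a deliberately simple sketch: a first display area for the sub-video material and a second display area for the information editing information, stacked inside the template. The template dimensions and the top/bottom arrangement are assumptions for illustration only:

```python
# Place the sub-video material at the top of the template (centred
# horizontally) and the information editing area directly below it,
# then check the two areas fit within the template height.

def layout(template_w, template_h, video_wh, info_h):
    """Return display positions for the two areas and a fit flag."""
    video_w, video_h = video_wh
    video_pos = ((template_w - video_w) // 2, 0)   # centred horizontally
    info_pos = (0, video_h)                        # directly under the video
    return {"video": video_pos, "info": info_pos,
            "fits": video_h + info_h <= template_h}

print(layout(1080, 1920, (1080, 1440), 300))
# {'video': (0, 0), 'info': (0, 1440), 'fits': True}
```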
In a possible implementation manner, the generating push video content based on the sub video material and the corresponding information editing information includes:
determining attribute information of the sub video material;
screening target sub-video materials from the at least one sub-video material based on the attribute information of the sub-video materials;
and generating the push video content based on the target sub-video material and information editing information corresponding to the target sub-video material.
In a possible implementation manner, the attribute information of the sub-video material includes at least one of the following information:
the playing duration, the number of views, and the number of bullet-screen comments.
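The screening step can be sketched as a simple filter over those attributes. The attribute names and cut-off values below are illustrative assumptions, not thresholds from the disclosure:

```python
# Screen target sub-video materials by their attribute information
# (playing duration, view count, bullet-screen comment count).

def screen_targets(materials, min_views=100, min_bullets=5):
    """Keep sub-video materials whose engagement attributes pass the cut-off."""
    return [m for m in materials
            if m["views"] >= min_views and m["bullets"] >= min_bullets]

materials = [
    {"id": "a", "duration_s": 12.0, "views": 450, "bullets": 30},
    {"id": "b", "duration_s": 8.0,  "views": 40,  "bullets": 2},
]
print([m["id"] for m in screen_targets(materials)])  # ['a']
```

Push video content would then be generated only from the surviving target sub-video materials and their information editing information.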
In a third aspect, embodiments of the present disclosure provide a video display apparatus, including:
the first acquisition module is used for acquiring push video content, wherein the push video content comprises sub video materials and corresponding information editing information;
the display module is used for playing the push video content in a page, and displaying the information editing information in a superposition manner on the push video content based on editing attributes corresponding to the information editing information;
and the response module is used for responding to the triggering operation for the push video content and jumping from the page to the information page corresponding to the push video content.
In a possible embodiment, the information editing information includes at least one of the following information:
bullet screen information, source information of sub video materials, content description information of the video materials and search information for indicating search;
the editing attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, the information page corresponding to the push video content includes: the source page corresponding to the push video content, or the download page of the target application program corresponding to the push video content.
In a fourth aspect, an embodiment of the present disclosure provides a video editing apparatus, including:
the second acquisition module is used for acquiring video materials;
the determining module is used for carrying out scene recognition on the obtained video material and splitting the video material into at least one sub-video material; wherein, the sub video material has a corresponding relation with the video scene;
the third acquisition module is used for acquiring information editing information of the sub-video materials;
and the generation module is used for generating push video content based on the sub video materials and the corresponding information editing information.
In a possible implementation manner, the third obtaining module is configured to, when obtaining information editing information of the sub video material:
determining information editing information of the sub-video materials according to the material information corresponding to the sub-video materials and/or different user attribute information; wherein the material information includes at least one of a material type, a scene type, and target object information in the material.
In a possible implementation manner, the third obtaining module is configured to, when determining information editing information of the sub-video material according to material information corresponding to the sub-video material and/or different user attribute information:
and determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the corresponding relation between different user attribute information and the information editing information.
In a possible implementation manner, the determining module is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
sampling the video material to obtain a plurality of sampled video frames;
for each sampling video frame, determining color information of each pixel point in the sampling video frame;
calculating the average value of the color information in the sampling video frame to obtain a color average value;
and determining a segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible implementation manner, the color information of the pixel point includes first color information and/or second color information;
wherein the first color information comprises the values of the pixel point on the red, green, and blue channels, and the second color information comprises hue, saturation, and brightness.
In a possible implementation manner, the determining module is configured to, when determining a slicing time point of the video material based on a color average value of each sampled video frame:
determining a sliced video frame based on a difference value between color means of adjacent sampled video frames;
and taking the time point in the video material corresponding to the sliced video frame as the slicing time point of the video material.
In a possible implementation manner, the determining module is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
acquiring interaction information for the video material;
at least one sub-video material is determined from the video material based on the interaction information.
In a possible implementation manner, the determining module is configured to, when determining at least one sub-video material from the video materials based on the interaction information:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction time stamp; and/or,
detecting target interaction information containing preset target keywords, and determining at least one sub-video material from the video material based on the interaction time stamp of the target interaction information.
In a possible implementation manner, the determining module is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
acquiring interaction data corresponding to the video material at each of a plurality of playing progress points;
determining at least one pair of target time stamps based on the interaction data corresponding to the plurality of playing progress points;
and splitting the video material according to the at least one pair of target time stamps to obtain at least one sub-video material.
In a possible implementation manner, the generating module is configured to, when generating the push video content based on the sub video material and the corresponding information editing information:
acquiring a display template corresponding to the sub video material;
respectively determining display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video materials and the information editing information into the display template according to the determined display position information, and generating the push video content.
In a possible implementation manner, the generating module is configured to, when acquiring a presentation template corresponding to the sub-video material:
responding to a template selection instruction, and acquiring the display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, the generating module is configured to, when determining, according to the display template, display position information of the sub-video material and the information editing information in the display template, respectively:
determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information, according to the sub-video material and the corresponding information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
In a possible implementation manner, the generating module is configured to, when generating push video content based on the sub video material and the corresponding information editing information:
determining attribute information of the sub video material;
screening target sub-video materials from the at least one sub-video material based on the attribute information of the sub-video materials;
and generating the push video content based on the target sub-video material and the information editing information corresponding to the target sub-video material.
In a possible implementation manner, the attribute information of the sub-video material includes at least one of the following information:
the playing duration, the number of views, and the number of bullet-screen comments.
In a fifth aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the computer device is running; the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any of its possible implementations, or the steps of the second aspect.
In a sixth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect or any of its possible implementations, or the steps of the second aspect.
According to the video display method, video editing method, and device of the present disclosure, scene recognition can be performed on video material to split it into at least one sub-video material corresponding to different video scenes, and push video content is then automatically generated based on the sub-video materials and the corresponding information editing information. On one hand, this saves the labor cost of manually processing videos and improves video processing efficiency; on the other hand, each sub-video material corresponding to a different video scene can express a relatively complete, independent plot, so by adding information editing information to each sub-video material, the video material can be edited into creative material suitable for different scenes, improving the display effect of the video content and the corresponding information editing information.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; they are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of a video editing method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a sub-video material determination method provided by an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a video display method according to an embodiment of the disclosure;
FIG. 4 shows a schematic diagram of a page exhibiting push video content provided by an embodiment of the present disclosure;
FIG. 5 illustrates a schematic architecture of a video display device provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a video editing apparatus according to an embodiment of the disclosure;
FIG. 7 illustrates a schematic diagram of a computer device 700 provided by an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of a computer device 800 provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description is not intended to limit the scope of the disclosure as claimed, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of this disclosure.
In the related art, the editing of the video and the adding process of the information editing information are completed manually, and the processing mode of the video is low in efficiency.
Based on the above, the present disclosure provides a video display method, a video editing method, and a device that can perform scene recognition on video material, split it into at least one sub-video material corresponding to different video scenes, and then automatically generate push video content based on the sub-video materials and the corresponding information editing information.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, a video editing method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the video editing method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the video editing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a video editing method according to an embodiment of the disclosure is shown, where the method includes steps 101 to 104, where:
Step 101, acquiring a video material.
Step 102, performing scene recognition on the acquired video material, and splitting the video material into at least one sub-video material; the sub-video materials have a corresponding relation with the video scenes.
Step 103, acquiring information editing information of the sub-video material.
Step 104, generating push video content based on the sub-video materials and the corresponding information editing information.
The following is a detailed description of the above steps 101 to 104.
For step 101:
the acquisition of the video material may be acquiring video material input by a user, or acquiring video material stored in advance in a local database. In one possible implementation, the video material may also be obtained from a cloud server.
For step 102:
in a specific implementation, when performing scene recognition on an obtained video material and splitting the video material into at least one sub-video material, reference may be made to a method as shown in fig. 2, which includes the following steps:
step 201, sampling the video material to obtain a plurality of sampled video frames.
The video material may include a plurality of video frames. In order to improve processing efficiency, the video frames included in the video material may be sampled, for example at a preset time interval, to obtain a plurality of sampled video frames; the length of the preset time interval may be dynamically adjusted for different video materials.
Step 202, for each sampled video frame, determining color information of each pixel point in the sampled video frame.
The color information of each pixel point in the sampled video frame includes first color information and/or second color information; the first color information includes the values of the pixel point on the red, green and blue channels respectively, and the second color information includes hue, saturation and brightness.
For example, if the sampled video frame includes m×n pixel points and the color information of each pixel point includes its values on the red, green and blue channels, then the value on each of the three channels needs to be determined for every pixel point in the sampled video frame.
Step 203, calculating the average value of the color information in the sampled video frame to obtain a color average value.
The color information of each pixel point in a sampled video frame includes a plurality of values. When calculating the color mean, for each pixel point, the values of its color information may be averaged to obtain a pixel color mean for that pixel point; then, for each sampled video frame, the pixel color means of all pixel points in the frame may be averaged to obtain the color mean of that sampled video frame.
For example, if a sampled video frame includes 1024×1024 pixel points and the color information of each pixel point includes hue, saturation and brightness, then for each pixel point the hue, saturation and brightness are summed and divided by 3 to obtain the pixel color mean of that pixel point; the pixel color means of the 1024×1024 pixel points are then summed and divided by 1024×1024 to obtain the color mean of the sampled video frame.
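The two-level averaging described above can be sketched as follows (a minimal, hypothetical illustration in which a frame is represented as a nested list of (hue, saturation, brightness) triples; all names are assumptions, not the patent's implementation):

```python
def pixel_color_mean(pixel):
    # Average the components of one pixel's color information,
    # e.g. (hue, saturation, brightness) -> a single scalar.
    return sum(pixel) / len(pixel)

def frame_color_mean(frame):
    # Average the per-pixel means over every pixel point in the frame
    # to obtain the color mean of the sampled video frame.
    pixel_means = [pixel_color_mean(p) for row in frame for p in row]
    return sum(pixel_means) / len(pixel_means)

# A tiny 2x2 "frame" of (hue, saturation, brightness) triples.
frame = [[(30, 120, 200), (30, 118, 202)],
         [(32, 122, 198), (28, 120, 200)]]
print(round(frame_color_mean(frame), 2))  # -> 116.67
```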
Step 204, determining a segmentation time point of the video material based on the color mean value of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
The color means of sampled video frames belonging to the same scene are similar, so the sampled video frames of the same scene in the video material can be identified based on the color means of the sampled video frames.
In one possible implementation manner, when determining the slicing time point of the video material, the slicing video frame may be determined based on a difference value between color average values of adjacent sampled video frames, and then the corresponding time point of the slicing video frame in the video material is taken as the slicing time point of the video material.
In a specific implementation, if the difference between the color means of any two adjacent sampled video frames is greater than a preset difference, the frame of the two that appears earlier in the video material (that is, the one with the earlier corresponding time point) may be used as the slicing video frame.
For example, if the video frame a and the video frame B are two adjacent sampled video frames, the video frame a appears in the video material earlier than the video frame B, and if the difference value between the color average values of the video frame a and the video frame B is greater than the preset difference value, the corresponding time point of the video frame a in the video material may be taken as the slicing time point of the video material.
Here, it should be noted that the same video material may include at least one scene; for example, the video material may include only an office scene, or may include an office scene, a restaurant scene, an outdoor scene, and so on. Therefore, each video material has at least one corresponding slicing time point and at least one corresponding sub-video material.
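Slicing by the color-mean difference of adjacent sampled frames might look like the following sketch (the threshold value and sample data are hypothetical):

```python
def slicing_time_points(frame_means, frame_times, preset_difference):
    # Compare each adjacent pair of sampled frames; when the color-mean
    # difference exceeds the preset difference, the earlier frame's time
    # point becomes a slicing time point of the video material.
    points = []
    for i in range(len(frame_means) - 1):
        if abs(frame_means[i + 1] - frame_means[i]) > preset_difference:
            points.append(frame_times[i])
    return points

# Color means for frames sampled at t = 0..4 seconds; the jump between
# t = 1 and t = 2 marks a scene change, so the video is sliced at t = 1.
means = [110.0, 112.0, 200.0, 198.0, 201.0]
times = [0, 1, 2, 3, 4]
print(slicing_time_points(means, times, preset_difference=30))  # -> [1]
```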
In another possible implementation manner, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material, the following steps may be performed:
A. Acquiring interaction information for the video material.
B. Determining at least one sub-video material from the video material based on the interaction information.
Here, the interaction information may include, for example, at least one of bullet-screen comments, comments, likes, gifts, and the like. When determining at least one sub-video material from the video material based on the interaction information, any one or more of the following methods may be used:
Method B1: determining an interaction timestamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction timestamp of the interaction information.
Method B2: detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video material based on the interaction timestamp of the target interaction information.
Here, the interaction timestamp is the timestamp at which the server receives interaction information sent by users while the video material is playing; it may be a timestamp relative to the video material, for example, bullet-screen information received at the 17th second of playback of the video material.
In a possible implementation of method B1, when determining at least one sub-video material from the video material based on the interaction timestamps, the video material may first be divided into a plurality of play time intervals, for example one second per interval. Then, based on the interaction timestamps of the interaction information, heat information corresponding to each play time interval is determined, where the heat information represents the degree of attention users pay to the play content of that interval. Finally, the video corresponding to N consecutive play time intervals whose heat information meets a preset condition is taken as a sub-video material of the video material, where N is a positive integer.
Here, the heat information may simply be the number of pieces of interaction information. Alternatively, different heat values may be assigned to different interaction information, with interaction information containing a preset keyword given a higher heat value; the heat information of each play time interval is then obtained by summing the heat values of the interaction information falling within that interval.
In a possible implementation of method B2, determining the at least one sub-video material based on the interaction timestamps of the target interaction information may be understood as follows: based on those interaction timestamps, a time interval in which the target interaction information occurs more frequently than a preset frequency is determined, and the sub-video corresponding to that time interval in the video material is taken as a sub-video material.
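For method B1, selecting runs of consecutive "hot" play-time intervals can be sketched as follows (the heat values, threshold, and run length N are all hypothetical):

```python
def hot_segments(heat_per_interval, min_heat, n):
    # Return (start, end) indices of every run of at least n consecutive
    # play-time intervals whose heat meets the preset condition.
    segments, run_start = [], None
    # A below-threshold sentinel is appended so a trailing run is closed.
    for i, heat in enumerate(heat_per_interval + [min_heat - 1]):
        if heat >= min_heat:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= n:
                segments.append((run_start, i - 1))
            run_start = None
    return segments

# Heat (e.g. bullet-screen counts) per one-second play-time interval.
heat = [1, 5, 6, 7, 2, 8, 9, 1]
print(hot_segments(heat, min_heat=5, n=2))  # -> [(1, 3), (5, 6)]
```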
Alternatively, in another possible implementation, when performing scene recognition on the acquired video material and splitting it into at least one sub-video material, interaction data corresponding to the video material at a plurality of playing progresses may first be acquired. At least one pair of target timestamps is then determined based on that interaction data, and the video material is split according to the at least one pair of target timestamps to obtain at least one sub-video material.
Here, the interaction data corresponding to the video material at the plurality of playing progresses may be understood as the number of viewers at each playing progress. Determining at least one pair of target timestamps based on this interaction data may be understood as determining, from the viewer counts, the playing intervals in which the number of viewers is greater than a preset number; the timestamps bounding such a playing interval are the target timestamps. Splitting the video material according to the at least one pair of target timestamps may be understood as taking the sub-video between each pair of target timestamps in the video material as a sub-video material.
In one possible implementation, if the target timestamps corresponding to two sub-video materials are adjacent, the two sub-video materials may be combined into one sub-video material.
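The merging of sub-video materials with adjacent target timestamps can be sketched as follows (the timestamp pairs are hypothetical second indices):

```python
def merge_adjacent(timestamp_pairs):
    # Combine sub-video materials whose target timestamps are adjacent,
    # e.g. (3, 7) and (8, 12) become a single material (3, 12).
    merged = []
    for start, end in sorted(timestamp_pairs):
        if merged and start == merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], end)
        else:
            merged.append((start, end))
    return merged

print(merge_adjacent([(3, 7), (8, 12), (20, 25)]))  # -> [(3, 12), (20, 25)]
```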
For step 103:
wherein, the information editing information of the sub video material may include at least one of the following information:
bullet screen information, source information of sub-video materials, content description information of the video materials and search information for indicating search.
For each sub-video material, when the information editing information of the sub-video material is acquired, the information editing information corresponding to the sub-video material can be determined according to the material information corresponding to the sub-video material and/or different user attribute information.
The material information corresponding to the sub video material comprises at least one of a material type, a scene type and target object information in the material.
The material type is used for representing the attribute type of the sub video material, and can comprise film and television drama types, variety types, real person drama types, news types and the like; the scene type is used for representing scenes corresponding to the sub-video materials and can comprise restaurants, offices, parks, supermarkets and the like; the target object information in the sub-video material may include apparel information, furniture information, flower information, and the like.
The user attribute information may include the user's age, gender, occupation, and the like. Based on different user attribute information, push video content suitable for different groups of people can be generated, so that the information editing information can be pushed in a targeted manner.
In a possible implementation manner, when determining the information editing information of the sub video material according to the material information corresponding to the sub video material and/or different user attribute information, the information editing information of the sub video material may be determined according to the material information corresponding to the sub video material and/or the correspondence between different user attribute information and the information editing information.
When the information editing information is determined according to the material information corresponding to the sub-video material, after the material information is determined, the information editing information corresponding to the sub-video material can be looked up based on a preset mapping relation between material information and information editing information.
For example, if the material type of the sub video material is a movie and television series, the information editing information corresponding to the sub video material may be content description information of the video material; if the material type of the sub-video material is a variety, the information editing information corresponding to the sub-video material may be bullet screen information.
When the information editing information of the sub-video material is determined according to different user attribute information, a mapping relation between different information editing information and user attribute information can be established in advance; the information editing information corresponding to the given user attribute information is then looked up based on this mapping relation and used as the information editing information corresponding to the sub-video material.
For example, if the user attribute information is female, aged 20 to 30, the corresponding information editing information may include stickers personalized for that group, such as cosmetics or handbags.
Here, it should be noted that the same sub-video material may correspond to a plurality of pieces of information editing information, and different push video content for that sub-video material may be generated based on the different information editing information; conversely, different sub-video materials may also correspond to the same information editing information.
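The mapping-based lookup described above can be sketched as follows (the mapping tables, keys, and returned labels are all hypothetical and only illustrate the correspondence, not the patent's actual tables):

```python
# Hypothetical mappings: material type -> editing info, and
# user attribute tuple -> editing info.
EDIT_INFO_BY_MATERIAL = {
    "drama": ["content_description"],
    "variety": ["bullet_screen"],
}
EDIT_INFO_BY_USER = {
    ("female", "20-30"): ["cosmetics_sticker", "handbag_sticker"],
}

def editing_info(material_type=None, user_attrs=None):
    # A sub-video material may collect editing info from either mapping
    # (or both), so it can correspond to several pieces of editing info.
    info = []
    info += EDIT_INFO_BY_MATERIAL.get(material_type, [])
    info += EDIT_INFO_BY_USER.get(user_attrs, [])
    return info

print(editing_info("variety", ("female", "20-30")))
# -> ['bullet_screen', 'cosmetics_sticker', 'handbag_sticker']
```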
In one possible implementation, the information editing information corresponding to the sub-video material may be input by a user, and after the information editing information is input by the user, the sub-video material may be automatically processed according to the information editing information.
For step 104:
in one possible implementation, when generating push video content based on the sub-video material and the corresponding information editing information, a display template corresponding to the sub-video material may first be acquired. The display position information of the sub-video material and of the information editing information in the display template is then determined according to the display template, and the sub-video material and the information editing information are added to the display template according to the determined display position information to generate the push video content.
The obtaining the display template corresponding to the sub-video material may be obtaining the display template corresponding to the template selection instruction after receiving the template selection instruction input by the user.
When the display position information of the sub-video material and the information editing information in the display template is determined according to the display template, the size information of a first display area corresponding to the sub-video material and the size information of a second display area corresponding to the information editing information may be determined according to the sub-video material and the corresponding information editing information; the display position information of the sub-video material and the information editing information in the display template is then determined according to the size information of the first display area and the size information of the second display area.
Different information editing information may require display areas of different sizes; for example, if the information editing information is the content description information of the video material, the more content description information there is, the larger the required display area.
The size of the display area required by different sub-video materials may also differ; for example, a sub-video material shot in landscape orientation and one shot in portrait orientation may require display areas of different sizes.
A plurality of preset position areas for displaying video materials or information editing information may be set in the display template. When the display position information of the sub-video material and the information editing information in the display template is determined according to the size information of the first display area and the size information of the second display area, the display position information of each display area in the display template can be determined according to the size information of the plurality of preset position areas together with the size information of the first and second display areas.
In another possible implementation manner, the display position information of the sub-video materials and the information editing information in the display template can be manually input, and the display position of the sub-video materials and the information editing information displayed in the display template can be adjusted based on the manually input display position adjustment instruction.
In another possible implementation manner, each information editing information and sub-video material may have a default display position in the display template, and when the push video content is generated, the push video content may be displayed correspondingly at the default display position.
In another possible implementation, when the display template corresponding to the sub-video material is acquired, a display template matched with the material information corresponding to the sub-video material may also be acquired. Specifically, correspondences between different material information and different display templates can be preset; after the material information corresponding to the sub-video material is determined, the display template corresponding to that material information can be obtained according to the correspondence.
Here, the display positions of the different types of information editing information in the display template may be preset. Alternatively, where different sub-video materials with the same material information correspond to the same information editing information, the information editing information displayed at each position in the display template may be preset, together with the display position of the sub-video material; when the push video content is generated, the sub-video material is added directly at the corresponding display position of the template.
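A first-fit placement of the sub-video material and editing information into the template's preset position areas might look like this sketch (the region names and pixel sizes are hypothetical):

```python
def place_in_template(preset_regions, required_sizes):
    # preset_regions: name -> (width, height) of each preset position
    # area in the display template.
    # required_sizes: item name -> (width, height) of the display area
    # required by the sub-video material or a piece of editing info.
    # Assign each item the first free region large enough to hold it.
    placement, free = {}, dict(preset_regions)
    for item, (w, h) in required_sizes.items():
        for region, (rw, rh) in list(free.items()):
            if rw >= w and rh >= h:
                placement[item] = region
                del free[region]
                break
    return placement

regions = {"top": (1280, 720), "bottom": (1280, 200)}
items = {"sub_video": (1280, 720), "description": (600, 100)}
print(place_in_template(regions, items))
# -> {'sub_video': 'top', 'description': 'bottom'}
```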
In one possible implementation, when generating the push video content based on the sub-video materials and the corresponding information editing information, the attribute information of the sub-video materials may first be determined. A target sub-video material is then selected from the at least one sub-video material based on this attribute information, and the push video content is generated based on the target sub-video material and its corresponding information editing information.
Wherein, the attribute information of the sub video material may include at least one of the following information:
the playing time length, the watching times and the bullet screen number.
Here, the play duration refers to the total duration for which the sub-video material has been played by users.
After the push video content corresponding to the video material is generated, it may be pushed directly to each user side, or pushed to each user side through a server, so that the push video content is displayed at each user side.
Based on the same concept, the embodiment of the disclosure further provides a video display method, referring to fig. 3, which is a schematic flow chart of the video display method provided by the disclosure, and includes the following steps:
Step 301, obtaining push video content, wherein the push video content comprises sub video materials and corresponding information editing information.
Step 302, playing the push video content in a page, and displaying the information editing information superimposed on the push video content based on the editing attribute corresponding to the information editing information.
Step 303, responding to the triggering operation for the push video content, and jumping from the page to an information page corresponding to the push video content.
Wherein the trigger operation may be any one of the following operations:
single click, double click, long press, hard press.
The editing attribute includes at least one of the following information:
display position, display time, display form and display effect.
Here, the display time may refer to the time at which display starts and the time at which it ends, or to a display duration. The display position may refer to a position within the push video content, or to a position within the page; each piece of information editing information may have its own display position, and different information editing information may have different display positions. The display form may refer to static display, dynamic display, and the like. The display effect refers to the effect of the superimposed display and may include, for example, an entrance effect, a transformation effect, a disappearance effect, and the like.
Superimposing the information editing information on the push video content based on its editing attributes means displaying the information editing information at the specified display position, during the specified display time, in the specified display form and with the specified display effect.
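One way to model these editing attributes is sketched below (the field names, values, and `visible` helper are assumptions for illustration, not the patent's data model):

```python
from dataclasses import dataclass

@dataclass
class EditAttribute:
    # The editing attributes controlling how one piece of information
    # editing information is superimposed on the push video content.
    position: tuple   # display position, e.g. (x, y) in the frame
    start_s: float    # time at which display starts, in seconds
    end_s: float      # time at which display ends, in seconds
    form: str         # display form, e.g. "static" or "dynamic"
    effect: str       # display effect, e.g. "entrance", "disappearance"

def visible(attr, t):
    # The editing information is shown only within its display time.
    return attr.start_s <= t <= attr.end_s

logo = EditAttribute((20, 20), 0.0, 5.0, "static", "entrance")
print(visible(logo, 3.0), visible(logo, 6.0))  # -> True False
```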
In one possible application scenario, the push video content may be video content derived from target application software, and the information page corresponding to the push video content may be a download page of the target application software, or a source page corresponding to the push video content. After a trigger operation for the push video content is detected, the page displaying the push video content may jump to the download page of the target application software.
For example, a page showing push video content may include source information of video material, content description information of the video material, and search information for indicating a search, as shown in fig. 4.
In practical applications, the information editing information may also relate to the target application software corresponding to the information editing information, for example, the information editing information may include a logo of the target application software.
According to the video display method and the video editing method provided above, scene recognition can be performed on a video material, the video material can be segmented into at least one sub-video material corresponding to different video scenes, and push video content can then be generated automatically based on the sub-video materials and the corresponding information editing information. On the one hand, this saves the labor cost of processing videos manually and improves video processing efficiency. On the other hand, each sub-video material corresponding to a different video scene can express a relatively complete, independent plot; therefore, by adding information editing information to each sub-video material, the video material can be edited into creative material suitable for different scenes, improving the display effect of the video material content and the corresponding information editing information.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a video display device corresponding to the video display method, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to that of the video display method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 5, a schematic architecture diagram of a video display apparatus according to an embodiment of the disclosure is provided, where the apparatus includes: a first obtaining module 501, a display module 502 and a response module 503; wherein:
a first obtaining module 501, configured to obtain push video content, where the push video content includes sub video materials and corresponding information editing information;
the display module 502 is configured to play the push video content in a page, and superimpose and display the information editing information on the push video content based on an editing attribute corresponding to the information editing information;
and the response module 503 is configured to skip from the page to an information page corresponding to the push video content in response to a trigger operation for the push video content.
In a possible embodiment, the information editing information includes at least one of the following information:
bullet screen information, source information of sub video materials, content description information of the video materials and search information for indicating search;
the editing attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, the information page corresponding to the push video content includes: and the source page corresponding to the push video content or the download page of the target application program corresponding to the push video content.
Based on the same inventive concept, the embodiments of the present disclosure further provide a video editing device corresponding to the video editing method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the video editing method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 6, an architecture diagram of a video editing apparatus according to an embodiment of the disclosure is provided, where the apparatus includes: a second acquisition module 601, a determination module 602, a third acquisition module 603, and a generation module 604; wherein:
a second obtaining module 601, configured to obtain video materials;
a determining module 602, configured to perform scene recognition on the obtained video material, and split the video material into at least one sub-video material; wherein, the sub video material has a corresponding relation with the video scene;
a third obtaining module 603, configured to obtain information editing information of the sub video material;
a generating module 604, configured to generate push video content based on the sub video material and the corresponding information editing information.
In a possible implementation manner, the third obtaining module 603 is configured to, when obtaining the information editing information of the sub video material:
Determining information editing information of the sub-video materials according to the material information corresponding to the sub-video materials and/or different user attribute information; wherein the material information includes at least one of a material type, a scene type, and target object information in the material.
In a possible implementation manner, the third obtaining module 603 is configured to, when determining information editing information of the sub-video material according to material information corresponding to the sub-video material and/or different user attribute information:
and determining the information editing information of the sub-video material according to the material information corresponding to the sub-video material and/or the corresponding relation between different user attribute information and the information editing information.
In a possible implementation manner, the determining module 602 is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
sampling the video material to obtain a plurality of sampled video frames;
for each sampling video frame, determining color information of each pixel point in the sampling video frame;
calculating the average value of the color information in the sampling video frame to obtain a color average value;
And determining a segmentation time point of the video material based on the color mean value of each sampling video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible implementation manner, the color information of the pixel point includes first color information and/or second color information;
wherein the first color information comprises values of the pixel points on red, green and blue three channels respectively; the second color information includes hue, saturation, brightness.
In a possible implementation manner, the determining module 602 is configured to, when determining a slicing time point of the video material based on a color average value of each sampled video frame:
determining a sliced video frame based on a difference value between color means of adjacent sampled video frames;
and taking the corresponding time point of the segmentation video frame in the video material as the segmentation time point of the video material.
In a possible implementation manner, the determining module 602 is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
acquiring interaction information for the video material;
and determining at least one sub-video material from the video material based on the interaction information.
In a possible implementation manner, the determining module 602 is configured to, when determining at least one sub-video material from the video materials based on the interaction information:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video material based on an interaction time stamp of the target interaction information.
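A minimal sketch of this interaction-based selection, under assumed data shapes: interactions are (timestamp, text) pairs, keyword matching is plain substring containment, and the fixed window around each matching time stamp is an illustrative choice:

```python
def segments_from_interactions(interactions, keywords=None, window=5.0):
    """Return (start, end) spans of candidate sub-video materials centered
    on interaction time stamps. If `keywords` is given, only target
    interaction information containing a preset keyword is used."""
    spans = []
    for ts, text in interactions:
        if keywords and not any(k in text for k in keywords):
            continue  # skip interactions without any target keyword
        spans.append((max(0.0, ts - window), ts + window))
    return spans
```

With `keywords=None` every interaction time stamp produces a segment; with keywords set, only matching target interaction information does, covering both branches of the "and/or" above.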
In a possible implementation manner, the determining module 602 is configured to, when performing scene recognition on the acquired video material and splitting the video material into at least one sub-video material:
acquiring interaction data respectively corresponding to the video material at a plurality of playing progress points;
determining at least one pair of target time stamps based on the interaction data respectively corresponding to the plurality of playing progress points;
and splitting the video material according to the at least one pair of target time stamps to obtain at least one sub-video material.
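One plausible reading of "pairs of target time stamps" is sketched below: playing-progress points whose interaction count exceeds a threshold are treated as "hot", and each consecutive hot run is bounded by a (start, end) pair. The data shape and threshold are assumptions, not the patented method:

```python
def target_timestamp_pairs(progress_counts, threshold=100):
    """progress_counts: list of (progress_seconds, interaction_count)
    ordered by progress. Return (start, end) target time stamp pairs
    bounding each run of progress points with high interaction counts."""
    pairs, start = [], None
    for ts, count in progress_counts:
        if count >= threshold and start is None:
            start = ts            # a hot run begins
        elif count < threshold and start is not None:
            pairs.append((start, ts))  # the hot run ends here
            start = None
    if start is not None:          # material ends while still hot
        pairs.append((start, progress_counts[-1][0]))
    return pairs
```

The video material would then be split at each returned pair to obtain the sub-video materials.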
In a possible implementation manner, the generating module 604 is configured, when generating the push video content based on the sub video material and the corresponding information editing information, to:
acquiring a display template corresponding to the sub video material;
respectively determining display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video materials and the information editing information into the display template according to the determined display position information, and generating the push video content.
In a possible implementation manner, the generating module 604 is configured to, when acquiring a presentation template corresponding to the sub-video material:
responding to a template selection instruction, and acquiring a display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, the generating module 604 is configured to, when determining, according to the display template, display position information of the sub-video material and the information editing information in the display template, respectively:
determining size information of a second display area corresponding to the sub-video material and size information of a third display area corresponding to the information editing information according to the sub-video material and the corresponding information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the second display area and the size information of the third display area.
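A minimal layout sketch under assumed conventions: sizes are (width, height) pixel tuples, positions are top-left (x, y) coordinates, and the editing-information area is placed directly below a horizontally centered video area. The function and the vertical arrangement are illustrative, not the claimed template logic:

```python
def layout_positions(template_w, video_size, info_size):
    """Return top-left (x, y) display positions inside a template of width
    `template_w`: the sub-video area centered horizontally at the top, and
    the editing-information area centered directly below it."""
    vw, vh = video_size
    iw, ih = info_size
    video_pos = ((template_w - vw) // 2, 0)
    info_pos = ((template_w - iw) // 2, vh)
    return video_pos, info_pos
```

Any template would supply its own arrangement; the point is only that position information is derived from the two areas' size information.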
In a possible implementation manner, the generating module 604 is configured, when generating push video content based on the sub video material and the corresponding information editing information, to:
determining attribute information of the sub video material;
screening target sub-video materials from the at least one sub-video material based on the attribute information of the sub-video materials;
and generating the push video content based on the target sub-video material and information editing information corresponding to the target sub-video material.
In a possible implementation manner, the attribute information of the sub-video material includes at least one of the following information:
the playing duration, the number of views, and the number of bullet-screen comments.
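Screening target sub-video materials by these attributes can be sketched as a simple filter. The dictionary keys and threshold values are illustrative assumptions:

```python
def screen_targets(materials, min_duration=3.0, min_views=1000, min_bullets=50):
    """Screen target sub-video materials by attribute information:
    playing duration (seconds), view count, and bullet-screen comment
    count. `materials` is a list of dicts with those keys."""
    return [m for m in materials
            if m["duration"] >= min_duration
            and m["views"] >= min_views
            and m["bullets"] >= min_bullets]
```

Only the surviving materials would be combined with their information editing information to generate the push video content.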
Based on the device, scene recognition can be performed on the video material, and the video material can be split into at least one sub-video material corresponding to different video scenes; push information is then generated automatically based on the sub-video materials and the corresponding information editing information. On the one hand, this saves the labor cost of manually processing the video and improves video processing efficiency. On the other hand, since each sub-video material corresponding to a different video scene can express a relatively complete independent plot, adding information editing information to each sub-video material allows the video material to be edited into creative materials suitable for different scenes, improving the display effect of the video material content and the corresponding information editing information.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, the embodiment of the disclosure also provides a computer device. Referring to fig. 7, a schematic diagram of a computer device 700 according to an embodiment of the disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is configured to store execution instructions and includes a memory 7021 and an external memory 7022. The memory 7021, also referred to as an internal memory, temporarily stores operation data of the processor 701 and data exchanged with the external memory 7022, such as a hard disk; the processor 701 exchanges data with the external memory 7022 through the memory 7021. When the computer device 700 operates, the processor 701 and the memory 702 communicate through the bus 703, so that the processor 701 executes the following instructions:
obtaining push video content, wherein the push video content comprises at least one sub video material and information editing information corresponding to the sub video material;
playing the push video content in a page, and superposing and displaying the information editing information on the push video content based on the editing attribute corresponding to the information editing information;
And responding to the triggering operation for the push video content, and jumping from the page to an information page corresponding to the push video content.
In a possible implementation manner, in the instructions executed by the processor 701, the information editing information includes at least one of the following:
bullet-screen information, source information of the sub-video material, content description information of the video material, and search information for indicating a search;
the editing attribute includes at least one of the following information:
display position, display time, display form and display effect.
In a possible implementation manner, in the instruction executed by the processor 701, the information page corresponding to the push video content includes: and the source page corresponding to the push video content or the download page of the target application program corresponding to the push video content.
Based on the same technical concept, the embodiment of the disclosure also provides a computer device. Referring to fig. 8, a schematic diagram of a computer device 800 according to an embodiment of the disclosure includes a processor 801, a memory 802, and a bus 803. The memory 802 is configured to store execution instructions and includes a memory 8021 and an external memory 8022. The memory 8021, also referred to as an internal memory, temporarily stores operation data of the processor 801 and data exchanged with the external memory 8022, such as a hard disk; the processor 801 exchanges data with the external memory 8022 through the memory 8021. When the computer device 800 operates, the processor 801 and the memory 802 communicate through the bus 803, so that the processor 801 executes the following instructions:
Acquiring video materials;
performing scene recognition on the acquired video material, and splitting the video material into at least one sub-video material; wherein, the sub video material has a corresponding relation with the video scene;
acquiring information editing information of the sub video material;
and generating push video content based on the sub video material and the corresponding information editing information.
In a possible implementation manner, the obtaining information editing information of the sub-video material in the instructions executed by the processor 801 includes:
determining information editing information of the sub-video materials according to the material information corresponding to the sub-video materials and/or different user attribute information; wherein the material information includes at least one of a material type, a scene type, and target object information in the material.
In a possible implementation manner, in the instructions executed by the processor 801, the determining information editing information of the sub-video material according to material information corresponding to the sub-video material and/or different user attribute information includes:
determining the information editing information of the sub-video material according to a corresponding relation between the information editing information and the material information corresponding to the sub-video material and/or the different user attribute information.
In a possible implementation manner, in the instructions executed by the processor 801, the performing scene recognition on the acquired video material, splitting the video material into at least one sub-video material includes:
sampling the video material to obtain a plurality of sampled video frames;
for each sampled video frame, determining color information of each pixel point in the sampled video frame;
calculating an average value of the color information in the sampled video frame to obtain a color mean value;
and determining a segmentation time point of the video material based on the color mean value of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
In a possible implementation manner, in the instructions executed by the processor 801, the color information of the pixel point includes first color information and/or second color information;
wherein the first color information comprises values of the pixel point on the three channels of red, green, and blue respectively; the second color information includes hue, saturation, and brightness.
In a possible implementation manner, in the instructions executed by the processor 801, the determining of a segmentation time point of the video material based on the color mean value of each sampled video frame includes:
determining a segmented video frame based on a difference value between the color mean values of adjacent sampled video frames;
and taking the time point corresponding to the segmented video frame in the video material as the segmentation time point of the video material.
In a possible implementation manner, in the instructions executed by the processor 801, the performing scene recognition on the acquired video material, splitting the video material into at least one sub-video material includes:
acquiring interaction information for the video material;
and determining at least one sub-video material from the video material based on the interaction information.
In a possible implementation manner, the determining, in the instructions executed by the processor 801, at least one sub-video material from the video materials based on the interaction information includes:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video material based on an interaction time stamp of the target interaction information.
In a possible implementation manner, in the instructions executed by the processor 801, the performing scene recognition on the acquired video material, splitting the video material into at least one sub-video material includes:
acquiring interaction data respectively corresponding to the video material at a plurality of playing progress points;
determining at least one pair of target time stamps based on the interaction data respectively corresponding to the plurality of playing progress points;
and splitting the video material according to the at least one pair of target time stamps to obtain at least one sub-video material.
In a possible implementation manner, in the instructions executed by the processor 801, the generating the push video content based on the sub video material and the corresponding information editing information includes:
acquiring a display template corresponding to the sub video material;
respectively determining display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video materials and the information editing information into the display template according to the determined display position information, and generating the push video content.
In a possible implementation manner, the acquiring a presentation template corresponding to the sub-video material in the instructions executed by the processor 801 includes:
responding to a template selection instruction, and acquiring a display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
In a possible implementation manner, in the instructions executed by the processor 801, determining, according to the display template, display position information of the sub-video material and the information editing information in the display template includes:
determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information according to the sub-video material and the corresponding information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
In a possible implementation manner, the instructions executed by the processor 801 generate push video content based on the sub video material and the corresponding information editing information, including:
determining attribute information of the sub video material;
screening target sub-video materials from the at least one sub-video material based on the attribute information of the sub-video materials;
And generating the push video content based on the target sub-video material and information editing information corresponding to the target sub-video material.
In a possible implementation manner, in the instructions executed by the processor 801, the attribute information of the sub-video material includes at least one of the following information:
the playing duration, the number of views, and the number of bullet-screen comments.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the video presentation method and the video editing method described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The computer program products of the video display method and the video editing method provided in the embodiments of the present disclosure include a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the video display method and the video editing method described in the method embodiments above; refer to the method embodiments for details, which are not repeated herein.
The disclosed embodiments also provide a computer program which, when executed by a processor, implements any of the methods of the previous embodiments. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to the corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present disclosure, intended to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, any person skilled in the art, within the technical scope disclosed by the present disclosure, may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features thereof. Such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be included within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (21)

1. A video editing method, comprising:
acquiring video materials;
performing scene recognition on the acquired video material, and splitting the video material into at least one sub-video material; wherein, the sub video material has a corresponding relation with the video scene;
determining information editing information of the sub-video material according to material information corresponding to the sub-video material and different user attribute information; wherein the sub-video material corresponds to a plurality of pieces of information editing information, a preset mapping relation exists between different information editing information and user attribute information, and the information editing information of the sub-video material comprises information editing information corresponding to different user attribute information; the information editing information comprises content description information of the video material or search information for indicating a search;
and generating push video content based on the sub video material and the corresponding information editing information.
2. The method of claim 1, wherein the material information includes at least one of a material type, a scene type, and target object information in the material.
3. The method according to claim 1, wherein the determining information editing information of the sub-video material according to material information corresponding to the sub-video material and different user attribute information includes:
determining the information editing information of the sub-video material according to the different user attribute information and the corresponding relation between the material information corresponding to the sub-video material and the information editing information.
4. The method of claim 1, wherein the scene recognition of the acquired video material and splitting the video material into at least one sub-video material comprises:
sampling the video material to obtain a plurality of sampled video frames;
for each sampled video frame, determining color information of each pixel point in the sampled video frame;
calculating an average value of the color information in the sampled video frame to obtain a color mean value;
and determining a segmentation time point of the video material based on the color mean value of each sampled video frame, and segmenting the video material based on the segmentation time point to obtain the at least one sub-video material.
5. The method of claim 4, wherein the color information of the pixel point includes first color information and/or second color information;
wherein the first color information comprises values of the pixel point on the three channels of red, green, and blue respectively; the second color information includes hue, saturation, and brightness.
6. The method of claim 4, wherein determining the segmentation time point of the video material based on the color mean value of each sampled video frame comprises:
determining a segmented video frame based on a difference value between the color mean values of adjacent sampled video frames;
and taking the time point corresponding to the segmented video frame in the video material as the segmentation time point of the video material.
7. The method of claim 1, wherein the scene recognition of the acquired video material and splitting the video material into at least one sub-video material comprises:
acquiring interaction information for the video material;
at least one sub-video material is determined from the video material based on the interaction information.
8. The method of claim 7, wherein the determining at least one sub-video material from the video material based on the interaction information comprises:
determining an interaction time stamp of the interaction information, and determining at least one sub-video material from the video material based on the interaction time stamp of the interaction information; and/or,
detecting target interaction information containing a preset target keyword, and determining at least one sub-video material from the video material based on an interaction time stamp of the target interaction information.
9. The method of claim 1, wherein the scene recognition of the acquired video material and splitting the video material into at least one sub-video material comprises:
acquiring interaction data respectively corresponding to the video material at a plurality of playing progress points;
determining at least one pair of target time stamps based on the interaction data respectively corresponding to the plurality of playing progress points;
and splitting the video material according to the at least one pair of target time stamps to obtain at least one sub-video material.
10. The method of claim 1, wherein the generating the push video content based on the sub video material and the corresponding information editing information comprises:
acquiring a display template corresponding to the sub video material;
respectively determining display position information of the sub-video materials and the information editing information in the display template according to the display template;
and adding the sub-video materials and the information editing information into the display template according to the determined display position information, and generating the push video content.
11. The method of claim 10, wherein the obtaining a presentation template corresponding to the sub-video material comprises:
responding to a template selection instruction, and acquiring a display template corresponding to the template selection instruction; or,
and acquiring a display template matched with the material information corresponding to the sub-video material.
12. The method of claim 10, wherein determining presentation location information of the sub-video material and information editing information in a presentation template, respectively, based on the presentation template, comprises:
determining size information of a first display area corresponding to the sub-video material and size information of a second display area corresponding to the information editing information according to the sub-video material and the corresponding information editing information;
and determining the display position information of the sub-video material and the information editing information in the display template according to the size information of the first display area and the size information of the second display area.
13. The method of claim 1, wherein the generating push video content based on the sub video material and the corresponding information editing information comprises:
determining attribute information of the sub video material;
screening target sub-video materials from the at least one sub-video material based on the attribute information of the sub-video materials;
And generating the push video content based on the target sub-video material and information editing information corresponding to the target sub-video material.
14. The method of claim 13, wherein the attribute information of the sub-video material includes at least one of:
the playing duration, the number of views, and the number of bullet-screen comments.
15. A video presentation method, comprising:
obtaining push video content; wherein the push video content is generated based on the video editing method of any one of claims 1 to 14;
playing the push video content in a page, and superposing and displaying the information editing information on the push video content based on editing attributes corresponding to the information editing information contained in the push video content;
and responding to the triggering operation for the push video content, and jumping from the page to an information page corresponding to the push video content.
16. The method of claim 15, wherein the information editing information further comprises at least one of bullet screen information and source information of sub-video material;
the editing attribute includes at least one of the following information:
Display position, display time, display form and display effect.
17. The method of claim 15, wherein pushing the corresponding information page of the video content comprises: and the source page corresponding to the push video content or the download page of the target application program corresponding to the push video content.
18. A video editing apparatus, comprising:
the first acquisition module is used for acquiring video materials;
the first determining module is used for carrying out scene recognition on the obtained video material and splitting the video material into at least one sub-video material; wherein, the sub video material has a corresponding relation with the video scene;
the second determining module is used for determining information editing information of the sub-video materials according to the material information corresponding to the sub-video materials and different user attribute information; the sub video material comprises a plurality of sub video materials, wherein the sub video materials are corresponding to a plurality of information editing information, a preset mapping relation exists between different information editing information and user attribute information, and the information editing information of the sub video materials comprises information editing information corresponding to different user attribute information; the information editing information comprises content description information of the video material or search information for indicating search;
And the generation module is used for generating push video content based on the sub video materials and the corresponding information editing information.
19. A video display apparatus, comprising:
the second acquisition module is used for acquiring push video content; wherein the push video content is generated based on the video editing method of any one of claims 1 to 14;
the display module is used for playing the push video content in a page, and superposing and displaying the information editing information on the push video content based on editing attributes corresponding to the information editing information contained in the push video content;
and the response module is used for responding to the triggering operation for the push video content and jumping from the page to the information page corresponding to the push video content.
20. A computer device, comprising: a processor, a memory and a bus, said memory storing machine readable instructions executable by said processor, said processor and said memory communicating via the bus when the computer device is running, said machine readable instructions when executed by said processor performing the steps of the video editing method according to any of claims 1 to 14 or the steps of the video presentation method according to any of claims 15 to 17.
21. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, performs the steps of the video editing method according to any one of claims 1 to 14 or the steps of the video display method according to any one of claims 15 to 17.
CN202110807225.8A 2021-07-16 2021-07-16 Video display method, video editing method and device Active CN113542818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110807225.8A CN113542818B (en) 2021-07-16 2021-07-16 Video display method, video editing method and device

Publications (2)

Publication Number Publication Date
CN113542818A CN113542818A (en) 2021-10-22
CN113542818B true CN113542818B (en) 2023-04-25

Family

ID=78099813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110807225.8A Active CN113542818B (en) 2021-07-16 2021-07-16 Video display method, video editing method and device

Country Status (1)

Country Link
CN (1) CN113542818B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763626A (en) * 2013-12-19 2014-04-30 华为软件技术有限公司 Method, device and system for pushing information
JP2019047391A (en) * 2017-09-05 2019-03-22 株式会社Jvcケンウッド Device, method and program for distributing content information with caption
CN109951741A (en) * 2017-12-21 2019-06-28 阿里巴巴集团控股有限公司 Data object information methods of exhibiting, device and electronic equipment
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2430833A4 (en) * 2009-05-13 2014-01-22 Coincident Tv Inc Playing and editing linked and annotated audiovisual works
CN105448214A (en) * 2015-09-15 2016-03-30 北京合盒互动科技有限公司 Advertisement display method and device of controllable electronic screen
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 A kind of video clipping method and electronic equipment
CN110147711B (en) * 2019-02-27 2023-11-14 腾讯科技(深圳)有限公司 Video scene recognition method and device, storage medium and electronic device
US10963702B1 (en) * 2019-09-10 2021-03-30 Huawei Technologies Co., Ltd. Method and system for video segmentation
CN111177470B (en) * 2019-12-30 2024-04-30 深圳Tcl新技术有限公司 Video processing method, video searching method and terminal equipment
CN112261472A (en) * 2020-10-19 2021-01-22 上海博泰悦臻电子设备制造有限公司 Short video generation method and related equipment
CN112689189B (en) * 2020-12-21 2023-04-21 北京字节跳动网络技术有限公司 Video display and generation method and device

Similar Documents

Publication Publication Date Title
US10735494B2 (en) Media information presentation method, client, and server
US9711182B2 (en) System and method for identifying and altering images in a digital video
US9514536B2 (en) Intelligent video thumbnail selection and generation
CN110708589B (en) Information sharing method and device, storage medium and electronic device
US20180077452A1 (en) Devices, systems, methods, and media for detecting, indexing, and comparing video signals from a video display in a background scene using a camera-enabled device
US9224156B2 (en) Personalizing video content for Internet video streaming
KR100866201B1 (en) Method extraction of a interest region for multimedia mobile users
CN110858134A (en) Data, display processing method and device, electronic equipment and storage medium
CN110889379A (en) Expression package generation method and device and terminal equipment
CN103997687A (en) Techniques for adding interactive features to videos
CN113469200A (en) Data processing method and system, storage medium and computing device
CN105898379A (en) Method for establishing hyperlink of video image and server
CN113891105A (en) Picture display method and device, storage medium and electronic equipment
CN102455906A (en) Method and system for changing player skin
US20190114675A1 (en) Method and system for displaying relevant advertisements in pictures on real time dynamic basis
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN107578306A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
CN112288877A (en) Video playing method and device, electronic equipment and storage medium
CN110019877A (en) Image search method, apparatus and system, terminal
CN114936896A (en) Commodity information display method and device, computer equipment and storage medium
WO2022171978A1 (en) A system for accessing a web page
CN113420242A (en) Shopping guide method, resource distribution method, content display method and equipment
CN112437332A (en) Method and device for playing target multimedia information
KR102414925B1 (en) Apparatus and method for product placement indication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Douyin Vision Co.,Ltd.
Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Tiktok vision (Beijing) Co.,Ltd.
Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.