CN106878773B - Electronic device, video processing method and apparatus, and storage medium - Google Patents


Info

Publication number
CN106878773B
CN106878773B
Authority
CN
China
Prior art keywords
target video
review
marking
video
sections
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710146598.9A
Other languages
Chinese (zh)
Other versions
CN106878773A (en)
Inventor
马志强 (Ma Zhiqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201710146598.9A
Publication of CN106878773A
Application granted
Publication of CN106878773B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users' preferences to derive collaborative data
    • H04N 21/25866 Management of end-user data
    • H04N 21/25891 Management of end-user data being end-user preferences
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47214 End-user interface for content reservation or setting reminders; for requesting event notification, e.g. of sport results or stock market
    • H04N 21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4882 Data services for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide a video processing method and a video processing apparatus. The method comprises the following steps: counting the review times and/or annotation times of the sections reviewed and/or annotated in the target video; and extracting and marking, according to the statistical result, the sections of the target video whose review times and/or annotation times meet predetermined conditions. By counting the review times and/or annotation times of each section in the target video, the method can automatically mark the key and difficult content of the target video, thereby significantly reducing the user's cognitive burden, shortening the time spent searching for that content, and providing a better experience.

Description

Electronic device, video processing method and apparatus, and storage medium
Technical Field
Embodiments of the present invention relate to the field of image processing technologies, and in particular, to an electronic device, a video processing method, a video processing apparatus, and a computer-readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
With the popularization of networks and the continuing development of internet technology, more and more people are learning from and watching video programs on video playing platforms.
On existing video playing platforms, a user who wants to know the key content of a video, before or during playback, must spend time viewing and analyzing the whole video to judge which parts are key content and which parts require emphatic understanding and viewing.
Disclosure of Invention
However, with conventional technology, a user may watch the entire video and still find it difficult to determine which parts are important. Even after identifying the important content by viewing the video, the user must browse the video again to find it whenever they want to replay it. On the one hand this exacerbates the user's cognitive burden; on the other hand it wastes the user's time. Requiring users to judge the key content of a video by themselves is therefore a tedious process.
To this end, there is a great need for an improved video processing method to enable automatic capture and marking of important content in a video.
In this context, embodiments of the present disclosure are intended to provide a video processing method and a video processing apparatus.
In a first aspect of embodiments of the present disclosure, there is provided a video processing method, including:
counting the review times and/or the labeling times of the sections reviewed and/or labeled in the target video; and
extracting and marking, according to the statistical result, sections of the target video whose review times and/or labeling times meet predetermined conditions.
In an embodiment of the present disclosure, extracting and marking the segments of the target video, where the review times and/or the labeling times meet the predetermined condition, according to the statistical result includes:
comparing the statistical result with a first predetermined value; and
extracting and marking, based on the comparison result, sections of the target video whose review times and/or labeling times are greater than or equal to the first predetermined value.
In an embodiment of the present disclosure, extracting and marking the segments of the target video, where the review times and/or the labeling times meet the predetermined condition, according to the statistical result includes:
extracting and marking the section with the highest review times and/or labeling times in the target video according to the statistical result.
In an embodiment of the present disclosure, extracting and marking the segments of the target video, where the review times and/or the labeling times meet the predetermined condition, according to the statistical result includes:
calculating the review rate and/or the annotation rate of each section according to the review times and/or the labeling times of the reviewed and/or labeled sections in the statistical result;
comparing the review rate and/or the annotation rate with a second predetermined value; and
extracting and marking, based on the comparison result, sections of the target video whose review rate and/or annotation rate is greater than or equal to the second predetermined value.
In one embodiment of the present disclosure, counting the review times and/or the annotation times of the reviewed and/or annotated sections in the target video includes:
acquiring, from the review information and/or annotation information, the time interval of each reviewed and/or annotated section in the target video;
dividing each time interval into a plurality of sub-time intervals according to the overlapping relations among the acquired time intervals; and
counting the review times and/or annotation times of the section in which each sub-time interval is located.
In one embodiment of the present disclosure, the video processing method further includes:
dividing the target video into corresponding content sections according to the content of the target video.
In one embodiment of the present disclosure, marking a section in the target video, where the review number and/or the annotation number meet a predetermined condition, includes:
differentially marking sections of the target video whose review times and/or annotation times meet the predetermined conditions.
In one embodiment of the present disclosure, the video processing method further includes:
displaying prompt information on the screen when a marked section of the target video is played.
In one embodiment of the present disclosure, the video processing method further includes:
displaying the annotation information on the screen when the target video is played.
In one embodiment of the present disclosure, the video processing method further includes:
receiving a user preview operation, and identifying the time point of the user preview operation in the target video;
extending on the time axis of the target video from that time point, and extracting a predetermined number of frames from the target video at predetermined time intervals;
if the extracted frames are all drawn from a marked section, dynamically playing the extracted frames;
otherwise, presenting the extracted frames on the screen and, in response to a user play-selection operation, playing the selected frame.
In one embodiment of the present disclosure, the extending on the time axis of the target video includes extending forward and backward bi-directionally, extending backward uni-directionally, or extending forward uni-directionally on the time axis of the target video.
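As a rough sketch of how the bidirectional/unidirectional extension described above might work in code; all names, default values, and the clamping behaviour are assumptions made for illustration, not details taken from the patent:

```python
def preview_frame_times(click_time, duration, mode="bidirectional",
                        step=2.0, count=5):
    """Return timestamps (seconds) of a predetermined number of frames,
    spaced at a predetermined interval, extending from the time point
    of the user's preview operation (illustrative sketch only)."""
    if mode == "bidirectional":        # extend both earlier and later
        half = count // 2
        times = [click_time + step * i for i in range(-half, count - half)]
    elif mode == "backward":           # extend toward the start only
        times = [click_time - step * i for i in range(count)][::-1]
    else:                              # "forward": toward the end only
        times = [click_time + step * i for i in range(count)]
    # Clamp every timestamp to the target video's time axis.
    return [min(max(t, 0.0), duration) for t in times]

print(preview_frame_times(10.0, 60.0))  # -> [6.0, 8.0, 10.0, 12.0, 14.0]
```

In practice the client would then seek to each timestamp and decode one frame per timestamp for the preview strip.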
In a second aspect of the disclosed embodiments, there is provided a video processing apparatus comprising:
a counting unit, configured to count the review times and/or annotation times of the sections reviewed and/or annotated in the target video; and
a marking unit, configured to extract and mark, according to the statistical result, sections of the target video whose review times and/or annotation times meet predetermined conditions.
In a third aspect of the disclosed embodiments, there is provided an electronic device comprising the video processing apparatus described above.
In a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a video processing method according to any one of the above.
According to the video processing method and apparatus of the present disclosure, the difficult and important content of the target video can be marked automatically, without requiring the user to review the video to judge for themselves. This significantly reduces the user's cognitive burden when watching the video, shortens the time spent searching for difficult and important content, and provides a better experience.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a framework diagram of an exemplary application scenario in accordance with an embodiment of the present invention;
FIG. 2 schematically shows a flow diagram of a video processing method according to an embodiment of the invention;
FIG. 3 schematically illustrates a schematic view of a video picture marked with highlight content according to an embodiment of the present invention;
FIG. 4 schematically illustrates a diagram of a video picture displaying hint information in accordance with an embodiment of the present invention;
FIG. 5 is a diagram schematically illustrating a video screen displaying annotation information according to an embodiment of the present invention;
FIG. 6 schematically illustrates a schematic view of a video screen of an auto-preview video according to an embodiment of the present invention;
fig. 7 schematically shows a schematic diagram of a video processing apparatus according to an embodiment of the invention;
FIG. 8 schematically shows a schematic view of an electronic device according to an embodiment of the invention; and
FIG. 9 schematically shows a schematic diagram of a computer-readable storage medium product according to an embodiment of the invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the invention, a video processing method and a video processing device are provided.
In this context, it should be understood that the term "review times" refers to the number of times a user repeatedly views a section of the video by rewinding; the term "annotation times" refers to the number of times a user takes notes or comments; the term "review rate" refers to the ratio of a section's review times to its total play times; and the term "annotation rate" refers to the ratio of a section's annotation times to its total play times. Moreover, any number of elements in the drawings is given by way of example and not limitation, and any naming is used solely for differentiation and carries no limiting meaning.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor has found that when users watch a target video, they tend to repeatedly review, comment on, or take notes about its key, difficult, or highlight content, and the sections each user reviews or annotates also have reference value for other users.
Based on this, the basic idea of the invention is as follows: while the target video is being watched, the client or server counts the review times and/or annotation times of the video sections that users repeatedly review or annotate, and sections whose review times and/or annotation times meet a predetermined condition are extracted and marked according to the statistical result. By setting the predetermined condition appropriately, the key and difficult content of the target video can be marked according to the review times and/or annotation times of each section. When the target video is watched again later, the user can, on the one hand, directly recognize the marked parts as the key and difficult content, which reduces the cognitive burden; on the other hand, the marks help the user quickly locate those parts, which saves time.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Referring first to fig. 1, fig. 1 shows a block diagram of an exemplary application scenario of an embodiment of the present invention. As shown in fig. 1, the server side counts the review times and/or the labeling times of the segments reviewed and/or labeled by the user in the target video, or the client side counts the review times and/or the labeling times of the segments reviewed and/or labeled by the user in the target video, and then sends the statistical result to the server side. Those skilled in the art will appreciate that the schematic framework shown in FIG. 1 is merely one example in which embodiments of the invention may be implemented. The scope of applicability of embodiments of the present invention is not limited in any way by this framework.
It should be noted that the client 101 shown in fig. 1 is only an exemplary example, and a user may count the review times and/or the annotation times of each section in the target video through any client having a video playing function, and more specifically, may count the review times and/or the annotation times through the client 101 installed on a smartphone or a tablet computer, and then may send the statistical results to the server 102 via a wired and/or wireless connection (e.g., Wi-Fi, LAN, cellular network, coaxial cable, etc.).
It should be further noted that the server 102 may be a local server or a remote server, and furthermore, the server 102 may also be other products capable of providing computing and storage functions, such as a cloud server, and the embodiments of the present invention are not limited specifically herein.
Based on the framework shown in fig. 1, in a first exemplary application scenario, the review times and/or the tagging times of each segment in the target video are counted by a client installed on a smartphone, a tablet computer, or a desktop computer, and a statistical result is sent to the server 102. In a second exemplary application scenario, the number of review times and/or the number of annotation times of each section in the target video are counted by the server 102.
It should be understood that in the application scenario of the present invention, although the actions of the embodiments of the present invention are described as being performed by the client 101, the actions may also be performed by the server 102, and of course, may also be performed partially by the client 101 and partially by the server 102. The invention is not limited in its implementation to the details of execution, provided that the acts disclosed in the embodiments of the invention are performed.
Exemplary method
In the following, in connection with the application scenario of fig. 1, a video processing method according to an exemplary embodiment of the invention is described with reference to fig. 2. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Fig. 2 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. Referring to fig. 2, the video processing method may include the steps of:
s210, counting the review times and/or the labeling times of the sections which are reviewed and/or labeled in the target video; and
and S220, extracting and marking sections of which the review times and/or the labeling times meet the preset conditions in the target video according to the statistical result.
The video processing method of this exemplary embodiment has several benefits. First, by counting the review times and/or annotation times of each section in the target video, it becomes possible to analyze whether each section constitutes key and difficult content. Second, by extracting and marking the sections that meet the predetermined condition based on the statistical result, the key and difficult content of the target video can be marked automatically. Third, because the marking is automatic, the user's cognitive burden when watching the video is reduced and the time spent searching for key and difficult content is saved.
Next, a video processing method in the present exemplary embodiment is described in detail.
In step S210, the review times and/or annotation times of the reviewed and/or annotated sections in the target video are counted.
In this exemplary embodiment, for a single user of a client, the review times and/or annotation times of each section in the target video may be counted by that client; for the other users watching the target video, they may be counted by the server. For all users watching the target video, the counting may also be performed partly by the client and partly by the server. The present disclosure is not particularly limited in this regard.
Further, in the present exemplary embodiment, review information and/or annotation information for the target video may be obtained from users, and the review times and/or annotation times of a plurality of sections in the target video may be counted based on that information. Counting the review times and/or annotation times of the sections reviewed and/or annotated in the target video may thus include: acquiring, from the review information and/or annotation information, the time interval of each reviewed and/or annotated section in the target video; dividing each time interval into a plurality of sub-time intervals according to the overlapping relations among the acquired time intervals; and counting the review times and/or annotation times (or their sum) of the section in which each sub-time interval is located. By dividing mutually overlapping time intervals into sub-time intervals based on their overlapping relations, the review times and/or annotation times of the sections in different time intervals can be counted more accurately.
For example, referring to Table 1 below, if the time intervals of the sections reviewed and/or annotated in the target video, acquired from the review information and/or annotation information, are [03:50, 08:50], [06:50, 12:50], and [10:50, 18:50], these time intervals may be divided according to their overlapping relations into the following sub-time intervals: [03:50, 06:50], [06:50, 08:50], [08:50, 10:50], [10:50, 12:50], and [12:50, 18:50]. The review times and/or annotation times of the section in which each sub-time interval is located are then counted. Taking the review times as an example: if the section [03:50, 08:50] is reviewed 50 times, the section [06:50, 12:50] is reviewed 100 times, and the section [10:50, 18:50] is reviewed 60 times, then the sub-time interval [03:50, 06:50] has 50 review times, [06:50, 08:50] has 150, [08:50, 10:50] has 100, [10:50, 12:50] has 160, and [12:50, 18:50] has 60. In this case, if the first predetermined value is set to 100, the sections in which the sub-time intervals [06:50, 08:50], [08:50, 10:50], and [10:50, 12:50] are located may be extracted and marked.
TABLE 1 Review times of each sub-time interval

Sub-time interval    Review times
[03:50, 06:50]       50
[06:50, 08:50]       150
[08:50, 10:50]       100
[10:50, 12:50]       160
[12:50, 18:50]       60
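The sub-time-interval counting above can be sketched in code. This is an illustrative sketch only, not the patent's implementation; the function names (`split_and_count`, `mmss`) are made up for the example.

```python
def split_and_count(intervals):
    """Split overlapping [start, end] intervals into disjoint sub-time
    intervals, summing the review counts of every original interval
    that covers each one (illustrative sketch, not the patent's code).

    intervals: list of (start_sec, end_sec, review_count)
    returns: list of ((sub_start, sub_end), total_review_count)
    """
    # Every interval endpoint becomes a sub-interval boundary.
    points = sorted({p for s, e, _ in intervals for p in (s, e)})
    result = []
    for lo, hi in zip(points, points[1:]):
        # A sub-interval inherits the counts of all intervals covering it.
        total = sum(c for s, e, c in intervals if s <= lo and hi <= e)
        if total:
            result.append(((lo, hi), total))
    return result

def mmss(sec):
    """Format a second count as mm:ss for display."""
    return f"{sec // 60:02d}:{sec % 60:02d}"

# The worked example above, with times converted to seconds.
intervals = [(230, 530, 50),    # [03:50, 08:50] reviewed 50 times
             (410, 770, 100),   # [06:50, 12:50] reviewed 100 times
             (650, 1130, 60)]   # [10:50, 18:50] reviewed 60 times
for (lo, hi), n in split_and_count(intervals):
    print(f"[{mmss(lo)}, {mmss(hi)}] reviewed {n} times")
```

Run on the Table 1 data, this reproduces the five sub-interval counts (50, 150, 100, 160, 60).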
Next, in step S220, segments of the target video whose review times and/or annotation times meet predetermined conditions are extracted and marked according to the statistical result.
In this exemplary embodiment, the predetermined condition may be set in advance according to factors such as the difficulty of the target video and its duration. For example, the predetermined condition may be a first predetermined value set according to the difficulty of the target video; it may be that the review times and/or annotation times (or their sum) are the highest; or it may be a requirement on the review rate and/or annotation rate derived from the review times and/or annotation times. The present disclosure is not particularly limited in this regard.
Further, in this exemplary embodiment, in the case that the predetermined condition is the first predetermined value, extracting and marking the sections of the target video, the review times and/or the annotation times of which meet the predetermined condition, according to the statistical result may include: and comparing the statistical result with a first preset value, and extracting and marking sections of which the review times or the labeling times are greater than or equal to the first preset value in the target video or extracting and marking sections of which the sum of the review times and the labeling times is greater than or equal to the first preset value in the target video based on the comparison result. The first preset value is set according to factors such as the difficulty degree of the target video, the time length of the target video and the like, and the important and difficult point content of the target video can be marked more accurately and automatically.
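As a minimal sketch of the first-predetermined-value comparison just described (the function name and data layout are illustrative, reusing the Table 1 figures, not the patent's own code):

```python
def mark_sections(counts, first_predetermined_value):
    """Return the sub-time intervals whose review and/or annotation
    count is greater than or equal to the threshold (sketch only)."""
    return [interval for interval, n in counts.items()
            if n >= first_predetermined_value]

# Sub-time-interval review counts from the worked example above.
counts = {("03:50", "06:50"): 50,
          ("06:50", "08:50"): 150,
          ("08:50", "10:50"): 100,
          ("10:50", "12:50"): 160,
          ("12:50", "18:50"): 60}
# With the first predetermined value set to 100, three sections qualify.
print(mark_sections(counts, 100))
# -> [('06:50', '08:50'), ('08:50', '10:50'), ('10:50', '12:50')]
```

Raising or lowering the threshold trades marking precision against coverage, which is why the text ties it to the video's difficulty and duration.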
Further, in this exemplary embodiment, in a case that the predetermined condition is that the review number or the annotation number is the highest, or that the sum of the review number and the annotation number is the highest, extracting and marking the section of the target video in which the review number and/or the annotation number meet the predetermined condition according to the statistical result may include: and extracting and marking the section with the highest review frequency or marking frequency in the target video according to the statistical result, or extracting and marking the section with the highest sum of the review frequency and the marking frequency in the target video according to the statistical result. In the present exemplary embodiment, in the case that the target video is short or the content of the target video is simple, the section with the highest review number and/or annotation number in the target video may be directly extracted and marked, so as to improve the video processing efficiency.
Further, in the present exemplary embodiment, it is also possible to extract and mark important point contents in the target video according to the review rate. Therefore, extracting and marking the sections, of which the review times and/or the labeling times meet the predetermined conditions, in the target video according to the statistical results may include: calculating the review rate or the labeling rate of each section according to the review times or the labeling times of the reviewed and/or labeled sections in the statistical result; comparing the review rate or the annotation rate with a second predetermined value; and extracting and marking sections with the review rate or the annotation rate larger than or equal to the second preset value in the target video based on the comparison result. In addition, extracting and marking the sections, of which the review times and/or the labeling times meet the predetermined conditions, in the target video according to the statistical result may further include: calculating the review rate and the labeling rate of each section according to the review times and the labeling times of the reviewed and/or labeled sections in the statistical result; comparing the sum of the review rate and the annotation rate with a second predetermined value; and extracting and marking sections, of which the sum of the review rate and the annotation rate is greater than or equal to the second preset value, in the target video based on the comparison result.
Specifically, in the present exemplary embodiment, the review rate may represent the ratio of the number of review times of a section to the total number of play times of that section, and the marking rate may represent the ratio of the number of marking times of a section to the total number of play times of that section. In this case, extracting and marking the sections of the target video whose review times and/or marking times meet the predetermined condition according to the statistical result may include: dividing the counted review times of each sub-time interval by the total play times of that sub-time interval to obtain the review rate of each sub-time interval; dividing the counted marking times of each sub-time interval by the total play times of that sub-time interval to obtain the marking rate of each sub-time interval; then comparing the review rate and/or the marking rate of each sub-time interval with a second predetermined value; and extracting and marking, based on the comparison result, the sections of the target video whose review rate and/or marking rate is greater than or equal to the second predetermined value.
Taking the review rate as an example, referring to Table 2 below, suppose the time intervals of the sections reviewed in the target video, acquired from the review information, are [03:50, 08:50], [06:50, 12:50], and [10:50, 18:50]. According to the overlapping relationship of these time intervals, they can be divided into the following sub-time intervals: [03:50, 06:50], [06:50, 08:50], [08:50, 10:50], [10:50, 12:50], and [12:50, 18:50]. The review times and play times of the section where each time interval is located are then counted. If the section of interval [03:50, 08:50] is played 150 times and reviewed 50 times, the section of interval [06:50, 12:50] is played 150 times and reviewed 100 times, and the section of interval [10:50, 18:50] is played 170 times and reviewed 60 times, then the review rate of sub-interval [03:50, 06:50] is 1/3, that of [06:50, 08:50] is 1/2, that of [08:50, 10:50] is 2/3, that of [10:50, 12:50] is 1/2, and that of [12:50, 18:50] is 6/17. In this case, if the second predetermined value is set to 1/2, the sections of sub-intervals [06:50, 08:50], [08:50, 10:50], and [10:50, 12:50] are extracted and marked.
TABLE 2 Review rate statistics

Sub-time interval   Total play times   Review times   Review rate
[03:50, 06:50]      150                50             1/3
[06:50, 08:50]      300                150            1/2
[08:50, 10:50]      150                100            2/3
[10:50, 12:50]      320                160            1/2
[12:50, 18:50]      170                60             6/17
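The sub-interval statistics worked through above can be sketched in code (an illustrative sketch, not the patent's implementation; the names and data layout are assumptions). Each reviewed interval is split at every boundary point, and the review rate of a sub-interval is the total review times of the intervals covering it divided by their total play times:

```python
from fractions import Fraction

def mmss(t):
    """Convert a 'MM:SS' timestamp to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# (reviewed interval, play times, review times), gathered from review information.
reviewed = [
    ((mmss("03:50"), mmss("08:50")), 150, 50),
    ((mmss("06:50"), mmss("12:50")), 150, 100),
    ((mmss("10:50"), mmss("18:50")), 170, 60),
]

# Split the overlapping intervals into sub-intervals at every boundary point.
bounds = sorted({b for (start, end), _, _ in reviewed for b in (start, end)})
sub_intervals = list(zip(bounds, bounds[1:]))

def review_rate(sub):
    """Review rate = total review times / total play times of covering intervals."""
    lo, hi = sub
    plays = reviews = 0
    for (start, end), p, r in reviewed:
        if start <= lo and hi <= end:  # interval fully covers the sub-interval
            plays += p
            reviews += r
    return Fraction(reviews, plays) if plays else Fraction(0)

second_predetermined_value = Fraction(1, 2)
marked = [s for s in sub_intervals if review_rate(s) >= second_predetermined_value]
```

Running this on the Table 2 data marks the sub-intervals [06:50, 08:50], [08:50, 10:50], and [10:50, 12:50], matching the worked example.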
In addition, in this exemplary embodiment, the review rate may also indicate a ratio of the review number of a certain section to the total playing number of the target video, and the annotation rate may indicate a ratio of the annotation number of a certain section to the total playing number of the target video, which is similar to the above-mentioned scheme, and will not be described again here.
It should be noted that, in the present exemplary embodiment, the second predetermined value may be set according to factors such as the difficulty level of the target video and the time length of the target video, for example, the second predetermined value may be 45%, 50%, 60%, and the like, which is not particularly limited by the present disclosure.
Further, in the present exemplary embodiment, in order to let the user know the difficult and important content of the target video more intuitively while watching, marking the sections of the target video whose review times and/or marking times meet the predetermined condition includes: differentially marking those sections in the target video. As shown in fig. 3, in the bar frame in the lower area of the video screen, the sections filled with oblique lines represent viewed content, the sections filled with vertical lines represent important and difficult content, and the blank sections represent unviewed content. Example embodiments of the present disclosure are not limited thereto; for example, the corresponding content in the target video may be marked with different colors, with yellow-filled sections representing the important and difficult content, green-filled sections representing viewed content, and white sections representing unviewed content, which is also within the scope of the present disclosure.
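One way such differential marking might be rendered (a purely illustrative sketch; the status and fill-style names are assumptions, not from the patent) is to map each section's status to a fill style for the progress-bar frame:

```python
# Hypothetical section statuses mapped to the fill styles of fig. 3.
STATUS_STYLES = {
    "viewed": "oblique_lines",     # already-viewed content
    "important": "vertical_lines", # important and difficult content
    "unviewed": "blank",           # not yet viewed
}

def bar_segments(sections):
    """Map (start, end, status) sections to (start, end, fill style) for drawing."""
    return [(start, end, STATUS_STYLES[status]) for start, end, status in sections]

# Times in seconds; statuses chosen for illustration only.
segments = bar_segments([(0, 230, "viewed"), (230, 770, "important"),
                         (770, 1130, "unviewed")])
```

A renderer would then draw each segment of the bar with its assigned fill style (or a color, as the disclosure also permits).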
Further, in the present exemplary embodiment, in order to make the user know the content of the video in advance, the target video may be divided into corresponding content sections according to the content of the target video. For example, referring to fig. 3, the video screen in fig. 3 includes three content sections of "basic concept of negative score", "topic explanation of negative score", and "topic skills of negative score".
Further, in the present exemplary embodiment, in order to prompt the user when playback reaches a difficult and important area marked by the above video processing method, the video processing method may further include: displaying prompt information on the screen when a marked section of the target video is played. Referring to fig. 4, when the important and difficult area filled with oblique lines is about to be played, the screen displays a prompt such as "Difficult point, pay attention! 80% of viewers watched this part repeatedly!".
Next, in the present exemplary embodiment, in order to enable the user to see the annotation information while watching the video, the video processing method may further include: displaying the annotation information on the screen when the target video is played. In the present exemplary embodiment, displaying the annotation information on the screen may include: sliding the annotation information leftward or rightward along the horizontal direction of the screen. Referring to fig. 5, the annotation information displayed in the video frame includes: "the definition of a negative number is clear here", "this option is right" and "this is a positive number".
Further, in this exemplary embodiment, in order to be able to automatically preview video content and reduce the operation burden of the user, the video processing method may further include: receiving a user preview operation, and identifying a time point of the user preview operation in a target video; extending on a time axis of the target video based on the time point and extracting a predetermined number of frames from the target video at predetermined time intervals; if the extracted predetermined number of frames are all frames extracted from the marked section, dynamically playing the extracted predetermined number of frames; otherwise, the extracted predetermined number of frames are presented on the screen, and the selected frames are played in response to the user selecting a play operation.
It should be noted that, in this exemplary embodiment, the user preview operation may be sliding the progress slider, selecting a preview play according to a preview play option, or selecting a time point at which the preview play is desired by clicking, which is not limited in this disclosure.
Referring to fig. 6, upon receiving a user preview operation, the method extends rightward on the time axis of the target video from the time point of the preview operation and extracts a predetermined number of frames, for example 8 frames, from the target video at predetermined time intervals, for example every 10 seconds. If all of the extracted frames come from marked important and difficult areas, for example sections with the highest review rate and marking rate, the extracted frames are played dynamically; otherwise, the extracted frames are presented on the video screen. In fig. 6, the current frame is displayed in the middle of the video screen, and the extracted frames are displayed above and below it. When the user selects, for example clicks, any extracted frame, playback can jump to the time point corresponding to that frame, and the selected frame can be played after no further operation occurs for a predetermined time, for example 2 seconds after the click.
Further, in the present exemplary embodiment, extending on the time axis of the target video includes extending forward and backward bi-directionally, extending backward uni-directionally, or extending forward uni-directionally on the time axis of the target video, which the present disclosure does not particularly limit.
It should be noted that, in the present exemplary embodiment, the predetermined time interval may be 8 seconds, 10 seconds, or 15 seconds, or may be any other suitable time, which is not particularly limited by the present disclosure.
It should be noted that, in the present exemplary embodiment, the extracted predetermined number of frames may be 8 frames, 10 frames, or 15 frames, or may be any other suitable number of frames, which is not particularly limited by the present disclosure.
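The preview behavior described above can be sketched as follows (an illustrative sketch under assumed names and units, not the patent's implementation; times are in seconds and frames are represented by their time points):

```python
def preview_frames(time_point, duration, marked_sections,
                   interval=10, count=8, direction="right"):
    """Sample `count` frame time points around `time_point`.

    direction: "right" extends toward the end of the video,
    "left" toward the beginning, "both" in both directions.
    """
    if direction == "right":
        points = [time_point + i * interval for i in range(1, count + 1)]
    elif direction == "left":
        points = [time_point - i * interval for i in range(1, count + 1)]
    else:  # bidirectional extension on the time axis
        half = count // 2
        points = ([time_point - i * interval for i in range(half, 0, -1)] +
                  [time_point + i * interval for i in range(1, count - half + 1)])
    points = [p for p in points if 0 <= p <= duration]  # stay inside the video

    def in_marked(p):
        return any(start <= p <= end for start, end in marked_sections)

    # If every sampled frame falls inside a marked (important and difficult)
    # section, play the frames dynamically; otherwise present them on the
    # screen for the user to select from.
    if points and all(in_marked(p) for p in points):
        return "dynamic_play", points
    return "present", points
```

For example, with a marked section [0, 200], previewing at 50 s samples frames at 60 s through 130 s, all inside the marked section, so they are played dynamically; previewing at 150 s samples frames beyond 200 s and falls back to presenting them.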
Exemplary devices
Having introduced the method of an exemplary embodiment of the present invention, a video processing apparatus according to an exemplary embodiment of the present invention will be described in detail with reference to fig. 7. As shown in fig. 7, the video processing apparatus 700 may include: a statistic unit 710 and a marking unit 720. Wherein:
the counting unit 710 is configured to count review times and/or labeling times of a reviewed and/or labeled section in the target video; and
the marking unit 720 is configured to extract and mark a section of the target video where the review number and/or the labeling number meet a predetermined condition according to the statistical result.
According to the video processing apparatus of the present exemplary embodiment, on one hand, counting the review times and/or annotation times of each section in the target video makes these statistics available for each section, which facilitates analyzing whether a section contains difficult and important content; on another hand, extracting and marking the sections that meet the predetermined condition based on the statistical result allows the important and difficult content of the target video to be marked automatically; on still another hand, automatically marking this content reduces the user's cognitive burden when watching the video and saves the time the user would otherwise spend searching for the difficult and important content.
Further, in the present exemplary embodiment, extracting and marking the sections of the target video, in which the review times and/or the annotation times meet the predetermined condition, according to the statistical result may include: comparing the statistical result with a first preset value; and extracting and marking sections of which the review times and/or the marking times are greater than or equal to the first preset value in the target video based on the comparison result.
Further, in the present exemplary embodiment, extracting and marking the sections of the target video, in which the review times and/or the annotation times meet the predetermined condition, according to the statistical result may include: and extracting and marking the section with the highest review times and/or marking times in the target video according to the statistical result.
Furthermore, in this exemplary embodiment, extracting and marking the sections of the target video, in which the review times and/or the annotation times meet the predetermined condition, according to the statistical result may further include: calculating the review rate and/or the labeling rate of each section according to the review times and/or the labeling times of the reviewed and/or labeled sections in the statistical result; comparing the review rate and/or the annotation rate with a second predetermined value; and extracting and marking sections with the review rate and/or the annotation rate larger than or equal to the second preset value in the target video based on the comparison result.
Further, in this example embodiment, counting the number of lookback times and/or the number of annotation times of the reviewed and/or annotated segment in the target video may include: acquiring the time interval of the reviewed and/or labeled section in each target video from review information and/or labeling information; dividing each time interval into a plurality of sub-time intervals according to the acquired overlapping relation of each time interval; and counting the review times and/or the labeling times of the section in which each sub-time interval is positioned.
Further, in the present exemplary embodiment, the video processing apparatus may further include: and the content dividing unit is used for dividing the target video into corresponding content sections according to the content of the target video.
Further, in the present exemplary embodiment, marking the section in which the review number and/or the annotation number in the target video meet the predetermined condition may include: and differentially marking sections, of which the review times and/or the marking times meet the preset conditions, in the target video.
Further, in the present exemplary embodiment, the video processing apparatus may further include: and the prompt information display unit is used for displaying prompt information on a picture when the marked section in the target video is played.
Further, in the present exemplary embodiment, the video processing apparatus may further include: and the annotation information display unit is used for displaying the annotation information on the picture when the target video is played.
Further, in this exemplary embodiment, the video processing apparatus may further include: the identification unit is used for receiving the user preview operation and identifying the time point of the user preview operation in the target video; a playback unit configured to extend on a time axis of the target video based on the time point and extract a predetermined number of frames from the target video at predetermined time intervals; if the extracted predetermined number of frames are all frames extracted from the marked section, dynamically playing the extracted predetermined number of frames; otherwise, the extracted predetermined number of frames are presented on the screen, and the selected frames are played in response to the user selecting a play operation.
Further, in the present exemplary embodiment, the extending on the time axis of the target video may include extending forward and backward bi-directionally, extending backward uni-directionally, or extending forward uni-directionally on the time axis of the target video.
Exemplary device
Having described the method and apparatus of an exemplary embodiment of the present invention, an electronic device in accordance with another exemplary embodiment of the present invention is described.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, an electronic device according to the invention may comprise at least one processing unit and at least one storage unit, wherein the storage unit stores program code that, when executed by the processing unit, causes the processing unit to perform the steps in the video processing method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of the present specification. For example, the processing unit may perform step S210 as shown in fig. 2: counting the review times and/or the labeling times of the sections reviewed and/or labeled in the target video; and step S220: extracting and marking sections of which the review times and/or the marking times meet the preset conditions in the target video according to the statistical result.
A video processing device 800 according to this embodiment of the invention is described below with reference to fig. 8. The video processing apparatus 800 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the video processing device 800 is in the form of a general purpose computing device. The components of the video processing device 800 may include, but are not limited to: the at least one processing unit 801, the at least one memory unit 802, and a bus 803 that couples various system components including the memory unit 802 and the processing unit 801.
The bus 803 includes a data bus, a control bus, and an address bus.
The storage unit 802 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)8021 and/or cache memory 8022, and may further include Read Only Memory (ROM) 8023.
Storage unit 802 can also include a program/utility 8025 having a set (at least one) of program modules 8024, such program modules 8024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The video processing device 800 may also communicate with one or more external devices 804 (e.g., a keyboard, a pointing device, a Bluetooth device, a display device, etc.). Such communication may be through input/output (I/O) interfaces 805. Also, the video processing device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 806. As shown, the network adapter 806 communicates with the other modules of the video processing device 800 over the bus 803. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the video processing device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Exemplary program product
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code for causing a device to perform the steps in the video processing method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of this specification, when the program product is run on the device, for example, the device may perform the step S210 as shown in fig. 2: counting the review times and/or the labeling times of the sections reviewed and/or labeled in the target video; and step S220: and extracting and marking sections of which the review times and/or the marking times meet the preset conditions in the target video according to the statistical result.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 9, a program product 900 for video processing according to an embodiment of the invention is depicted, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the video processing apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided into and embodied by a plurality of units.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor is the division into aspects limiting; that division is for convenience of description only, and the features in these aspects may be combined to advantage. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (20)

1. A video processing method, comprising:
counting the review times and/or the labeling times of the sections reviewed and/or labeled in the target video; and
extracting and marking sections of which the review times and/or the marking times meet preset conditions in the target video according to the statistical result;
receiving a user preview operation, and identifying a time point of the user preview operation in a target video;
extending on a time axis of the target video based on the time point and extracting a predetermined number of frames from the target video at predetermined time intervals;
if the extracted predetermined number of frames are all frames extracted from the marked section, dynamically playing the extracted predetermined number of frames;
otherwise, presenting the extracted predetermined number of frames on the screen, and playing the selected frames in response to the user selecting a playing operation;
wherein, counting the review times and/or the labeling times of the reviewed and/or labeled sections in the target video comprises:
acquiring the time interval of the reviewed and/or labeled section in each target video from review information and/or labeling information;
dividing each time interval into a plurality of sub-time intervals according to the acquired overlapping relation of each time interval;
and counting the review times and/or the labeling times of the section in which each sub-time interval is positioned.
2. The video processing method according to claim 1, wherein extracting and marking the segments of the target video for which the review times and/or the marking times meet the predetermined condition according to the statistical result comprises:
comparing the statistical result with a first preset value;
and extracting and marking sections of which the review times and/or the marking times are greater than or equal to the first preset value in the target video based on the comparison result.
3. The video processing method according to claim 1, wherein extracting and marking the segments of the target video for which the review times and/or the marking times meet the predetermined condition according to the statistical result comprises:
and extracting and marking the section with the highest review times and/or marking times in the target video according to the statistical result.
4. The video processing method according to claim 1, wherein extracting and marking the segments of the target video for which the review times and/or the marking times meet the predetermined condition according to the statistical result comprises:
calculating the review rate and/or the labeling rate of each section according to the review times and/or the labeling times of the reviewed and/or labeled sections in the statistical result;
comparing the review rate and/or the annotation rate with a second predetermined value;
and extracting and marking sections with the review rate and/or the annotation rate larger than or equal to the second preset value in the target video based on the comparison result.
5. The video processing method of claim 1, wherein the video processing method further comprises:
and dividing the target video into corresponding content sections according to the content of the target video.
6. The video processing method according to claim 1, wherein marking the sections in the target video for which the review count and/or the annotation count meet a predetermined condition comprises:
and differentially marking sections, of which the review times and/or the marking times meet the preset conditions, in the target video.
7. The video processing method of claim 1, wherein the video processing method further comprises:
and when the marked section in the target video is played, displaying prompt information on a picture.
8. The video processing method of claim 1, wherein the video processing method further comprises:
and displaying the annotation information on a picture when the target video is played.
9. The video processing method according to claim 1, wherein extending on the time axis of the target video comprises extending forward and backward bi-directionally, extending backward uni-directionally, or extending forward uni-directionally on the time axis of the target video.
10. A video processing apparatus comprising:
the counting unit is used for counting the review times and/or the labeling times of the sections which are reviewed and/or labeled in the target video; and
the marking unit is used for extracting and marking sections of which the review times and/or the labeling times meet the preset conditions in the target video according to the statistical result;
the identification unit is used for receiving the user preview operation and identifying the time point of the user preview operation in the target video;
a playback unit configured to extend on a time axis of the target video based on the time point and extract a predetermined number of frames from the target video at predetermined time intervals; if the extracted predetermined number of frames are all frames extracted from the marked section, dynamically playing the extracted predetermined number of frames; otherwise, presenting the extracted predetermined number of frames on the screen, and playing the selected frames in response to the user selecting a playing operation;
wherein the statistical unit is configured to:
acquiring the time interval of the reviewed and/or labeled section in each target video from review information and/or labeling information;
dividing each time interval into a plurality of sub-time intervals according to the acquired overlapping relation of each time interval;
and counting the review times and/or the labeling times of the section in which each sub-time interval is positioned.
11. The video processing apparatus according to claim 10, wherein extracting and marking the segments of the target video whose review number and/or marking number meet a predetermined condition according to the statistical result comprises:
comparing the statistical result with a first preset value;
and extracting and marking sections of which the review times and/or the marking times are greater than or equal to the first preset value in the target video based on the comparison result.
12. The video processing apparatus according to claim 10, wherein extracting and marking the segments of the target video whose review number and/or marking number meet a predetermined condition according to the statistical result comprises:
and extracting and marking the section with the highest review times and/or marking times in the target video according to the statistical result.
13. The video processing apparatus according to claim 10, wherein extracting and marking the segments of the target video whose review number and/or marking number meet a predetermined condition according to the statistical result comprises:
calculating the review rate and/or the labeling rate of each section according to the review times and/or the labeling times of the reviewed and/or labeled sections in the statistical result;
comparing the review rate and/or the annotation rate with a second predetermined value;
and extracting and marking sections with the review rate and/or the annotation rate larger than or equal to the second preset value in the target video based on the comparison result.
14. The video processing device according to claim 10, wherein the video processing device further comprises:
and the content dividing unit is used for dividing the target video into corresponding content sections according to the content of the target video.
15. The video processing apparatus according to claim 10, wherein marking the sections in the target video for which the review count and/or the annotation count satisfy a predetermined condition comprises:
and differentially marking sections, of which the review times and/or the marking times meet the preset conditions, in the target video.
16. The video processing apparatus according to claim 10, further comprising:
a prompt information display unit configured to display prompt information on the screen when a marked section of the target video is played.
17. The video processing apparatus according to claim 10, further comprising:
an annotation information display unit configured to display the annotation information on the screen when the target video is played.
18. The video processing apparatus according to claim 10, wherein the extending on the time axis of the target video comprises extending bidirectionally both forward and backward, extending backward unidirectionally, or extending forward unidirectionally on the time axis of the target video.
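The three extension modes of claim 18 can be expressed as one interval operation. This sketch is not part of the patent text; it assumes "forward" means toward earlier times and "backward" toward later times, which is only one reading of the claim, and all names and values are illustrative.

```python
def extend_interval(start, end, duration, pre=0.0, post=0.0):
    """Extend a [start, end] interval (seconds) on the video timeline.

    pre  > 0 extends toward earlier times, post > 0 toward later times;
    setting both gives the bidirectional case, setting only one gives
    either unidirectional case. The result is clamped to [0, duration].
    """
    return max(0.0, start - pre), min(duration, end + post)

# Bidirectional extension of a marked section, clamped to the video bounds.
extended = extend_interval(10.0, 15.0, duration=60.0, pre=3.0, post=3.0)
# extended == (7.0, 18.0)
```

Clamping matters near the ends of the video: an extension that would run past 0 or past the total duration is truncated rather than producing an invalid interval.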
19. An electronic device, comprising:
a processing unit; and
a storage unit having stored thereon a computer program which, when executed by the processing unit, implements the video processing method according to any one of claims 1 to 9.
20. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the video processing method according to any one of claims 1 to 9.
CN201710146598.9A 2017-03-13 2017-03-13 Electronic device, video processing method and apparatus, and storage medium Active CN106878773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710146598.9A CN106878773B (en) 2017-03-13 2017-03-13 Electronic device, video processing method and apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN106878773A CN106878773A (en) 2017-06-20
CN106878773B true CN106878773B (en) 2020-04-28

Family

ID=59170732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710146598.9A Active CN106878773B (en) 2017-03-13 2017-03-13 Electronic device, video processing method and apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN106878773B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197378A (en) * 2017-06-23 2017-09-22 Shenzhen Tinno Wireless Technology Co., Ltd. Method and device for processing video information
CN108024146A (en) * 2017-12-14 2018-05-11 Shenzhen TCL Digital Technology Co., Ltd. Automatic news interface setting method, smart television and computer-readable storage medium
CN110099308B (en) * 2019-05-15 2022-06-10 Communication University of Zhejiang Method for quickly segmenting and extracting hot intervals of audio/video programs

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1969274A (en) * 2004-05-04 2007-05-23 Thomson Licensing Method and apparatus for reproducing a user-preferred document out of a plurality of documents
CN101783915A (en) * 2010-03-19 2010-07-21 Beijing Gridsum Technology Co., Ltd. Method for realizing video quantification
CN102955858A (en) * 2012-11-09 2013-03-06 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, system and server for searching and ranking video files
CN104092690A (en) * 2014-07-15 2014-10-08 Jinya Technology Co., Ltd. System and method for controlling media stream playback bandwidth of streaming media
CN104662836A (en) * 2012-09-28 2015-05-27 Samsung Electronics Co., Ltd. Apparatus and method for transmitting/receiving buffering data in media streaming service
CN105848001A (en) * 2016-03-28 2016-08-10 LeEco Holding (Beijing) Co., Ltd. Video playback control method and video playback control device
CN106170104A (en) * 2016-07-01 2016-11-30 Guangzhou Huaduo Network Technology Co., Ltd. Method, device and server for determining video highlight segments

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068614B2 (en) * 2013-04-26 2018-09-04 Microsoft Technology Licensing, Llc Video service with automated video timeline curation


Also Published As

Publication number Publication date
CN106878773A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
CN106303723B (en) Video processing method and device
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
US20190130185A1 (en) Visualization of Tagging Relevance to Video
US11205254B2 (en) System and method for identifying and obscuring objectionable content
US20210160553A1 (en) Method and system of displaying a video
US20140304730A1 (en) Methods and apparatus for mandatory video viewing
US9525896B2 (en) Automatic summarizing of media content
CN109120954B (en) Video message pushing method and device, computer equipment and storage medium
CN106878773B (en) Electronic device, video processing method and apparatus, and storage medium
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
US20190379919A1 (en) System and method for perspective switching during video access
CN113727170A (en) Video interaction method, device, equipment and medium
CN112866809A (en) Video processing method and device, electronic equipment and readable storage medium
CN113128185A (en) Interaction method and device and electronic equipment
US20170004859A1 (en) User created textbook
US20170161871A1 (en) Method and electronic device for previewing picture on intelligent terminal
CN112887794A (en) Video editing method and device
CN115379136A (en) Special effect prop processing method and device, electronic equipment and storage medium
CN112738629B (en) Video display method and device, electronic equipment and storage medium
CN115424125A (en) Media content processing method, device, equipment, readable storage medium and product
KR20150090097A (en) Enhanced information collection environments
CN110703971A (en) Method and device for publishing information
US20170139547A1 (en) Recognition and display of reading progress
US20230276102A1 (en) Object-based video commenting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant