CN110602560B - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN110602560B
CN110602560B (application number CN201810600390.4A)
Authority
CN
China
Prior art keywords
video
time
time point
segment
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810600390.4A
Other languages
Chinese (zh)
Other versions
CN110602560A (en)
Inventor
周杨
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN201810600390.4A priority Critical patent/CN110602560B/en
Publication of CN110602560A publication Critical patent/CN110602560A/en
Application granted granted Critical
Publication of CN110602560B publication Critical patent/CN110602560B/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 - Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 - Recording operations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 - Generation of protective data, e.g. certificates
    • H04N21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 - Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The disclosure relates to a video processing method and device. The method comprises: during recording of a live video, taking the already-recorded portion of the live video as a target video; playing the target video in a first playing window of a processing interface; when a marking operation for a playing time point of the target video is detected, determining a marked time point of the target video; and when a segment extraction operation for the target video is detected, obtaining a video segment of the target video according to a selected first time point among the marked time points. The method and device provided by the embodiments of the disclosure can convert a live video into video clips that users can view on demand; clip extraction is efficient, fast, and accurate, which simplifies the clip acquisition workflow and shortens the time needed to bring clips online.

Description

Video processing method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method and apparatus.
Background
In the related art, users can watch live television programs, such as live variety shows and TV dramas, on a television. After obtaining the copyright to a television program, a video website operator can record the live broadcast and, once the broadcast ends, publish the recording on its website, converting the live program into an on-demand recording for users to view. However, in the related art this live-to-on-demand conversion is inefficient and slow, users wait a long time, and their viewing needs cannot be met.
Disclosure of Invention
In view of this, the present disclosure provides a video processing method and apparatus.
According to a first aspect of the present disclosure, there is provided a video processing method, the method comprising:
in the process of recording the live video, taking the recorded part of the live video as a target video;
playing the target video in a first playing window of a processing interface;
when a marking operation aiming at the playing time point of the target video is detected, determining the marking time point of the target video;
when the segment extraction operation aiming at the target video is detected, the video segment of the target video is obtained according to the selected first time point in the marked time points.
For the above method, in a possible implementation manner, when a segment extracting operation for the target video is detected, obtaining a video segment of the target video according to a selected first time point of the marked time points, including:
when video clips to be deleted exist in a first time period defined by the first time point, obtaining a plurality of first video clips to be synthesized which do not contain the video clips to be deleted according to the first time point and the starting time point and the ending time point of the video clips to be deleted;
and synthesizing the plurality of first video clips to be synthesized to obtain the video clips.
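The interval logic of this implementation, cutting the first time period around any to-be-deleted clips, can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and the use of seconds as the time unit are assumptions:

```python
def split_around_deletions(first_period, deletions):
    """Given the first time period (start, end) defined by the selected first
    time points, and the (start, end) intervals of to-be-deleted clips, return
    the remaining sub-intervals (the first to-be-synthesized video clips).

    All times are in seconds; deletion intervals are assumed not to overlap.
    """
    start, end = first_period
    pieces = []
    cursor = start
    for d_start, d_end in sorted(deletions):
        # Clamp the deletion to the first period; skip non-overlapping ones.
        d_start, d_end = max(d_start, start), min(d_end, end)
        if d_start >= d_end:
            continue
        if cursor < d_start:
            pieces.append((cursor, d_start))
        cursor = max(cursor, d_end)
    if cursor < end:
        pieces.append((cursor, end))
    return pieces
```

For example, a ten-minute period with two ad breaks yields three clips to synthesize: `split_around_deletions((0, 600), [(120, 180), (400, 430)])` returns `[(0, 120), (180, 400), (430, 600)]`.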
For the above method, in a possible implementation manner, the selected first time point includes multiple pairs of first time points, and obtaining a video clip of the target video according to the selected first time point in the marked time points includes:
respectively acquiring a second video clip to be synthesized corresponding to each pair of first time points;
and synthesizing the plurality of second video clips to be synthesized to obtain the video clips.
For the above method, in one possible implementation, the method further includes:
and when the segment deleting operation aiming at the target video is detected, determining the video segment to be deleted according to the starting time point and the ending time point of the selected video segment to be deleted in the marked time points.
For the above method, in one possible implementation, the method further includes:
when detecting a label setting operation for a selected second time point in the marked time points, adding a label to the second time point, wherein the label comprises at least one of a category and video content corresponding to the second time point.
For the above method, in one possible implementation, the method further includes:
transcoding the video segment to obtain a transcoded video segment;
and sending the transcoded video clip to a cloud server.
For the above method, in one possible implementation, the method further includes:
displaying video information of the video clip in a task list of the processing interface,
wherein the video information comprises at least one of: the name of the video clip, the duration of the video clip, the processing state of the video clip, the generation time of the video clip, and the identification of the video clip.
For the above method, in one possible implementation, the method further includes:
displaying a timeline of the target video in the processing interface,
the time axis comprises at least one of the marking time point, a label of the marking time point, a position of a current video frame shown in the first playing window in the time axis, a time period corresponding to the video clip and a time period corresponding to the video clip to be deleted, and the label comprises at least one of a category corresponding to the marking time point and video content.
For the above method, in one possible implementation, the method further includes:
adjusting the playing progress of the target video in the first playing window according to the detected playing progress adjusting operation,
wherein the play progress adjustment operation includes any one of: start/pause play operations, advance one or more video clip operations, retreat one or more video clip operations, advance one or more video frame operations, retreat one or more video frame operations, advance one or more video key frame operations, retreat one or more video key frame operations, advance one or more preset time period operations, retreat one or more preset time period operations, play one or more video frame operations at the beginning of the target video, and play one or more video frame operations at the end of the target video.
For the above method, in one possible implementation, the method further includes:
presenting time information related to the target video in the first play window,
wherein the time information comprises at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
For the above method, in one possible implementation, the method further includes:
playing the target video in a second playing window of the processing interface, and keeping the playing progress of the second playing window as the real-time recording progress of the target video;
and displaying the actual time corresponding to the real-time recording progress in the second playing window.
For the above method, in one possible implementation, the marking of the time point includes at least one of: the clip extraction time point of the target video, the viewpoint time point, the leader end time point, the trailer start time point, the start time point and the end time point of the creative insertion, and the start time point and the end time point of the video clip to be deleted,
wherein the video segment is tagged with a third point in time, the third point in time comprising a tagged point in time within a time period corresponding to the video segment.
According to a second aspect of the present disclosure, there is provided a video processing apparatus, the apparatus comprising:
the live video recording module is used for taking the recorded part of the live video as a target video in the process of recording the live video;
the first video playing module plays the target video in a first playing window of the processing interface;
the time point marking module is used for determining the marking time point of the target video when the marking operation aiming at the playing time point of the target video is detected;
and the video clip extraction module is used for obtaining the video clip of the target video according to the selected first time point in the marked time points when the clip extraction operation aiming at the target video is detected.
For the apparatus, in a possible implementation manner, the video segment extracting module includes:
a first to-be-synthesized segment extraction sub-module, configured to, when a to-be-deleted video segment exists within a first time period defined by the first time point, obtain, according to the first time point and start and end time points of the to-be-deleted video segment, a plurality of first to-be-synthesized video segments that do not include the to-be-deleted video segment;
and the first synthesis processing sub-module is used for synthesizing the plurality of first video clips to be synthesized to obtain the video clips.
For the apparatus, in a possible implementation manner, the selected first time point includes a plurality of pairs of first time points, and the video segment extracting module includes:
the second to-be-synthesized segment extraction sub-module is used for respectively acquiring second to-be-synthesized video segments corresponding to each pair of first time points;
and the second synthesis processing submodule is used for synthesizing a plurality of second video clips to be synthesized to obtain the video clips.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
and the to-be-deleted video clip determining module is used for determining the to-be-deleted video clip according to the starting time point and the ending time point of the selected to-be-deleted video clip in the marked time points when the clip deletion operation aiming at the target video is detected.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
and the label setting module is used for adding a label to a second time point when detecting label setting operation aiming at the selected second time point in the marked time points, wherein the label comprises at least one of a category and video content corresponding to the second time point.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
the video segment transcoding module is used for transcoding the video segment to obtain a transcoded video segment;
and the video segment sending module is used for sending the transcoded video segment to a cloud server.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
a video information display module for displaying the video information of the video clip in the task list of the processing interface,
wherein the video information comprises at least one of: the name of the video clip, the duration of the video clip, the processing state of the video clip, the generation time of the video clip, and the identification of the video clip.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
a timeline presentation module that presents a timeline of the target video in the processing interface,
the time axis comprises at least one of the marking time point, a label of the marking time point, a position of a current video frame shown in the first playing window in the time axis, a time period corresponding to the video clip and a time period corresponding to the video clip to be deleted, and the label comprises at least one of a category corresponding to the marking time point and video content.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
a playing progress adjusting module for adjusting the playing progress of the target video in the first playing window according to the detected playing progress adjusting operation,
wherein the play progress adjustment operation includes any one of: start/pause play operations, advance one or more video clip operations, retreat one or more video clip operations, advance one or more video frame operations, retreat one or more video frame operations, advance one or more video key frame operations, retreat one or more video key frame operations, advance one or more preset time period operations, retreat one or more preset time period operations, play one or more video frame operations at the beginning of the target video, and play one or more video frame operations at the end of the target video.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
a time information presentation module that presents time information related to the target video in the first play window,
wherein the time information comprises at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
For the above apparatus, in one possible implementation manner, the apparatus further includes:
the second video playing module plays the target video in a second playing window of the processing interface and keeps the playing progress of the second playing window as the real-time recording progress of the target video;
and the recording progress display module is used for displaying the actual time corresponding to the real-time recording progress in the second playing window.
For the above apparatus, in one possible implementation, the marking time point includes at least one of: the clip extraction time point of the target video, the viewpoint time point, the leader end time point, the trailer start time point, the start time point and the end time point of the creative insertion, and the start time point and the end time point of the video clip to be deleted,
wherein the video segment is tagged with a third point in time, the third point in time comprising a tagged point in time within a time period corresponding to the video segment.
According to a third aspect of the present disclosure, there is provided a video processing apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the video processing method described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described video processing method.
According to the video processing method and device provided by the embodiment of the disclosure, in the process of recording the live video, the recorded part of the live video is used as the target video; playing a target video in a first playing window of the processing interface; when a marking operation aiming at the playing time point of the target video is detected, determining the marking time point of the target video; when the segment extraction operation aiming at the target video is detected, the video segment of the target video is obtained according to the selected first time point in the marked time points. The live video can be converted into the video clip for the user to view on demand, the efficiency of extracting the video clip is high, the speed is high, the accuracy is high, the operation process of obtaining the video clip is simplified, and the online time of the video clip is shortened.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of step S104 in a video processing method according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of step S104 in a video processing method according to an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 5 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 6 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 7 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 8 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 9 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 10 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 11 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 12 is a schematic diagram illustrating an application scenario of a video processing method according to an embodiment of the present disclosure.
Fig. 13 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
Fig. 14 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
Fig. 15 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
Fig. 16 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may be applied to a terminal (e.g., a computer) or a server (e.g., a cloud server). The method includes steps S101 to S104.
In step S101, in the process of recording the live video, the recorded portion of the live video is taken as the target video.
In step S102, the target video is played in a first play window of the processing interface.
In step S103, upon detection of a marking operation for a play time point of the target video, a marking time point of the target video is determined.
In step S104, when the segment extracting operation for the target video is detected, a video segment of the target video is obtained according to the selected first time point of the marked time points.
In this embodiment, the live video may be a video broadcast live by a television station, a website, or the like, and may include TV dramas, variety shows, galas, concerts, and so on; the disclosure is not limited in this respect. The live video can be recorded according to its live broadcast information, which may include information related to the live video such as its broadcast date and air time. The live broadcast information may also include the live source of the video; for example, the live source may identify the television station broadcasting it. The live broadcast information may be preset, or the live video may be recorded according to live broadcast information configured by a video operator in real time; the disclosure is not limited in this respect.
In this embodiment, the recorded portion of the live video is used as the target video, so the recorded portion can be obtained in real time while the broadcast is still in progress, and the target video is continuously and automatically updated to remain the entirety of what has been recorded so far. The latest target video (the recorded portion of the live video) can also be refreshed in response to a detected target-video update operation, so that if automatic updating fails or playback stutters because of network speed, device processing speed, or other factors, a video operator can update the target video manually and keep the video in the first playing window current. For example, an update control for the target video may be presented in the first playing window, and the target video is updated when a click, double-click, or other trigger operation on that control is detected.
In this embodiment, the processing interface is an interface displayed for the video processing staff and convenient for the video processing staff to process the target video based on the processing interface. The processing interface may include a playing window for playing the target video, and may further include a control for processing the target video, and when it is detected that the control is triggered, the target video is correspondingly processed.
In this embodiment, part or all of the marked time points may be marked in the target video, so as to facilitate subsequent processing of the target video based on the marked time points of the target video. In addition, the extracted video clip can also include a mark time point, so that the video clip can be conveniently processed at the later stage and checked by a user after online.
In this embodiment, the playing of the target video may also be adjusted according to the detected adjustment operations for the size, volume, definition, and the like of the target video. For example, a volume adjustment control may be presented in the first play window, and the volume of the target video is increased or decreased when an adjustment operation for the volume adjustment control is detected. The definition adjusting control can be displayed in the first playing window, different definition options are provided for video processing personnel when the adjusting operation aiming at the definition adjusting control is detected, and the target video played in the first playing window is displayed according to the selected definition options.
In one possible implementation, the marked time points may include at least one of: a clip extraction time point of the target video, a viewpoint time point, a leader (opening) end time point, a trailer (ending) start time point, start and end time points of a creative insert, and start and end time points of a video clip to be deleted. The video segment is tagged with a third time point, the third time point comprising a marked time point within the time period corresponding to the video segment.
The clip extraction time point may be any time point in the target video at which clip extraction can be performed; for example, it may be the moment a certain character or person appears, a plot transition occurs, or a scene changes. The viewpoint time point may correspond to a dramatic or notable moment in the target video, for example a quarrel between characters, or the audience applauding during a program. The leader end time point may be the moment the target video finishes playing its leader and begins the feature (e.g., the main body of a TV drama or film, or the actual content of a TV program); the leader may be the opening song, a trailer before the feature, an advertisement, and the like. The trailer start time point may be the moment the feature ends and the trailer begins; the trailer may be content related to the target video played after the feature, such as an ending song, credits and acknowledgements, a preview, or an advertisement. The creative insert may be an in-context episode performed by characters in the target video, which can be used to promote the target video, a particular product, and the like.
The video clip to be deleted may be a clip unsuitable for extraction, for example an advertisement segment, or violent, gory, or otherwise inappropriate content in the target video. The tag of a marked time point may record the category and video content corresponding to that point, so that a user can determine the video content corresponding to the third time point from the tag. Those skilled in the art can set the marked time points according to actual needs; the disclosure is not limited in this respect.
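The marked time points and tags described above can be modeled as simple records. The sketch below is one possible data model, with category names and fields invented for illustration:

```python
from dataclasses import dataclass

# Assumed category vocabulary, derived from the kinds of marked time points
# described in this disclosure (not names used by the patent itself).
MARK_CATEGORIES = {
    "clip_extraction", "viewpoint", "leader_end", "trailer_start",
    "creative_insert_start", "creative_insert_end",
    "delete_start", "delete_end",
}

@dataclass
class MarkedTimePoint:
    time: float        # seconds from the start of the target video
    category: str      # one of MARK_CATEGORIES
    label: str = ""    # optional tag, e.g. a short note on the video content

    def __post_init__(self):
        if self.category not in MARK_CATEGORIES:
            raise ValueError(f"unknown mark category: {self.category}")
```

A clip extraction operation would then select pairs of such records, and a tag-setting operation would fill in the `label` field.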
In this embodiment, a marking operation control corresponding to the marking operation may be presented in the processing interface, and when a trigger operation such as a click, double click, or touch on the marking operation control is detected, a marked time point of the target video is determined in combination with a time point selected before or after the trigger operation. There may be one or more such controls, and different controls may be set for different types of marked time points: one control may be set for marked time points that need a tag, and another for marked time points that do not. For example, one control may be set for the segment extraction time point and another control for the remaining marked time points.
In this embodiment, a first segment extraction operation control for segment extraction may be presented in a processing interface, and when a trigger operation such as clicking, double-clicking, touching, or the like is detected for the first segment extraction operation control, a selected first time point is determined from the marked time points in combination with an operation of selecting the time point detected before or after the trigger operation. And then, extracting a corresponding video segment from the target video according to the first time point.
In this embodiment, when the first time point corresponding to the video segment changes, a new video segment is obtained according to the changed first time point, and corresponding segment update is performed based on the new video segment, so that the operation of updating the video segment is simplified, and the speed of updating the video segment is increased.
According to the video processing method provided by this embodiment of the present disclosure, in the process of recording a live video, the recorded part of the live video is used as the target video; the target video is played in a first playing window of the processing interface; when a marking operation for a playing time point of the target video is detected, a marked time point of the target video is determined; and when a segment extraction operation for the target video is detected, a video clip of the target video is obtained according to a selected first time point among the marked time points. In this way, the live video can be converted into video clips for users to view on demand; the extraction of video clips is efficient, fast, and accurate; the operation of obtaining video clips is simplified; and the time for the video clips to come online is shortened.
Fig. 2 shows a flowchart of step S104 in a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2, step S104 may include step S1041 and step S1042.
In step S1041, when there is a video segment to be deleted within a first time period defined by a first time point, a plurality of first video segments to be synthesized that do not include the video segment to be deleted are obtained according to the first time point and start and end time points of the video segment to be deleted.
In step S1042, a plurality of first video clips to be synthesized are subjected to synthesizing processing, and video clips are obtained.
In this implementation, the marked time points included in the first time period may be identified to determine whether there is a video segment to be deleted in the first time period. The extracted video clips are ensured not to contain the video clips to be deleted, the continuity of the video clips is ensured, and the user can check the video clips conveniently.
In this implementation, the merging processing performed on the plurality of first video segments to be synthesized may be performed according to a time sequence in which the first video segments to be synthesized appear in the target video, so that the plurality of first video segments to be synthesized in the video segments can appear in sequence according to a playing sequence thereof in the target video, which is convenient for a user to understand contents of the video segments. And a plurality of first video segments to be synthesized can be combined together according to a set sequence, so that the interestingness of the video segments is enhanced, and the personalized requirements of users are met.
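Steps S1041–S1042 amount to subtracting the to-be-deleted intervals from the first time period and then concatenating the remainder in play order. A hedged sketch of the interval arithmetic only — the actual cutting and merging of media data is abstracted away, and the function name is an assumption:

```python
def split_out_deletions(period, deletions):
    """Subtract the to-be-deleted intervals from `period`.

    period:    (start, end) in seconds, defined by the first time points.
    deletions: list of (start, end) intervals to exclude.
    Returns the first video segments to be synthesized, in the order
    they appear in the target video.
    """
    start, end = period
    pieces = []
    cursor = start
    for d_start, d_end in sorted(deletions):
        # Clamp the deletion to the period; skip it if it lies outside.
        d_start, d_end = max(d_start, start), min(d_end, end)
        if d_start >= d_end:
            continue
        if d_start > cursor:
            pieces.append((cursor, d_start))
        cursor = max(cursor, d_end)
    if cursor < end:
        pieces.append((cursor, end))
    return pieces

# A 60-180 s period with two deletions yields three segments to synthesize,
# which are then merged chronologically into the final video clip.
segments = split_out_deletions((60, 180), [(100, 110), (150, 155)])
print(segments)  # [(60, 100), (110, 150), (155, 180)]
```

Merging `segments` in the returned order reproduces the default chronological synthesis; reordering the list before merging gives the "set sequence" variant described above.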
Fig. 3 shows a flowchart of step S104 in a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 3, the selected first time points may include a plurality of pairs of first time points, and step S104 may include step S1043 and step S1044.
In step S1043, second video segments to be synthesized corresponding to each pair of first time points are respectively obtained.
In step S1044, a plurality of second video clips to be synthesized are synthesized, so as to obtain video clips.
In this implementation, the plurality of second video segments to be synthesized may share a certain common characteristic, so that the obtained video clip can meet the user's viewing requirement for that characteristic. The common characteristic may be that the video content of the segments is similar or identical, or that the same characters or persons appear in them. For example, the plurality of second video segments to be synthesized may be a plurality of segments of the target video in which role A appears, or a plurality of segments of the target video in which song performances occur. The plurality of second video segments to be synthesized may also be any segments in the target video, which is not limited by the present disclosure.
In this implementation, a second segment extraction operation control for segment extraction may be displayed in the processing interface, and when a trigger operation such as clicking, double-clicking, or touching on the second segment extraction operation control is detected, the selected pairs of first time points are determined from the marked time points by combining with the detected operation of selecting time points before or after the trigger operation. And then, respectively acquiring second video clips to be synthesized corresponding to each pair of first time points, and synthesizing the plurality of second video clips to be synthesized to obtain the video clips.
In this implementation manner, the merging processing performed on the plurality of second video segments to be synthesized may be performed according to a time sequence in which the second video segments to be synthesized appear in the target video, so that the plurality of second video segments to be synthesized in the video segments can appear in sequence according to the playing sequence of the second video segments in the target video, which is convenient for a user to understand the content in the video segments. And a plurality of second video segments to be synthesized can be combined together according to a set sequence, so that the interest of the video segments is enhanced, and the personalized requirements of users are met.
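Steps S1043–S1044 can be sketched as extracting one segment per pair of first time points and concatenating them either chronologically or in a caller-specified set sequence. The function name and `order` parameter are illustrative assumptions:

```python
def extract_pairs(pairs, order=None):
    """Turn pairs of first time points into an ordered clip plan.

    pairs: list of (start, end) first-time-point pairs, in seconds.
    order: optional list of indices into `pairs` giving a set sequence;
           by default the segments appear in their chronological
           (playing) order in the target video.
    """
    # Normalize each pair so start <= end regardless of selection order.
    segments = [(min(a, b), max(a, b)) for a, b in pairs]
    if order is None:
        return sorted(segments)          # play order in the target video
    return [segments[i] for i in order]  # personalized set sequence

pairs = [(300, 360), (30, 90), (120, 150)]
print(extract_pairs(pairs))             # [(30, 90), (120, 150), (300, 360)]
print(extract_pairs(pairs, [0, 2, 1]))  # [(300, 360), (120, 150), (30, 90)]
```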
In this implementation, a whole-file merging control for extracting the whole target video may be displayed in the processing interface. When a trigger operation such as a click or double click on the whole-file merging control is detected, whole-file extraction is performed on the target video, and a whole-file video clip that does not contain the video segments to be deleted is extracted from the target video. This simplifies the operation of extracting the whole video and saves video processing time.
By the method, the nonadjacent second video clips to be synthesized in the target video can be extracted, and the required video clips are obtained after the merging processing, so that the continuous watching requirement of a user on the second video clips to be synthesized can be met.
Fig. 4 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 4, the method may further include step S105.
In step S105, upon detecting a section deletion operation for the target video, a video section to be deleted is determined according to the start time point and the end time point of the selected video section to be deleted among the marker time points.
In this implementation, a segment deletion operation control for segment deletion may be displayed in the processing interface, and when a trigger operation such as clicking, double-clicking, touching, or the like is detected for the segment deletion operation control, a start time point and an end time point of a selected video segment to be deleted are determined from the mark time points in combination with a detected operation of selecting a time point before or after the trigger operation. And then, determining the video clip to be deleted according to the starting time point and the ending time point of the selected video clip to be deleted.
In this implementation, when a trigger operation such as a click, double click, or touch on the segment deletion operation control is detected, in combination with an operation, detected before or after the trigger operation, of selecting an already-marked video segment to be deleted, the selected segment may instead be determined as a video segment that no longer needs to be deleted. In this way, the video segments to be deleted can be modified at any time as needed, meeting the requirements of extracting different video clips. A separate control for restoring deleted segments may also be provided to implement the above process, which is not limited by the present disclosure.
In this implementation, when a trigger operation such as clicking, double clicking, touching, or the like for the segment deletion operation control is detected, the start time point and the end time point of the selected video segment to be deleted are modified in conjunction with a modification operation that selects the start time point and the end time point of the video segment to be deleted, which is detected before or after the trigger operation. Therefore, the starting time point and the ending time point of the video clip to be deleted can be modified at any time according to needs.
In this implementation, after a video segment to be deleted is determined, it may be deleted directly from the target video. Alternatively, the video segment to be deleted may only be given a deletion mark in the target video, so that during segment extraction the video segments to be deleted can be determined according to the deletion marks, and video clips that do not contain them can be extracted from the target video.
In this implementation, step S105 may be performed before, after, or simultaneously with step S104, which is not limited by the present disclosure.
In this embodiment, when a video segment to be deleted existing within the first time period defined by the first time points changes, a new video clip is obtained according to the changed video segment to be deleted, and a corresponding clip update is performed based on the new video clip, so as to ensure that the video clip never contains a video segment to be deleted.
Fig. 5 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 5, the method may further include step S106.
In step S106, when a tag setting operation for a selected second time point of the marked time points is detected, a tag is added to the second time point, where the tag may include at least one of a category and video content corresponding to the second time point.
In this implementation, the category corresponding to the second time point may be determined according to the nature of the marked time point and the corresponding video content, for example, title or trailer. The tag may take the form of text, a picture, or a combination of the two. The picture may be a video frame selected from the target video, or another picture capable of representing the characteristics of the second time point, which is not limited by the present disclosure.
In this implementation, a tag setting operation control for performing the tag setting operation may be provided in the processing interface, and when a trigger operation such as a click or a double click on the tag setting operation control is detected, the selected second time point may be determined from the marked time points in combination with an operation of selecting a time point detected before or after the trigger operation. Then, a tag is added to the second time point based on the detected input data.
In this implementation, when a marking operation for a playing time point of the target video is detected and the marked time point of the target video has been determined, a tag marking reminder may be sent to remind the video processing personnel to set a tag for the marked time point, and a tag may be added to the second time point according to the detected input data.
Through the mode, the labels are added to part or all of the marked time points, so that video processing personnel can conveniently perform subsequent processing on the target video. Meanwhile, in the process of watching the video clip, the user can understand the video content of the video clip based on the label marking the time point and adjust the playing progress of the video clip according to the label and the preference of the user.
Fig. 6 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 6, the method may further include step S107 and step S108.
In step S107, the video segment is transcoded to obtain a transcoded video segment.
In step S108, the transcoded video clip is sent to the cloud server.
In this implementation, the transcoded video clip may also be uploaded to a server, so that a user may view the video clip through the server or cloud server where it is located (hereinafter, for convenience of description, "uploading" refers to the process of sending the transcoded video clip to the cloud server or the server). The extracted video segments may each be transcoded, and the transcoded video clips then uploaded. Alternatively, the whole target video may be transcoded in advance, in full or in part, and after the video clips are determined, the corresponding clips are extracted from the fully or partially transcoded target video. If all the extracted clips have already been transcoded, the transcoded video clips are uploaded directly. If some extracted clips are transcoded and others are not, the untranscoded clips are transcoded first, and the resulting transcoded video clips are then uploaded. In this way, the speed of sending the transcoded video clips to the cloud server or the server can be increased.
For example, a transcoding control for video transcoding may be displayed in the processing interface, and when a trigger operation such as a click or double click on the transcoding control is detected, the selected video clip or target video is transcoded in combination with a selection operation, detected before or after the trigger operation, of selecting the video clip or target video, so as to obtain a transcoded video clip or transcoded target video.
In this implementation, step S107 and step S108 may be directly performed after obtaining the video clip of the target video based on detecting the clip extraction operation for the target video. Step S107 and step S108 may also be executed after a transcoding operation for a certain video segment is detected.
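The transcode-then-upload logic of steps S107–S108 — upload clips that are already transcoded, and transcode the rest first — can be sketched as below. The `Clip` structure and both function bodies are assumptions for illustration; real transcoding jobs and network transfers are stubbed out:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    transcoded: bool = False

def transcode(clip):
    clip.transcoded = True  # stand-in for a real transcoding job
    return clip

def upload_clips(clips):
    """Ensure every clip is transcoded, then 'upload' it.

    Returns the names of clips sent to the cloud server, in order.
    Clips already transcoded (e.g. because the whole target video was
    transcoded in advance) skip straight to upload, which is what
    speeds up delivery in the scheme described above.
    """
    uploaded = []
    for clip in clips:
        if not clip.transcoded:
            transcode(clip)
        uploaded.append(clip.name)  # stand-in for the actual upload
    return uploaded

clips = [Clip("opening", transcoded=True), Clip("highlight")]
print(upload_clips(clips))  # ['opening', 'highlight']
```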
Fig. 7 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 7, the method may further include step S109.
In step S109, video information of the video clip is presented in the task list of the processing interface. Wherein the video information may include at least one of: the name of the video clip, the duration of the video clip, the processing status of the video clip, the generation time of the video clip, and the identification of the video clip.
The name of the video clip may be a word or a sentence determined according to the video content of the video clip. The duration of the video clip may be the length of time the video clip plays. The processing state of the video clip may include a transcoding state indicating whether the video clip is being transcoded and whether transcoding is completed, an uploading state indicating whether the video clip is being uploaded (sent to a cloud server or a server) and whether uploading is completed, and an updating state indicating whether the video clip is being updated. Whether the processing state is abnormal, and its specific situation, can be shown through an indication icon. For example, when the indication icon corresponding to a video clip is blue, the processing state of the video clip is normal; when the icon is red, the processing state is abnormal; and when the icon is green, the video clip has been uploaded successfully. The generation time of the video clip may be the time when the video clip was extracted, or the time when it was successfully uploaded. The identification of the video clip may be a number, an ID, or the like that distinguishes the video clip from other video clips. Those skilled in the art can set the video information displayed in the task list according to actual needs, which is not limited by the present disclosure.
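The indication-icon convention described above (blue while processing normally, red on an abnormality, green once uploaded) is a simple state-to-colour mapping. A sketch in which the state strings themselves are chosen for illustration:

```python
def status_icon_color(state):
    """Map a clip processing state to its indication icon colour.

    Follows the convention described above: green once the clip has
    been uploaded successfully, red on any abnormal state, blue while
    processing proceeds normally.
    """
    if state == "uploaded":
        return "green"
    if state.endswith("_failed"):  # e.g. "transcode_failed", "upload_failed"
        return "red"
    return "blue"                  # e.g. "transcoding", "uploading", "updating"

print(status_icon_color("transcoding"))    # blue
print(status_icon_color("upload_failed"))  # red
print(status_icon_color("uploaded"))       # green
```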
In this implementation, when the amount of video information of video clips in the task list exceeds the display number threshold of the task list in the processing interface, the excess video information may be hidden, and a display adjustment control operable by video processing personnel may be provided, for example at the side of the task list, so that the video information of the corresponding video clips is displayed in the task list according to the operation on the display adjustment control and the control's position. The display number threshold is set according to the size of the area occupied by the task list and the size required for displaying the video information.
In this implementation, upon detection of a display update operation for the task list, video information for each video clip in the task list is updated. The list updating operation control can be displayed in a display area of the task list, so that video processing personnel can operate the control to update the video information of each video clip in the task list.
In this implementation, fig. 7 shows only an example of an execution sequence of step S109, which may also be executed before or after any step of step S101 to step S104, and the present disclosure does not limit this.
Fig. 8 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 8, the method may further include step S110.
In step S110, the time axis of the target video is presented in the processing interface. The time axis may include at least one of a time point, a tag for marking the time point, a position of a current video frame shown in the first play window in the time axis, a time period corresponding to the video clip, and a time period corresponding to the video clip to be deleted, and the tag may include at least one of a category and video content corresponding to the time point.
In this implementation, the playing time of the corresponding target video may be displayed in the time axis. The position in the time axis of the current video frame in the first playing window may be shown through a position identifier, which moves along the time axis as the video frame in the first playing window changes. When a processing operation for the target video is detected (including any operation of processing the target video mentioned herein, such as a marking operation, a clip extraction operation, a clip deletion operation, or a tag setting operation), the corresponding time point may be determined according to the position identifier, or according to a detected selection operation such as a click on the time axis. For example, when the position identifier is at the 3:30 position in the time axis and a marking operation is detected, 3:30 may be determined as a marked time point. Those skilled in the art can set the display mode of the time axis according to actual needs, which is not limited by the present disclosure.
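Determining a time point from the position identifier — e.g. a position at the 3:30 mark producing a marked time point of 3:30 — is a proportional mapping from timeline coordinates to playing time. A sketch with assumed parameter names (pixel widths and durations are examples only):

```python
def timeline_to_seconds(x, axis_width, video_duration_s):
    """Convert a horizontal position on the time axis to a time point.

    x:                pixel offset of the click / position identifier
    axis_width:       pixel width of the rendered time axis
    video_duration_s: duration of the target video in seconds
    """
    fraction = min(max(x / axis_width, 0.0), 1.0)  # clamp onto the axis
    return fraction * video_duration_s

def fmt(seconds):
    """Format seconds as m:ss for display next to the marked point."""
    m, s = divmod(int(seconds), 60)
    return f"{m}:{s:02d}"

# A position half-way along the axis of a 420 s (7:00) video maps to 3:30.
t = timeline_to_seconds(500, 1000, 420)
print(fmt(t))  # 3:30
```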
In this implementation, to facilitate the operation of the video processing person, part of the content of the time axis may be displayed in the processing interface, the rest of the content is hidden, and a time axis display adjustment control for changing the display content of the time axis in the processing interface is displayed in the lower part of the time axis. And when the adjusting operation of the time axis display adjusting control is detected, changing the display content of the time axis in the processing interface.
In this implementation, fig. 8 shows only an example of an execution sequence of step S110, which may also be executed before or after any step of step S101 to step S104, and the present disclosure does not limit this.
Fig. 9 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 9, the method may further include step S111.
In step S111, the playing progress of the target video in the first playing window is adjusted according to a detected play progress adjustment operation. The play progress adjustment operation may include any one of the following: a start/pause play operation; advancing or rewinding by one or more video clips; advancing or rewinding by one or more video frames; advancing or rewinding by one or more video key frames; advancing or rewinding by one or more preset time periods; playing one or more video frames at the beginning of the target video; and playing one or more video frames at the end of the target video.
In this implementation manner, a play progress adjustment operation control for the play progress adjustment operation may be displayed in the processing interface, and when the corresponding play progress adjustment operation control is triggered, the play progress of the target video is adaptively adjusted. Those skilled in the art can set the play progress adjustment operation according to actual needs, which is not limited by the present disclosure.
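Each play progress adjustment operation ultimately resolves to a seek that must stay inside the target video. A sketch in which the operation names, frame duration, and preset step size are all assumptions:

```python
def adjust_progress(position_s, duration_s, op, n=1, frame_s=1 / 25, preset_s=5.0):
    """Apply one play-progress adjustment and clamp to [0, duration].

    op: one of "frame_fwd", "frame_back", "preset_fwd", "preset_back",
        "to_start", "to_end"; `n` repeats the step n times (the
        "one or more" in the operation list above).
    """
    if op == "to_start":
        return 0.0
    if op == "to_end":
        return duration_s
    steps = {
        "frame_fwd": n * frame_s,
        "frame_back": -n * frame_s,
        "preset_fwd": n * preset_s,
        "preset_back": -n * preset_s,
    }
    new_pos = position_s + steps[op]
    return min(max(new_pos, 0.0), duration_s)  # never seek outside the video

print(adjust_progress(10.0, 60.0, "preset_fwd", n=2))  # 20.0
print(adjust_progress(1.0, 60.0, "preset_back", n=5))  # 0.0 (clamped)
```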
In this implementation, fig. 9 shows only an example of one execution sequence of step S111, which may also be executed at any time after step S102, and the present disclosure does not limit this.
Fig. 10 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 10, the method may further include step S112.
In step S112, time information related to the target video is presented in the first play window. Wherein the time information may include at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
The relative time at which the current video frame shown in the first playing window appears in the target video may be the playing time corresponding to the current video frame in the target video, for example, 00:00:50. The actual time at which the current video frame appears in the live video corresponding to the target video may be the actual (wall-clock) time at which the current video frame appeared in the live video, for example, Beijing time 19:46:45. Those skilled in the art can set the content of the time information and its display mode according to actual needs, which is not limited by the present disclosure.
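The relationship between the relative time in the target video and the actual time in the live video is a fixed offset: the wall-clock time at which recording of the target video started. A sketch using Python's `datetime`, with the recording start time chosen arbitrarily to match the 19:46:45 example above:

```python
from datetime import datetime, timedelta

def actual_time(record_start, relative_s):
    """Actual (wall-clock) time at which a frame `relative_s` seconds
    into the target video appeared in the live video."""
    return record_start + timedelta(seconds=relative_s)

# Assume recording started at Beijing time 19:45:55; the frame at
# relative time 00:00:50 then appeared live at 19:46:45.
start = datetime(2018, 6, 12, 19, 45, 55)
frame_live = actual_time(start, 50)
print(frame_live.strftime("%H:%M:%S"))  # 19:46:45
```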
In this implementation, fig. 10 shows only an example of an execution sequence of step S112, which may also be executed at any time after step S102, and the disclosure does not limit this.
By the mode, video processing personnel can perform adaptive adjustment on video processing according to the time information.
Fig. 11 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 11, the method may further include step S113 and step S114.
In step S113, the target video is played in a second playing window of the processing interface, and the playing progress of the second playing window is kept as the real-time recording progress of the target video.
In this implementation, the real-time recording progress of the target video may correspond to the most recently recorded portion of the live video. Video processing personnel can get a preliminary understanding of the video content of the target video from the target video played in the second playing window. On this basis, they can process the target video played in the first playing window more accurately, improving the accuracy and speed of video processing.
In step S114, the actual time corresponding to the real-time recording progress is displayed in the second playing window.
In this implementation, the actual time corresponding to the real-time recording progress may be the actual time at which the video frame currently shown in the second playing window appeared in the live video; for example, it may be Beijing time 19:47:01. Video processing personnel may determine the video content corresponding to video frames that will appear later in the first playing window according to the actual time corresponding to the real-time recording progress and the actual time at which the current video frame appeared in the live video corresponding to the target video.
In this implementation manner, the playing of the target video may also be controlled according to operations such as muting, window size adjustment, updating of the target video, and the like, which are detected for the target video played in the second playing window. Updating the target video may refer to acquiring a latest target video (a recorded part of a live video) according to a detected target video updating operation. In this way, it can be ensured that the target video in the second playing window is the latest recorded video. An update target video operation control for the update target video can be displayed in the second playing window, and when the trigger operation such as clicking and double-clicking of the control is detected, the target video is updated.
In this implementation manner, fig. 11 shows only an example of an execution sequence of step S113 and step S114, which may also be executed at any time after step S101, and step S113 and step S114 may be executed simultaneously or sequentially, which is not limited by this disclosure.
Through the mode, the target video is played for the video processing personnel in the second playing window, and the playing progress of the second playing window is kept as the real-time recording progress of the target video, so that the video processing personnel can preview based on the second playing window before processing the target video, and the efficiency, the speed and the accuracy of video processing are improved.
In this embodiment, in order to facilitate the processing operation of the target video by the video processing person, the above-mentioned controls corresponding to the corresponding operation are provided in the processing interface, and the shape, size, position, identifier (identifier representing the control function), corresponding trigger operation, shortcut, and the like of each control may be set according to the use requirement of the video processing person, which is not limited in this disclosure.
It should be noted that, although the video processing method is described above by taking the above-mentioned embodiment as an example, those skilled in the art can understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each step according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Application example
An application example according to the embodiment of the present disclosure is given below in conjunction with "video processing person performs target video processing" as an exemplary application scenario to facilitate understanding of the flow of the video processing method. It is to be understood by those skilled in the art that the following application examples are for the purpose of facilitating understanding of the embodiments of the present disclosure only and are not to be construed as limiting the embodiments of the present disclosure.
Fig. 12 is a schematic diagram illustrating an application scenario of a video processing method according to an embodiment of the present disclosure. As shown in fig. 12, a processing interface M capable of processing a target video is provided for video processing personnel according to the above-described video processing method. The target video is the already-recorded part of a certain live TV gala while that gala is being recorded. The processing interface M includes the following.
First play window T1
The first play window T1 is used to play the target video. Time information related to the target video may be displayed in sequence at the lower portion of the first play window T1: the relative time at which the current video frame shown in the first play window T1 appears in the target video (e.g., "00:00:30" shown in the figure), the duration of the target video (e.g., "08:54:39" shown in the figure), the actual time at which the current video frame appeared in the live video corresponding to the target video (e.g., "19:20:32" shown in the figure), and the actual time at the current moment (e.g., "19:24:39" shown in the figure). Also, a start/pause play operation control 1, a volume adjustment control 2, a definition adjustment control 3, and an update target video operation control 4 may be displayed at the lower portion of the first play window T1.
Second Play Window T2
The second playing window T2 is used for playing the target video, and the playing progress of the second playing window T2 is kept as the real-time recording progress of the target video. Meanwhile, the actual time corresponding to the real-time recording progress (e.g., "19:24:32" shown in the figure) may be presented in the second play window T2. In addition, a full-screen display control 5 (which displays the second window in full screen when triggered), an update target video operation control 6 (which updates the target video when triggered), and a mute operation control 7 (which turns off the sound of the target video in the second play window T2 when triggered) are presented in the second play window T2.
Task list R
Video information of video clips can be presented in the task list R. A whole-file task may refer to extracting, from the target video, a whole-file video segment that contains the entire target video except for the video segments to be deleted. A strip-splitting task may refer to extracting, from the target video, video segments for one or more portions of the target video. The video information includes the name of the video clip (e.g., "video clip V1" in the figure), the duration of the video clip (e.g., "duration | 00:15" in the figure), the processing state of the video clip (e.g., "transcoding," "uploading," "updating"), the generation time of the video clip (e.g., "2017-12-12 19:30" in the figure), the identification of the video clip (e.g., "video ID: 14357334" in the figure), and an indication icon 13 for indicating the processing state.
In addition, a list updating operation control 14 (which updates the video information of each video clip in the task list when triggered) and a display adjustment control 15 (which displays the video information of the corresponding video clip in the display area of the task list R according to the position of the corresponding video clip when triggered) can be displayed on the right side of the task list R.
The processing operation area includes: a time axis S, a playing progress adjustment operation area J, a time point marking operation area D, and a clip processing area C.
The time axis S displays the marked time points (e.g., marked time point 8 in the figure), the labels of the marked time points (e.g., the label "start" in the figure), the position in the time axis S of the current video frame shown in the first play window T1 (e.g., time point 9 in the figure), the time periods corresponding to video clips (e.g., time period 10 in the figure), and the time periods corresponding to video clips to be deleted (e.g., time period 11 in the figure). A timeline display adjustment control 12 is also displayed in the time axis S (dragging it left or right adjusts which part of the timeline is displayed in the processing interface). The time axis S is described in detail in step S110.
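The markers rendered on time axis S can be modeled as a sorted list of labeled time points. The sketch below is a hypothetical data structure (the class and field names are illustrative, not the patent's identifiers):

```python
from dataclasses import dataclass, field

@dataclass
class Marker:
    """A marked time point on the time axis, with an optional label."""
    seconds: int        # offset into the target video, in seconds
    kind: str           # e.g. "segment_point" or "special_point"
    label: str = ""     # e.g. "start", shown next to the marker

@dataclass
class Timeline:
    markers: list = field(default_factory=list)

    def add(self, seconds: int, kind: str, label: str = "") -> Marker:
        """Insert a marker and keep the list ordered by time."""
        m = Marker(seconds, kind, label)
        self.markers.append(m)
        self.markers.sort(key=lambda mk: mk.seconds)
        return m

    def in_range(self, start: int, end: int) -> list:
        """Markers inside a time period, e.g. to render a segment's span."""
        return [m for m in self.markers if start <= m.seconds <= end]
```

Keeping the list sorted makes it cheap to render the markers left to right and to look up which marked points fall inside a segment's time period.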
The play progress adjustment operation area J displays a video-clip-forward operation control (e.g., the "previous segment" control in the figure), a video-clip-backward operation control (e.g., the "next segment" control in the figure), a video-frame-forward operation control (e.g., the "frame forward" control in the figure), a video-frame-backward operation control (e.g., the "frame backward" control in the figure), a key-frame-forward operation control (e.g., the "previous key frame" control in the figure), a key-frame-backward operation control (e.g., the "next key frame" control in the figure), a forward-by-preset-time-period operation control (e.g., the "forward one second" control in the figure), a backward-by-preset-time-period operation control (e.g., the "back one second" control in the figure), an operation control for playing the video frame at the beginning of the target video (e.g., the "first frame" control in the figure), and an operation control for playing the video frame at the end of the target video (e.g., the "last frame" control in the figure).
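The key-frame stepping controls above amount to a search over the video's sorted key-frame timestamps. A possible sketch, assuming the key-frame positions are known in seconds (the function name and signature are illustrative):

```python
import bisect

def step_keyframe(keyframes: list, playhead: float, direction: int) -> float:
    """Jump to the next (+1) or previous (-1) key frame relative to the
    current playhead, clamping at the first and last key frames."""
    if direction > 0:
        i = bisect.bisect_right(keyframes, playhead)
        return keyframes[i] if i < len(keyframes) else keyframes[-1]
    i = bisect.bisect_left(keyframes, playhead)
    return keyframes[i - 1] if i > 0 else keyframes[0]
```

The frame-forward/backward and one-second controls would be analogous, adding or subtracting one frame duration or one second and clamping to the video's bounds.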
The time point marking operation area D displays a first marking operation control (e.g., the "dotting (R)" control in the figure) corresponding to the segment extraction time points of the target video, and a second marking operation control (e.g., the "set as special point" control in the figure) corresponding to the other marked time points of the target video, such as a viewpoint time point, the start and end time points of the opening title, the start and end time points of the ending title, the start and end time points of segments, the start and end time points of clips inserted mid-program, and the start and end time points of video segments to be deleted. When the "dotting (R)" control is triggered, the selected time point (determined when, after, or before the control is triggered) is determined as a segment extraction time point of the target video. When the "set as special point" control is triggered, the selected time point (likewise determined when, after, or before the control is triggered) is determined as a marked time point of the target video other than a segment extraction time point, and a label is added to that point according to the label setting operation of the video processing personnel. The marked time points and their labels are then displayed in the time axis S. Here, the time point at which the current video frame shown in the first play window T1 is located may be taken as the selected time point when the "dotting (R)" or "set as special point" control is triggered; the selected time point may also be determined according to a detected selection operation, such as a click on the time axis S.
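The dispatch between the two marking controls can be sketched as a small handler: "dotting" records a segment extraction time point, while "set as special point" records any other marked time point together with the operator's label. The control identifiers and dictionary keys below are illustrative assumptions:

```python
def handle_mark(control: str, selected_s: int, label: str = "") -> dict:
    """Turn a marking-control trigger into a marker record.

    'dotting' yields a segment extraction time point; 'special' yields
    any other marked time point carrying the operator-supplied label.
    """
    if control == "dotting":
        return {"seconds": selected_s, "kind": "segment_point", "label": ""}
    if control == "special":
        return {"seconds": selected_s, "kind": "special_point", "label": label}
    raise ValueError(f"unknown marking control: {control}")
```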
The segment processing area C shows a segment deletion operation control (e.g., the "delete (D)" control in the figure), a first segment extraction operation control (e.g., the "split bar (A)" control in the figure), a second segment extraction operation control (e.g., the "merge split bar (B)" control in the figure), and transcoding controls (e.g., the "online whole file" control and the "pre-submit" control in the figure).
When the "delete (D)" control is triggered, the video clip to be deleted is determined according to the start time point and the end time point of the video clip to be deleted, which are determined before or after the control is triggered and are selected from the marking time points, and the time period corresponding to the determined video clip to be deleted is marked in the time axis S (for example, the time period 24 in the figure).
When the "split bar (A)" control is triggered, a video segment (e.g., video segment V2) is obtained according to the first time points selected from the marked time points before or after the control is triggered, and the time period corresponding to the determined video segment V2 is marked in the time axis S (e.g., time period 23 in the figure). Then, after video segment V2 is successfully transcoded, the transcoded video segment V2 is sent to the cloud server.
When the "merge split bar (B)" control is triggered, a video segment (e.g., video segment V3) is obtained according to the pairs of first time points selected from the marked time points before or after the control is triggered. Then, after video segment V3 is successfully transcoded, the transcoded video segment V3 is sent to the cloud server.
When the "online whole file" control is triggered, a whole-file video segment (e.g., video segment V1) that does not contain the video segments to be deleted is extracted from the target video. Then, after video segment V1 is successfully transcoded, the transcoded video segment V1 is sent to the cloud server.
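Keeping everything except the to-delete segments amounts to computing the complement of the deletion intervals over the video's duration. A sketch under the assumption that all time points are given in seconds (the function name is illustrative):

```python
def whole_file_segments(duration: float, deleted: list) -> list:
    """Return the (start, end) spans of the target video that remain after
    removing the segments marked for deletion.  `deleted` is a list of
    (start, end) pairs; overlapping pairs are merged before subtracting."""
    merged = []
    for start, end in sorted(deleted):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    keep, cursor = [], 0.0
    for start, end in merged:
        if start > cursor:
            keep.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        keep.append((cursor, duration))
    return keep
```

The resulting spans are the first video segments to be synthesized; concatenating them yields the whole-file segment V1 described above.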
When the "pre-submit" control is triggered, the target video is transcoded in advance, so that after a video segment is acquired, the transcoded video segment can be obtained from the transcoded target video.
Therefore, through the processing interface, the video processing personnel can rapidly process the target video and acquire the required video clip.
The live video can be converted into the video clip for the user to view on demand, the efficiency of extracting the video clip is high, the speed is high, the accuracy is high, the operation process of obtaining the video clip is simplified, and the online time of the video clip is shortened.
Fig. 13 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure. As shown in fig. 13, the apparatus may be applied to a terminal (e.g., a computer) or a server (e.g., a cloud server). The apparatus comprises a live video recording module 501, a first video playing module 502, a time point marking module 503, and a video segment extraction module 504. The live video recording module 501 is configured to take a recorded portion of a live video as a target video during recording of the live video. The first video playback module 502 is configured to play the target video in a first playback window of the processing interface. The time point marking module 503 is configured to determine a marked time point of the target video when a marking operation for a playing time point of the target video is detected. The video segment extraction module 504 is configured to, when a segment extraction operation for the target video is detected, obtain a video segment of the target video according to a selected first time point of the marked time points.
Fig. 14 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
In one possible implementation, as shown in fig. 14, the video clip extraction module 504 may include a first to-be-synthesized clip extraction sub-module 5041 and a first synthesis processing sub-module 5042. The first to-be-synthesized section extraction sub-module 5041 is configured to, when there is a video section to be deleted within a first time period defined by a first time point, obtain a plurality of first to-be-synthesized video sections not containing the video section to be deleted, based on the first time point and start and end time points of the video section to be deleted. The first synthesis processing sub-module 5042 is configured to perform synthesis processing on a plurality of first video segments to be synthesized to obtain video segments.
In one possible implementation, as shown in fig. 14, the video clip extraction module 504 may include a second to-be-synthesized clip extraction sub-module 5043 and a second synthesis processing sub-module 5044. The second to-be-synthesized segment extraction sub-module 5043 is configured to obtain a second to-be-synthesized video segment corresponding to each pair of the first time points, respectively. The second synthesis processing sub-module 5044 is configured to perform synthesis processing on a plurality of second video segments to be synthesized, so as to obtain video segments.
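The pairwise extraction and synthesis performed by sub-modules 5043 and 5044 can be sketched as a cut-and-concatenate plan: each selected pair of first time points becomes a source span, placed at a running offset in the merged output. The function below is an illustrative sketch, not the patent's implementation:

```python
def plan_merge(pairs: list):
    """Given the selected pairs of first time points as (start, end)
    tuples in seconds, return (src_start, src_end, dst_start) triples
    describing where each span lands in the merged segment, plus the
    merged segment's total duration."""
    plan, offset = [], 0.0
    for start, end in sorted(pairs):
        if end <= start:
            raise ValueError(f"invalid pair: {(start, end)}")
        plan.append((start, end, offset))
        offset += end - start
    return plan, offset
```

A concatenation step (e.g., handing the plan to a media pipeline) would then produce the single merged video segment that is transcoded and uploaded.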
In one possible implementation, as shown in fig. 14, the apparatus may further include a to-be-deleted video clip determining module 505. The to-be-deleted video segment determining module 505 is configured to, when a segment deletion operation for the target video is detected, determine a to-be-deleted video segment according to a start time point and an end time point of a selected to-be-deleted video segment among the marked time points.
In one possible implementation, as shown in fig. 14, the apparatus may further include a tag setting module 506. The tag setting module 506 is configured to, when detecting a tag setting operation for a selected second time point of the marked time points, add a tag to the second time point, where the tag includes at least one of a category and video content corresponding to the second time point.
In one possible implementation, as shown in fig. 14, the apparatus may further include a video segment transcoding module 507 and a video segment sending module 508. The video segment transcoding module 507 is configured to transcode the video segment to obtain a transcoded video segment. The video segment sending module 508 is configured to send the transcoded video segment to the cloud server.
In one possible implementation, as shown in fig. 14, the apparatus may further include a video information presentation module 509. The video information presentation module 509 is configured to present video information of the video clip in a task list of the processing interface. Wherein the video information may include at least one of: the name of the video clip, the duration of the video clip, the processing status of the video clip, the generation time of the video clip, and the identification of the video clip.
In one possible implementation, as shown in fig. 14, the apparatus may further include a timeline presentation module 510. The timeline presentation module 510 is configured to present a timeline of a target video in a processing interface. The time axis may include at least one of a time point, a tag for marking the time point, a position of a current video frame shown in the first play window in the time axis, a time period corresponding to the video clip, and a time period corresponding to the video clip to be deleted, where the tag includes at least one of a category corresponding to the time point and video content.
In one possible implementation, as shown in fig. 14, the apparatus may further include a play progress adjustment module 511. The playing progress adjusting module 511 is configured to adjust the playing progress of the target video in the first playing window according to the detected playing progress adjusting operation. The playing progress adjusting operation may include any one of the following: start/pause play operations, advance one or more video clip operations, retreat one or more video clip operations, advance one or more video frame operations, retreat one or more video frame operations, advance one or more video key frame operations, retreat one or more video key frame operations, advance one or more preset time period operations, retreat one or more preset time period operations, one or more video frame operations at the beginning of the play target video, and one or more video frame operations at the end of the play target video.
In one possible implementation, as shown in fig. 14, the apparatus may further include a time information presentation module 512. The time information presentation module 512 is configured to present time information related to the target video in the first play window. Wherein the time information may include at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
In a possible implementation manner, as shown in fig. 14, the apparatus may further include a second video playing module 513 and a recording progress showing module 514. The second video playing module 513 is configured to play the target video in a second playing window of the processing interface, and maintain the playing progress of the second playing window as the real-time recording progress of the target video. The recording progress presentation module 514 is configured to present an actual time corresponding to the real-time recording progress in the second playing window.
In one possible implementation, the marked time points may include at least one of: a segment extraction time point of the target video, a viewpoint time point of the target video, start and end time points of the opening title, start and end time points of the ending title, start and end time points of clips inserted mid-program, and start and end time points of video segments to be deleted. The video segment may be associated with labels of third time points, which may include the marked time points that fall within the time period corresponding to the video segment.
It should be noted that, although the video processing apparatus is described above by taking the above-described embodiment as an example, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each module according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
The video processing device provided by the embodiment of the disclosure takes a recorded part of a live video as a target video in the process of recording the live video; playing the target video in a first playing window of the processing interface; when a marking operation aiming at the playing time point of the target video is detected, determining the marking time point of the target video; when the segment extraction operation aiming at the target video is detected, the video segment of the target video is obtained according to the selected first time point in the marked time points. The live video can be converted into the video clip for the user to view on demand, the efficiency of extracting the video clip is high, the speed is high, the accuracy is high, the operation process of obtaining the video clip is simplified, and the online time of the video clip is shortened.
Fig. 15 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 15, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
Fig. 16 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 16, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (26)

1. A method of video processing, the method comprising:
in the process of recording a live video, acquiring a recorded part of the live video in real time, taking the recorded part of the live video as a target video, and continuously and automatically updating the target video according to the recorded part of the live video;
playing the target video in a first playing window of a processing interface;
when a marking operation for a playing time point of the target video is detected, determining a marked time point of the target video, wherein the marking operation comprises segment extraction, segment deletion, and label setting;
when the segment extraction operation aiming at the target video is detected, obtaining the video segment of the target video according to the selected first time point in the marked time points;
and playing the target video in a second playing window of the processing interface, and keeping the playing progress of the second playing window consistent with the real-time recording progress of the target video, wherein an update control is preset in the second playing window.
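The claim-1 workflow (a target video that grows as the live recording proceeds, plus marked time points recorded on it) can be sketched with a minimal data model. All class, method, and operation names here are illustrative assumptions, not terms from the patent:

```python
class TargetVideo:
    """Illustrative model (names are not from the patent): the recorded part
    of the live video is the target video, updated as recording proceeds."""

    def __init__(self):
        self.duration = 0.0      # seconds recorded so far
        self.marked_points = []  # (time_point_seconds, marking_operation)

    def append_recorded(self, seconds):
        # Continuously and automatically update the target video.
        self.duration += seconds

    def mark(self, time_point, operation):
        # operation is one of: segment extraction, segment deletion, label setting.
        if not 0.0 <= time_point <= self.duration:
            raise ValueError("time point outside the recorded range")
        self.marked_points.append((time_point, operation))

video = TargetVideo()
video.append_recorded(60.0)             # one minute recorded so far
video.mark(12.5, "segment_extraction")
video.mark(48.0, "label_setting")
```

A real implementation would hold media data and UI state rather than a bare duration; the sketch only shows the update-then-mark flow the claim describes.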
2. The method according to claim 1, wherein obtaining the video segment of the target video according to the selected first time point of the marked time points when detecting the segment extracting operation for the target video comprises:
when video clips to be deleted exist in a first time period defined by the first time point, obtaining a plurality of first video clips to be synthesized which do not contain the video clips to be deleted according to the first time point and the starting time point and the ending time point of the video clips to be deleted;
and synthesizing the plurality of first video clips to be synthesized to obtain the video clips.
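The interval arithmetic implied by claim 2 — removing the to-be-deleted clips from the first time period and keeping the remaining pieces for synthesis — can be sketched as follows. The function name and tuple representation are assumptions for illustration:

```python
def split_around_deletions(start, end, deletions):
    """Given a first time period [start, end] and (start, end) pairs of video
    clips to be deleted, return the time periods of the first video clips to
    be synthesized, i.e. the parts of the period that remain."""
    # Clip each to-be-deleted interval to the window and sort by start time.
    clipped = sorted((max(ds, start), min(de, end))
                     for ds, de in deletions
                     if de > start and ds < end)
    segments, cursor = [], start
    for ds, de in clipped:
        if ds > cursor:                  # keep the gap before this deletion
            segments.append((cursor, ds))
        cursor = max(cursor, de)         # skip past the deleted interval
    if cursor < end:                     # keep the tail after the last deletion
        segments.append((cursor, end))
    return segments

# A 100-second period with two deletions leaves three clips to synthesize:
remaining = split_around_deletions(0, 100, [(20, 30), (50, 60)])
# → [(0, 20), (30, 50), (60, 100)]
```

The synthesis step of the claim would then concatenate the media for these intervals in order.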
3. The method of claim 1, wherein the selected first time points comprise a plurality of pairs of first time points, and wherein obtaining the video segments of the target video according to the selected first time points of the tagged time points comprises:
respectively acquiring a second video clip to be synthesized corresponding to each pair of first time points;
and synthesizing the plurality of second video clips to be synthesized to obtain the video clips.
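Claim 3's extraction over multiple pairs of first time points reduces to slicing the target video once per pair and joining the slices in order. In this sketch, a list of frame indices stands in for decoded video, and all names are illustrative:

```python
def extract_and_concat(frames, fps, time_point_pairs):
    # For each selected pair of first time points (start, end), take the
    # corresponding second video clip to be synthesized, then join the
    # clips in order to obtain the final video clip.
    clip = []
    for start, end in time_point_pairs:
        clip.extend(frames[int(start * fps):int(end * fps)])
    return clip

# Two pairs of first time points over a 10-second, 10 fps target video:
clip = extract_and_concat(list(range(100)), 10, [(1.0, 2.0), (5.0, 5.5)])
```

A production system would cut on container/key-frame boundaries rather than raw frame indices, but the pair-by-pair extract-then-concatenate structure is the same.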
4. The method of claim 1, further comprising:
and when the segment deleting operation aiming at the target video is detected, determining the video segment to be deleted according to the starting time point and the ending time point of the selected video segment to be deleted in the marked time points.
5. The method of claim 1, further comprising:
when detecting a label setting operation for a selected second time point in the marked time points, adding a label to the second time point, wherein the label comprises at least one of a category and video content corresponding to the second time point.
6. The method of claim 1, further comprising:
transcoding the video segment to obtain a transcoded video segment;
and sending the transcoded video clip to a cloud server.
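Claim 6's transcode-then-upload step could be realized with a standard ffmpeg invocation. The codec choices below are illustrative defaults, not something the patent specifies, and the upload is only indicated in a comment:

```python
def build_transcode_cmd(src, dst, video_codec="libx264", audio_codec="aac"):
    # Standard ffmpeg invocation: re-encode the video and audio streams of
    # an extracted segment. Codec defaults are assumptions for illustration.
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", video_codec, "-c:a", audio_codec, dst]

cmd = build_transcode_cmd("segment.ts", "segment.mp4")
# The transcoded clip would then be sent to a cloud server, e.g. via an
# HTTP upload using whatever object-storage SDK the deployment provides.
```

Running the command (e.g. with `subprocess.run(cmd, check=True)`) requires ffmpeg to be installed; only the command construction is shown here.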
7. The method of claim 1, further comprising:
displaying video information of the video clip in a task list of the processing interface,
wherein the video information comprises at least one of: the name of the video clip, the duration of the video clip, the processing state of the video clip, the generation time of the video clip, and the identification of the video clip.
8. The method of claim 1, further comprising:
displaying a timeline of the target video in the processing interface,
wherein the timeline comprises at least one of: the marked time point, a label of the marked time point, a position in the timeline of the current video frame displayed in the first playing window, a time period corresponding to the video clip, and a time period corresponding to the video clip to be deleted, and wherein the label comprises at least one of a category and video content corresponding to the marked time point.
9. The method of claim 1, further comprising:
adjusting the playing progress of the target video in the first playing window according to the detected playing progress adjusting operation,
wherein the play progress adjustment operation comprises any one of: a start/pause play operation, an operation of advancing by one or more video clips, an operation of rewinding by one or more video clips, an operation of advancing by one or more video frames, an operation of rewinding by one or more video frames, an operation of advancing by one or more video key frames, an operation of rewinding by one or more video key frames, an operation of advancing by one or more preset time periods, an operation of rewinding by one or more preset time periods, an operation of playing one or more video frames at the beginning of the target video, and an operation of playing one or more video frames at the end of the target video.
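The seek-type operations enumerated in claim 9 amount to mapping each operation to a signed step and clamping the resulting playhead to the recorded range. This sketch covers only the seeking operations (not start/pause), and the step sizes are illustrative defaults the patent leaves unspecified:

```python
def adjust_progress(current, duration, operation,
                    frame=1 / 25, keyframe_gap=2.0, preset=10.0, n=1):
    # Step sizes (frame duration, key-frame spacing, preset period) are
    # assumed defaults; operation names are illustrative.
    if operation == "seek_start":
        return 0.0
    if operation == "seek_end":
        return duration
    steps = {
        "advance_frame": n * frame,           "retreat_frame": -n * frame,
        "advance_keyframe": n * keyframe_gap, "retreat_keyframe": -n * keyframe_gap,
        "advance_preset": n * preset,         "retreat_preset": -n * preset,
    }
    # Clamp the new playhead position to the recorded range of the target video.
    return min(max(current + steps[operation], 0.0), duration)
```

The clamping matters for a live target video: the upper bound grows as recording continues, so `duration` must be re-read on each adjustment.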
10. The method of claim 1, further comprising:
presenting time information related to the target video in the first play window,
wherein the time information comprises at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
11. The method of claim 1, further comprising:
and displaying the actual time corresponding to the real-time recording progress in the second playing window.
12. The method of claim 1, wherein the marked time points comprise at least one of: a clip extraction time point of the target video, a viewpoint time point, a leader end time point, a trailer start time point, start and end time points of a creative insertion, and start and end time points of the video clip to be deleted,
wherein the video segment is tagged with a third point in time, the third point in time comprising a tagged point in time within a time period corresponding to the video segment.
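Claim 12's tagging of a segment with "third time points" is a filter: keep the marked time points that fall inside the segment's time period. The category names below are hypothetical labels mirroring the types the claim lists:

```python
# Hypothetical marker categories mirroring the types listed in claim 12.
MARKED_POINT_TYPES = {
    "clip_extraction", "viewpoint", "leader_end", "trailer_start",
    "creative_insert_start", "creative_insert_end",
    "delete_start", "delete_end",
}

def third_time_points(marked_points, segment_start, segment_end):
    # A segment is tagged with the marked time points ("third time points")
    # that fall within the time period corresponding to the segment.
    return [(t, kind) for t, kind in marked_points
            if segment_start <= t <= segment_end and kind in MARKED_POINT_TYPES]

points = [(3.0, "viewpoint"), (12.0, "leader_end"),
          (25.0, "viewpoint"), (40.0, "delete_start")]
tags = third_time_points(points, 10.0, 30.0)
```

Carrying these tags with an extracted segment lets downstream tools (timeline display, label search) reuse the marks made during live recording.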
13. A video processing apparatus, characterized in that the apparatus comprises:
the live video recording module is used for acquiring a recorded part of a live video in real time in the process of recording the live video, taking the recorded part of the live video as a target video and continuously and automatically updating the target video according to the recorded part of the live video;
the first video playing module is used for playing the target video in a first playing window of the processing interface;
the time point marking module is used for determining the marking time point of the target video when the marking operation aiming at the playing time point of the target video is detected, wherein the marking operation comprises segment extraction, segment deletion and label setting;
the video clip extraction module is used for obtaining a video clip of the target video according to the selected first time point in the marked time points when the clip extraction operation aiming at the target video is detected;
and the second video playing module is used for playing the target video in a second playing window of the processing interface and keeping the playing progress of the second playing window consistent with the real-time recording progress of the target video, wherein an update control is preset in the second playing window.
14. The apparatus of claim 13, wherein the video segment extraction module comprises:
a first to-be-synthesized segment extraction sub-module, configured to, when a to-be-deleted video segment exists within a first time period defined by the first time point, obtain, according to the first time point and start and end time points of the to-be-deleted video segment, a plurality of first to-be-synthesized video segments that do not include the to-be-deleted video segment;
and the first synthesis processing sub-module is used for synthesizing the plurality of first video clips to be synthesized to obtain the video clips.
15. The apparatus of claim 13, wherein the selected first time points comprise a plurality of pairs of first time points, and wherein the video segment extraction module comprises:
the second to-be-synthesized segment extraction sub-module is used for respectively acquiring second to-be-synthesized video segments corresponding to each pair of first time points;
and the second synthesis processing submodule is used for synthesizing a plurality of second video clips to be synthesized to obtain the video clips.
16. The apparatus of claim 13, further comprising:
and the to-be-deleted video clip determining module is used for determining the to-be-deleted video clip according to the starting time point and the ending time point of the selected to-be-deleted video clip in the marked time points when the clip deletion operation aiming at the target video is detected.
17. The apparatus of claim 13, further comprising:
and the label setting module is used for adding a label to a second time point when detecting label setting operation aiming at the selected second time point in the marked time points, wherein the label comprises at least one of a category and video content corresponding to the second time point.
18. The apparatus of claim 13, further comprising:
the video segment transcoding module is used for transcoding the video segment to obtain a transcoded video segment;
and the video segment sending module is used for sending the transcoded video segment to a cloud server.
19. The apparatus of claim 13, further comprising:
a video information display module for displaying the video information of the video clip in the task list of the processing interface,
wherein the video information comprises at least one of: the name of the video clip, the duration of the video clip, the processing state of the video clip, the generation time of the video clip, and the identification of the video clip.
20. The apparatus of claim 13, further comprising:
a timeline presentation module that presents a timeline of the target video in the processing interface,
wherein the timeline comprises at least one of: the marked time point, a label of the marked time point, a position in the timeline of the current video frame shown in the first playing window, a time period corresponding to the video clip, and a time period corresponding to the video clip to be deleted, and wherein the label comprises at least one of a category and video content corresponding to the marked time point.
21. The apparatus of claim 13, further comprising:
a playing progress adjusting module for adjusting the playing progress of the target video in the first playing window according to the detected playing progress adjusting operation,
wherein the play progress adjustment operation comprises any one of: a start/pause play operation, an operation of advancing by one or more video clips, an operation of rewinding by one or more video clips, an operation of advancing by one or more video frames, an operation of rewinding by one or more video frames, an operation of advancing by one or more video key frames, an operation of rewinding by one or more video key frames, an operation of advancing by one or more preset time periods, an operation of rewinding by one or more preset time periods, an operation of playing one or more video frames at the beginning of the target video, and an operation of playing one or more video frames at the end of the target video.
22. The apparatus of claim 13, further comprising:
a time information presentation module that presents time information related to the target video in the first play window,
wherein the time information comprises at least one of: the relative time of the current video frame shown in the first playing window in the target video, the duration of the target video, the actual time of the current video frame in the live video corresponding to the target video and the actual time of the current moment.
23. The apparatus of claim 13, further comprising:
and the recording progress display module is used for displaying the actual time corresponding to the real-time recording progress in the second playing window.
24. The apparatus of claim 13, wherein the marked time points comprise at least one of: a clip extraction time point of the target video, a viewpoint time point, a leader end time point, a trailer start time point, start and end time points of a creative insertion, and start and end time points of the video clip to be deleted,
wherein the video segment is tagged with a third point in time, the third point in time comprising a tagged point in time within a time period corresponding to the video segment.
25. A video processing apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 12.
26. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 12.
CN201810600390.4A 2018-06-12 2018-06-12 Video processing method and device Active CN110602560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810600390.4A CN110602560B (en) 2018-06-12 2018-06-12 Video processing method and device

Publications (2)

Publication Number Publication Date
CN110602560A CN110602560A (en) 2019-12-20
CN110602560B true CN110602560B (en) 2022-05-17

Family

ID=68848821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810600390.4A Active CN110602560B (en) 2018-06-12 2018-06-12 Video processing method and device

Country Status (1)

Country Link
CN (1) CN110602560B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212321A (en) * 2020-01-10 2020-05-29 上海摩象网络科技有限公司 Video processing method, device, equipment and computer storage medium
CN111327855B (en) * 2020-03-10 2022-08-05 网易(杭州)网络有限公司 Video recording method and device and video positioning method and device
CN111506239A (en) * 2020-04-20 2020-08-07 聚好看科技股份有限公司 Media resource management equipment and display processing method of label configuration component
CN111541933A (en) * 2020-05-09 2020-08-14 北京奇艺世纪科技有限公司 Video playing method and device, electronic equipment and storage medium
CN111711837B (en) * 2020-05-14 2023-01-10 北京奇艺世纪科技有限公司 Target video changing method and device, electronic equipment and computer readable medium
CN112218150A (en) * 2020-10-15 2021-01-12 Oppo广东移动通信有限公司 Terminal and video analysis display method and device thereof
CN112911332B (en) * 2020-12-29 2023-07-25 百度在线网络技术(北京)有限公司 Method, apparatus, device and storage medium for editing video from live video stream
CN113268180A (en) * 2021-05-14 2021-08-17 北京字跳网络技术有限公司 Data annotation method, device, equipment, computer readable storage medium and product
CN113556602B (en) * 2021-07-21 2023-11-17 广州博冠信息科技有限公司 Video playing method and device, storage medium and electronic equipment
CN113709526B (en) * 2021-08-26 2023-10-20 北京高途云集教育科技有限公司 Teaching video generation method and device, computer equipment and storage medium
CN115002529A (en) * 2022-05-07 2022-09-02 咪咕文化科技有限公司 Video strip splitting method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
US6525746B1 (en) * 1999-08-16 2003-02-25 University Of Washington Interactive video object processing environment having zoom window
CN107608601A (en) * 2017-08-28 2018-01-19 维沃移动通信有限公司 A kind of video playback method, mobile terminal and computer-readable recording medium

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US8230475B2 (en) * 2007-11-16 2012-07-24 At&T Intellectual Property I, L.P. Methods and computer program products for subcontent tagging and playback
US8923684B2 (en) * 2011-05-23 2014-12-30 Cctubes, Llc Computer-implemented video captioning method and player
WO2016095072A1 (en) * 2014-12-14 2016-06-23 深圳市大疆创新科技有限公司 Video processing method, video processing device and display device
CN105307028A (en) * 2015-10-26 2016-02-03 新奥特(北京)视频技术有限公司 Video editing method and device specific to video materials of plurality of lenses
CN105338368B (en) * 2015-11-02 2019-03-15 腾讯科技(北京)有限公司 A kind of method, apparatus and system of the live stream turning point multicast data of video
CN105657537B (en) * 2015-12-23 2018-06-19 小米科技有限责任公司 Video clipping method and device
US10699747B2 (en) * 2016-07-01 2020-06-30 Yuvie Llc System and method for recording a video scene within a predetermined video framework
CN106131627B (en) * 2016-07-07 2019-03-26 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, apparatus and system
CN106791958B (en) * 2017-01-09 2020-03-03 北京小米移动软件有限公司 Position mark information generation method and device
CN106804000A (en) * 2017-02-28 2017-06-06 北京小米移动软件有限公司 Direct playing and playback method and device
CN107295416B (en) * 2017-05-05 2019-11-22 中广热点云科技有限公司 The method and apparatus for intercepting video clip
CN107390972B (en) * 2017-07-06 2021-09-07 努比亚技术有限公司 Terminal screen recording method and device and computer readable storage medium
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 A kind of video clipping method and electronic equipment


Non-Patent Citations (1)

Title
Construction of a multimedia network video live recording and broadcasting system for an ordinary high school campus; Liu Xingsheng; China Education Informationization (《中国教育信息化》); 2009-02-20 (Issue 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200519

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 100080 Beijing Haidian District city Haidian street A Sinosteel International Plaza No. 8 block 5 layer A, C

Applicant before: Youku network technology (Beijing) Co., Ltd

CB02 Change of applicant information

Address after: Room 554, 5 / F, building 3, 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 310052 room 508, 5th floor, building 4, No. 699 Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Alibaba (China) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant