CN111510787A - Multimedia editing method, device, terminal and storage medium - Google Patents

Multimedia editing method, device, terminal and storage medium

Info

Publication number
CN111510787A
Authority
CN
China
Prior art keywords
multimedia
clipping
duration
target
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010349236.1A
Other languages
Chinese (zh)
Inventor
黄永杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010349236.1A priority Critical patent/CN111510787A/en
Publication of CN111510787A publication Critical patent/CN111510787A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 - End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 - Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

This application relates to the technical field of terminals, and in particular to a multimedia editing method, device, terminal and storage medium. The multimedia clipping method includes: selecting a reference multimedia segment and obtaining a first duration of the reference multimedia segment; and receiving a clipping instruction for a target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia. When many multimedia segments need to be clipped, the terminal can clip the target multimedia according to the first duration of the reference multimedia segment instead of requiring the user to clip the segments one by one manually, which reduces user operations and improves multimedia clipping efficiency.

Description

Multimedia editing method, device, terminal and storage medium
Technical Field
This application relates to the technical field of terminals, and in particular to a multimedia editing method, device, terminal and storage medium.
Background
The rapid development of multimedia technology continues to enrich users' lives. For example, a user can upload captured multimedia through an application to the server corresponding to that application, allowing the user to express himself and record his life.
Currently, when a user obtains multimedia and finds that it does not meet his requirements, the multimedia can be clipped. To clip the multimedia, the user can manually select the clip start time and clip end time; once the terminal obtains these two times, it can extract the corresponding multimedia segment. When the user needs many multimedia segments, the user has to select and confirm the clip start and end times repeatedly, clipping the segments one at a time.
Disclosure of Invention
The embodiments of the present application provide a multimedia editing method, apparatus, terminal and storage medium, which can improve multimedia editing efficiency. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a multimedia clipping method, where the method includes:
Selecting a reference multimedia segment, and acquiring a first duration of the reference multimedia segment;
And receiving a clipping instruction for the target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia.
In a second aspect, an embodiment of the present application provides a multimedia clip apparatus, including:
A duration obtaining unit, configured to select a reference multimedia segment and obtain a first duration of the reference multimedia segment;
And a multimedia clipping unit, configured to receive a clipping instruction for the target multimedia, clip the target multimedia according to the first duration, and generate a multimedia segment set corresponding to the target multimedia.
In a third aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program is used for implementing any one of the methods described above when executed by a processor.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiments of the present application provide a multimedia clipping method in which a first duration of a reference multimedia segment is obtained; after receiving a clipping instruction for the target multimedia, the terminal clips the target multimedia according to the first duration and generates a multimedia segment set corresponding to the target multimedia, completing batch clipping of the target multimedia. When many multimedia segments need to be clipped, the terminal can clip the target multimedia according to the first duration of the reference multimedia segment instead of requiring the user to clip the segments one by one manually, which reduces user operations and improves multimedia clipping efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a multimedia clipping method or a multimedia clipping apparatus according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an example of a terminal interface according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
FIG. 6 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
FIG. 7 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
Fig. 8 is a schematic diagram illustrating an example of a motion sensing operation according to an embodiment of the present application;
Fig. 9 is a schematic diagram illustrating an example of a motion sensing operation according to an embodiment of the present application;
FIG. 10 is a flow chart illustrating a method of multimedia clipping in accordance with an embodiment of the present application;
FIG. 11 is a flow chart illustrating a method of multimedia editing according to an embodiment of the present application;
FIG. 12 is a schematic diagram showing a multimedia clip apparatus according to an embodiment of the present application;
Fig. 13 shows a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
The rapid development of multimedia technology continues to enrich users' lives. For example, a user can upload captured multimedia through an application to the server corresponding to that application, allowing the user to express himself, record his life, and share the multimedia with other users. For example, when traveling, the user can use the terminal to shoot a video and upload it to the server corresponding to the application. A user may also distribute multimedia on a carrier such as an optical disc.
Fig. 1 is a schematic diagram of an application scenario of a multimedia clipping method or a multimedia clipping apparatus according to an embodiment of the present application. As shown in fig. 1, when the user obtains multimedia and finds that it does not meet his requirements, the multimedia can be clipped. For example, the multimedia captured by the user may contain only video, with a duration of, say, 10 minutes. If the user wants the segment from the 5th to the 6th minute, the captured multimedia can be clipped: the user selects the clip start time and clip end time, and once the terminal obtains them, it clips out the corresponding video segment. However, when the user needs multiple multimedia segments, the user must select the clip start and end times multiple times and operate on the segments one by one. The operations are cumbersome, and errors in the start and end times selected each time lead to low multimedia clipping efficiency.
It is easy to understand that when the user obtains multimedia and finds that it does not meet the requirements, the multimedia can be clipped. For example, the multimedia captured by the user may contain video with a duration of, say, 30 seconds. When the user uploads the captured multimedia through an application to the server corresponding to that application and the terminal detects that the duration of the multimedia exceeds a preset duration, the multimedia can be clipped. The terminal may clip the multimedia based on the clip start time and clip end time input by the user, or according to a preset duration configured in advance; in the latter case, the terminal obtains a multimedia segment of the preset duration. However, because the preset duration must be configured beforehand, the clipped multimedia segment may not meet the user's requirements, resulting in low multimedia clipping efficiency. The embodiments of the present application provide a multimedia editing method, apparatus, terminal and storage medium that can improve multimedia clipping efficiency.
The multimedia clipping method provided by the embodiments of the present application is described in detail below with reference to fig. 2 to 11. The embodiments shown in fig. 2 to 11 may be executed by, for example, a terminal.
Referring to fig. 2, a flow chart of a multimedia clipping method according to an embodiment of the present application is provided. As shown in fig. 2, the method of the embodiment of the present application may include the following steps S101 to S102.
S101, selecting a reference multimedia segment, and acquiring a first duration of the reference multimedia segment.
According to some embodiments, multimedia refers to a carrier that conveys information, including, but not limited to, video, audio, animation, combinations thereof, and the like. For example, the multimedia may be multimedia containing only video, or may be multimedia containing both video and audio.
It is easy to understand that the reference multimedia clip refers to a clipping standard when the terminal performs the multimedia clip, that is, the terminal performs the multimedia clip according to the parameters of the reference multimedia clip. The reference multimedia segment may be determined by the terminal based on a user's selected instruction for the reference multimedia segment. The selection instruction includes, but is not limited to, a voice selection instruction, a text selection instruction, a click selection instruction, and the like. For example, the terminal may obtain the reference multimedia segment according to the clip start time and the clip end time corresponding to the selected instruction, and the terminal may directly determine the multimedia segment as the reference multimedia segment according to the selected instruction.
Optionally, when the terminal selects the reference multimedia segment based on the selection instruction, the terminal may obtain the first duration of the reference multimedia segment. The first duration is the duration corresponding to the reference multimedia segment; it is not a fixed value. For example, the first duration of reference multimedia segment A obtained by the terminal may be 15 seconds, while the first duration of reference multimedia segment B may be 30 seconds.
It is easy to understand that the terminal may contain multimedia of several different durations; the multimedia of different durations may be unclipped multimedia or already-clipped multimedia segments. The multimedia of different durations may include, for example, Q multimedia of 30 minutes, W multimedia of 3 minutes, E multimedia of 10 minutes, and R multimedia of 15 minutes. When the user selects the reference multimedia segment, the user may input a selection instruction for it, for example a voice selection instruction to select the W multimedia as the reference multimedia segment. When the terminal receives the voice selection instruction, the terminal selects the W multimedia as the reference multimedia segment, and the duration of the W multimedia obtained by the terminal is 3 minutes; that is, the first duration of the reference multimedia segment obtained by the terminal is 3 minutes.
According to some embodiments, when the user needs to select a reference multimedia segment, the user may input a selection instruction for it, for example a click selection instruction carrying a clip start time and a clip end time. When the terminal receives the click selection instruction, the terminal selects the multimedia segment between the clip start time and the clip end time as the reference multimedia segment, and obtains the first duration of the reference multimedia segment from those two times. The clip start time carried by the click selection instruction may be, for example, the 15th second of the multimedia playing sequence, and the clip end time may be, for example, the 30th second; in that case the first duration of the reference multimedia segment obtained by the terminal is 15 seconds. An example of the terminal interface at this point is shown in fig. 3.
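As an illustrative sketch only (the patent does not specify an implementation; the function name and second-based units are assumptions), the first duration can be derived from the clip start and end times carried by the selection instruction:

```python
def first_duration(clip_start_s: float, clip_end_s: float) -> float:
    """Duration of the reference multimedia segment selected between two
    timestamps of the multimedia playing sequence (both in seconds)."""
    if clip_end_s <= clip_start_s:
        raise ValueError("clip end time must be later than clip start time")
    return clip_end_s - clip_start_s

# The example above: a clip from the 15th to the 30th second of the
# playing sequence gives a first duration of 15 seconds.
duration = first_duration(15, 30)
```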
S102, receiving a clipping instruction for the target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the target multimedia refers to multimedia to be clipped. The target multimedia may comprise only one multimedia or may be a set comprising a plurality of multimedia. For example, the target multimedia may be Q multimedia having a duration of 30 minutes, and may be a set including Q multimedia having a duration of 30 minutes, W multimedia having a duration of 3 minutes, E multimedia having a duration of 10 minutes, and R multimedia having a duration of 15 minutes.
It is easy to understand that when the terminal has selected the reference multimedia segment and obtained its first duration, the terminal may receive a clipping instruction for the target multimedia. Clipping instructions include, but are not limited to, motion sensing operations, voice clipping instructions, click clipping instructions, and the like. The clipping instruction received by the terminal for the target multimedia may be, for example, a click clipping instruction: when the terminal detects that the user clicks the clip control, the terminal receives a clipping instruction for the target multimedia.
Optionally, when the terminal receives a clipping instruction for the target multimedia, the terminal may clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia. For example, the target multimedia determined by the terminal may be the 30-minute Q multimedia, and the first duration of the reference multimedia segment obtained by the terminal may be 3 minutes. The clipping instruction received by the terminal for the Q multimedia may be a motion sensing operation, for example a shake operation. When the terminal receives the shake operation for the Q multimedia, the terminal clips the Q multimedia according to the 3-minute first duration of the reference multimedia segment and generates the corresponding multimedia segment set. The set may include, for example, 10 multimedia segments each 3 minutes long: the Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, and Q10 multimedia segments.
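The batch clipping in the Q-multimedia example can be sketched as follows. This is a hypothetical illustration, not the claimed implementation: the segment labelling scheme and the choice to stop at the last full-length segment are assumptions.

```python
def clip_target(name: str, second_duration_s: int, first_duration_s: int):
    """Split the target multimedia [0, second_duration_s) into consecutive
    segments of first_duration_s, labelled name1, name2, ... in playing order."""
    segments = []
    start = 0
    while start + first_duration_s <= second_duration_s:
        index = len(segments) + 1
        segments.append((f"{name}{index}", start, start + first_duration_s))
        start += first_duration_s
    return segments

# 30 minutes of Q multimedia clipped with a 3-minute first duration
# yields the segment set Q1 .. Q10.
q_set = clip_target("Q", 30 * 60, 3 * 60)
```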
According to some embodiments, when the terminal selects the reference multimedia clip, the terminal may also obtain other information of the reference multimedia clip. Other information includes, but is not limited to, the file size corresponding to the reference multimedia clip. When the terminal receives a clipping instruction for the target multimedia, the terminal can clip the target multimedia according to the file size corresponding to the reference multimedia segment to generate a multimedia segment set corresponding to the target multimedia. The multimedia segment set comprises at least one multimedia segment, and the file sizes of the at least one multimedia segment are the same.
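Clipping by file size instead of duration follows the same pattern. A minimal sketch, under assumed conventions (byte offsets, trailing remainder smaller than the reference size left unclipped); the patent does not prescribe this logic:

```python
def clip_by_size(total_bytes: int, reference_bytes: int):
    """Byte ranges of equal-sized segments matching the reference file size."""
    if reference_bytes <= 0:
        raise ValueError("reference file size must be positive")
    return [
        (offset, offset + reference_bytes)
        for offset in range(0, total_bytes - reference_bytes + 1, reference_bytes)
    ]
```

Every range produced has the same size, matching the requirement that the file sizes of the segments in the set are the same.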
The embodiments of the present application provide a multimedia clipping method in which a first duration of a reference multimedia segment is obtained; when a clipping instruction for the target multimedia is received, the terminal clips the target multimedia according to the first duration and generates the corresponding multimedia segment set, completing batch clipping of the target multimedia. When many multimedia segments need to be clipped, the terminal can clip the target multimedia according to the first duration of the reference multimedia segment; the user does not need to clip the segments one by one manually or repeatedly select and confirm clip durations, so multimedia clipping efficiency is improved.
Referring to fig. 4, a flowchart of a multimedia clipping method is provided according to an embodiment of the present application. As shown in fig. 4, the method of the embodiment of the present application may include the following steps S201 to S205.
S201, receiving a clipping instruction for the sample multimedia, and generating a reference multimedia segment.
According to some embodiments, sample multimedia refers to multimedia that the terminal can clip to obtain a reference multimedia segment. The sample multimedia includes, but is not limited to, multimedia stored in the terminal, multimedia captured upon receiving a shooting instruction, multimedia acquired from a server, and the like. The clipping instruction received by the terminal for the sample multimedia includes, but is not limited to, a text clipping instruction, a voice clipping instruction, a click clipping instruction, and the like.
It is easily understood that when the terminal receives a clipping instruction for the sample multimedia, the terminal may generate a reference multimedia clip in response to the clipping instruction. The clipping instruction received by the terminal may be, for example, a voice clipping instruction, which may carry, for example, clipping information corresponding to the reference multimedia segment, which may be, for example, a clipping start time and a clipping end time. When the terminal receives the clipping instruction, the terminal may generate the reference multimedia clip based on the clipping instruction.
According to some embodiments, please refer to fig. 5, which provides a flowchart of a multimedia clipping method according to an embodiment of the present application. As shown in fig. 5, the method of the embodiment of the present application may include the following steps S301 to S302. S301, receiving a clipping instruction aiming at sample multimedia, and generating a sample multimedia fragment; s302, receiving an adjusting instruction aiming at the sample multimedia segment, and generating a reference multimedia segment.
According to some embodiments, when the terminal receives a clipping instruction for the sample multimedia, the terminal may generate a sample multimedia segment. If no adjustment instruction for the sample multimedia segment is received, the terminal may determine the sample multimedia segment as the reference multimedia segment. For example, if the terminal does not receive an adjustment instruction for the sample multimedia segment before receiving the clipping instruction for the target multimedia, the terminal may determine the sample multimedia segment as the reference multimedia segment.
It is easily understood that, when the terminal receives the adjustment instruction for the sample multimedia clip, the terminal may generate the reference multimedia clip based on the adjustment instruction. For example, when the terminal receives a clipping instruction for the sample multimedia and generates a 32-second sample multimedia segment, the terminal may receive an adjustment instruction for the sample multimedia segment, for example, the adjustment instruction may be to adjust the 32-second sample multimedia segment to a 30-second sample multimedia segment. When the terminal receives the adjustment instruction, the terminal may generate a 30-second sample multimedia fragment, where the 30-second sample multimedia fragment is the reference multimedia fragment.
Alternatively, when the terminal generates a sample multimedia clip, the terminal may issue a prompt message, which may be, for example, "whether to determine the sample multimedia clip as a reference multimedia clip". When the terminal receives the determination information for the cue information, the terminal may determine the sample multimedia clip as the reference multimedia clip. When the terminal receives an adjustment instruction for the sample multimedia clip, the terminal may adjust the sample multimedia clip to generate the reference multimedia clip based on the adjustment instruction.
S202, selecting the reference multimedia segment, and obtaining the first duration of the reference multimedia segment.
The specific process is as described above, and is not described herein again.
S203, receiving a clipping instruction for the target multimedia, and acquiring a second duration of the target multimedia.
According to some embodiments, when the terminal has obtained the first duration of the reference multimedia segment, the terminal may receive a clipping instruction for the target multimedia. Upon receiving the clipping instruction, the terminal may obtain a second duration of the target multimedia, for example by using a duration determination algorithm or a text recognition algorithm. When the terminal receives a voice clipping instruction for the target multimedia, the second duration of the target multimedia obtained by the terminal may be, for example, 60 seconds.
And S204, when the second duration is longer than the first duration, clipping the target multimedia according to the first duration.
According to some embodiments, when the terminal obtains the second duration of the target multimedia, the terminal may detect whether the second duration is greater than the first duration of the reference multimedia segment. If so, the terminal clips the target multimedia according to the first duration. For example, the second duration obtained by the terminal may be 60 seconds and the first duration may be 30 seconds. When the terminal detects that the second duration of 60 seconds is greater than the first duration of 30 seconds, the terminal clips the target multimedia according to the first duration and generates two 30-second target multimedia segments.
Referring to fig. 6, a flow chart of a multimedia clipping method according to some embodiments of the present application is provided. As shown in fig. 6, the method of the embodiment of the present application may include the following steps S401 to S402. S401, clipping the target multimedia according to the first time length and the playing sequence of the target multimedia; s402, when the remaining duration of the clipped target multimedia is less than or equal to the first duration, the clipping is finished.
According to some embodiments, when the terminal detects that the second duration is greater than the first duration and clips the target multimedia according to the first duration, the terminal may clip the target multimedia according to the first duration and the playing order of the target multimedia, for example from beginning to end. For example, the second duration of the target multimedia obtained by the terminal may be 60 seconds and the first duration of the reference multimedia segment may be 30 seconds. When the terminal detects that the second duration is greater than the first duration of 30 seconds, the terminal clips the target multimedia according to the first duration, in playing order from beginning to end, and generates two 30-second target multimedia segments.
It is easy to understand that when the terminal clips the target multimedia once, the terminal can acquire the remaining time period of the clipped target multimedia. When the terminal acquires the remaining duration of the clipped target multimedia, the terminal may detect whether the remaining duration of the clipped target multimedia is greater than the first duration. And when the terminal detects that the residual duration of the clipped target multimedia is longer than the first duration, the terminal continues clipping the target multimedia according to the first duration and the playing sequence of the target multimedia. And finishing the clipping when the remaining duration of the target multimedia after the clipping by the terminal is less than or equal to the first duration.
Optionally, the second duration of the target multimedia acquired by the terminal may be, for example, 65 seconds, and the first duration of the reference multimedia segment acquired by the terminal may be, for example, 30 seconds. When the terminal clips the target multimedia once and the remaining duration of the clipped target multimedia is 35 seconds, the terminal detects that the remaining duration of 35 seconds is longer than the first duration of 30 seconds, and the terminal continues to clip the target multimedia. When the terminal clips the target multimedia a second time and the remaining duration of the clipped target multimedia is 5 seconds, the terminal detects that the remaining duration of 5 seconds is less than the first duration of 30 seconds, and the clipping is finished.
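The clipping loop of steps S401 to S402 can be sketched as follows. This is a minimal illustration, not the embodiments' implementation: the function name is invented, and the choice to keep the trailing remainder as a final, shorter segment (which reproduces the two 30-second segments of the 60-second example above) is an assumption the embodiments leave open.

```python
def clip_by_reference(second_duration: float, first_duration: float) -> list[tuple[float, float]]:
    """Clip a target multimedia of `second_duration` seconds into segments of at
    most `first_duration` seconds, in playing order from beginning to end (S401).
    Clipping ends once the remaining duration is <= the first duration (S402)."""
    segments = []
    start = 0.0
    while second_duration - start > first_duration:
        segments.append((start, start + first_duration))  # one clip per pass
        start += first_duration
    if second_duration > start:
        # Assumption: keep the remainder (<= first_duration) as the last segment.
        segments.append((start, second_duration))
    return segments

# 60-second target, 30-second reference segment: two 30-second clips,
# as in the 60 s / 30 s example above.
print(clip_by_reference(60, 30))  # [(0.0, 30.0), (30.0, 60.0)]
```

An implementation that instead discards any segment shorter than the first duration would simply drop the final tuple.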
S205, ending the clipping when the second duration is less than or equal to the first duration.
According to some embodiments, when the terminal acquires the second duration of the target multimedia, the terminal may detect whether the second duration of the target multimedia is greater than the first duration of the reference multimedia segment. When the terminal detects that the second duration is less than or equal to the first duration, the terminal may end the clipping. For example, the second duration of the target multimedia acquired by the terminal may be 20 seconds, and the first duration of the reference multimedia segment acquired by the terminal may be 30 seconds. When the terminal detects that the second duration of 20 seconds is less than the first duration of 30 seconds, the terminal may end the clipping of the target multimedia.
Referring to fig. 7, a flowchart of a multimedia clipping method according to some embodiments of the present application is provided. As shown in fig. 7, the method of the embodiment of the present application may include the following steps S501 to S502. S501, receiving somatosensory operation aiming at a target multimedia; s502, clipping the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, a somatosensory operation refers to an operation that can trigger the terminal to clip the target multimedia. The somatosensory operation includes, but is not limited to, a shaking operation, a movement operation, a long-press operation, a gesture operation, and the like. The gesture operation may be a gesture operation received by the display screen of the terminal, or a gesture operation captured by a camera of the terminal. The terminal may, for example, preset the somatosensory operation.
It is easy to understand that when the terminal receives the somatosensory operation for the target multimedia, the terminal can detect whether the received somatosensory operation is consistent with the preset somatosensory operation. When the terminal detects that the received somatosensory operation is consistent with the preset somatosensory operation, the terminal can clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, fig. 8 shows an example schematic diagram of a somatosensory operation of an embodiment of the application. As shown in fig. 8, the somatosensory operation preset by the terminal may be, for example, a shaking operation, such as shaking the terminal. When the terminal detects a shaking operation, input by the user, for the target multimedia, the terminal can detect whether the received shaking operation is consistent with the preset somatosensory operation. When the terminal detects that the received shaking operation is consistent with the preset somatosensory operation, the terminal can clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, fig. 9 shows an example schematic diagram of a somatosensory operation of an embodiment of the application. As shown in fig. 9, the somatosensory operation preset by the terminal may be, for example, a movement operation, such as moving the terminal along a preset trajectory. When the terminal detects a movement operation, input by the user, for the target multimedia, the terminal can detect whether the received movement operation is consistent with the preset somatosensory operation. When the terminal detects that the movement trajectory of the terminal is consistent with the preset trajectory, the terminal can clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
Referring to fig. 10, a flow chart of a multimedia clipping method according to some embodiments of the present application is provided. As shown in fig. 10, the method of the embodiment of the present application may include the following steps S601 to S602. S601, acquiring somatosensory parameters corresponding to somatosensory operation; and S602, when the somatosensory parameters are matched with the preset parameters, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia.
According to some embodiments, when the terminal receives a somatosensory operation for the target multimedia, the terminal can acquire the somatosensory parameters corresponding to the somatosensory operation. The terminal can then detect whether the somatosensory parameters corresponding to the somatosensory operation match the preset parameters. When the terminal detects that the somatosensory parameters corresponding to the somatosensory operation match the preset parameters, the terminal can clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia. By detecting the somatosensory parameters corresponding to the somatosensory operation, the terminal can reduce misoperation and improve the efficiency of multimedia clipping.
It will be readily appreciated that the preset parameter set by the terminal may be, for example, a shaking angle greater than 30°. When the terminal detects a shaking operation, input by the user, for the target multimedia, the terminal can acquire the shaking parameter corresponding to the shaking operation. The shaking parameter acquired by the terminal may be, for example, a shaking angle of 50°. When the terminal detects that the shaking parameter corresponding to the shaking operation matches the preset parameter, the terminal can clip the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
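A minimal sketch of the parameter check in steps S601 and S602, reusing the 30° threshold and the 50° shake angle from the example above; the function name, the way the shake angle would be obtained from the device's motion sensors, and the keep-the-remainder behavior of the clipping loop are all illustrative assumptions:

```python
PRESET_SHAKE_ANGLE = 30.0  # degrees; the preset parameter from the example above

def handle_somatosensory_operation(shake_angle: float,
                                   second_duration: float,
                                   first_duration: float):
    """S601: acquire the somatosensory parameter (here, a shake angle assumed to
    be reported by the device's motion sensors). S602: clip only when the
    parameter matches the preset parameter, which reduces accidental triggers."""
    if shake_angle <= PRESET_SHAKE_ANGLE:
        return None  # parameter does not match the preset: ignore the operation
    # Parameter matches: clip the target multimedia by the first duration.
    segments = []
    start = 0.0
    while second_duration - start > first_duration:
        segments.append((start, start + first_duration))
        start += first_duration
    if second_duration > start:
        segments.append((start, second_duration))  # trailing remainder (assumed kept)
    return segments

# A 50-degree shake (as in the example) triggers clipping; a 10-degree one does not.
print(handle_somatosensory_operation(50.0, 60.0, 30.0))  # [(0.0, 30.0), (30.0, 60.0)]
print(handle_somatosensory_operation(10.0, 60.0, 30.0))  # None
```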
Referring to fig. 11, a flow chart of a multimedia clipping method according to some embodiments of the present application is provided. As shown in fig. 11, the method of the embodiment of the present application may include the following steps S701 to S702. S701, the target multimedia comprises a plurality of sub-media, and a clipping instruction aiming at the target multimedia is received; s702, clipping the plurality of sub-media according to the first time length to generate a multimedia segment set corresponding to each sub-media.
According to some embodiments, the target multimedia may include a plurality of sub-media, and the durations of the plurality of sub-media may be the same or different. For example, the target multimedia may include four sub-media each having a duration of 50 seconds, or it may include a 45-second D sub-medium, a 40-second F sub-medium, a 15-second B sub-medium, and a 55-second V sub-medium.
It is easy to understand that when the terminal receives a clipping instruction for the target multimedia, the terminal may clip the plurality of sub-media separately according to the first duration, generating a multimedia segment set corresponding to each sub-medium. The first duration of the reference multimedia segment acquired by the terminal may be, for example, 15 seconds. When the terminal receives a clipping instruction for the target multimedia, the terminal can clip each of the sub-media according to the first duration. The terminal may then generate a D sub-media set including 3 multimedia segments, an F sub-media set including 3 multimedia segments, a B sub-media set including 1 multimedia segment, and a V sub-media set including 4 multimedia segments.
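Assuming each sub-medium is clipped independently and any remainder shorter than the first duration is kept as a final segment (an assumption that reproduces the D, F, and B counts in the example above and yields four segments for the 55-second V sub-medium), step S702 can be sketched as follows; the function name and dictionary representation are illustrative:

```python
def clip_sub_media(sub_durations: dict[str, float],
                   first_duration: float) -> dict[str, list[tuple[float, float]]]:
    """S702: clip each sub-medium separately by the first duration, producing
    one multimedia segment set per sub-medium."""
    result = {}
    for name, duration in sub_durations.items():
        segments, start = [], 0.0
        while duration - start > first_duration:
            segments.append((start, start + first_duration))
            start += first_duration
        if duration > start:
            segments.append((start, duration))  # remainder kept (assumption)
        result[name] = segments
    return result

# Sub-media durations from the example above, with a 15-second reference segment.
sets = clip_sub_media({"D": 45, "F": 40, "B": 15, "V": 55}, 15)
print({name: len(segs) for name, segs in sets.items()})  # {'D': 3, 'F': 3, 'B': 1, 'V': 4}
```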
The embodiment of the application provides a multimedia clipping method in which a reference multimedia segment can be generated by receiving a clipping instruction for a sample multimedia, and the terminal can clip the target multimedia based on the first duration of the reference multimedia segment. The user therefore only needs to clip the reference multimedia segment manually and does not need to clip multimedia segments one by one, which improves the efficiency of clipping multiple multimedia items to the same duration. In addition, because the terminal clips the target multimedia based on the first duration of the reference multimedia segment rather than a preset duration, the chance that the clipped segments fail to meet the user's requirements is reduced, further improving clipping efficiency. Finally, upon receiving a clipping instruction for the target multimedia, the terminal can acquire the second duration of the target multimedia and clip the target multimedia according to the first duration when the second duration is longer than the first duration; this reduces errors from manual clipping, reduces repeated manual operations, shortens clipping time, and improves the efficiency of multimedia clipping.
The multimedia clipping apparatus provided by the embodiment of the present application will be described in detail below with reference to fig. 12. It should be noted that the multimedia clipping apparatus shown in fig. 12 is used for executing the methods of the embodiments shown in figs. 2 to 11 of the present application. For convenience of description, only the portions related to the embodiments of the present application are shown; for technical details that are not disclosed here, please refer to the embodiments shown in figs. 2 to 11 of the present application.
Please refer to fig. 12, which shows a schematic structural diagram of a multimedia clip apparatus according to an embodiment of the present application. The multimedia clip apparatus 1200 may be implemented as all or a part of a user terminal through software, hardware, or a combination of both. According to some embodiments, the multimedia clip apparatus 1200 includes a duration obtaining unit 1201 and a multimedia clip unit 1202, and is specifically configured to:
A duration obtaining unit 1201, configured to select a reference multimedia segment, and obtain a first duration of the reference multimedia segment;
The multimedia clipping unit 1202 is configured to receive a clipping instruction for a target multimedia, clip the target multimedia according to a first duration, and generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the multimedia clipping apparatus 1200 further comprises a segment generating unit 1203 configured to receive a clipping instruction for the sample multimedia before the reference multimedia segment is selected, and generate the reference multimedia segment.
According to some embodiments, the segment generating unit 1203 is configured to receive a clipping instruction for the sample multimedia, and when generating the reference multimedia segment, specifically configured to:
Receiving a clipping instruction aiming at sample multimedia, and generating a sample multimedia segment;
And receiving an adjusting instruction aiming at the sample multimedia segment, and generating a reference multimedia segment.
According to some embodiments, the multimedia clipping unit 1202 is configured to receive a clipping instruction for a target multimedia, and when clipping the target multimedia according to a first duration, specifically:
Receiving a clipping instruction aiming at the target multimedia, and acquiring a second duration of the target multimedia;
And when the second duration is longer than the first duration, clipping the target multimedia according to the first duration.
According to some embodiments, the multimedia clipping unit 1202, when clipping the target multimedia according to the first duration when the second duration is longer than the first duration, is specifically configured to:
Editing the target multimedia according to the first time length and the playing sequence of the target multimedia;
And ending the clipping when the remaining duration of the clipped target multimedia is less than or equal to the first duration.
According to some embodiments, the multimedia clipping apparatus 1200 further comprises a clip ending unit 1204 for ending the clipping when the second duration is less than or equal to the first duration.
According to some embodiments, the multimedia clipping unit 1202 is configured to receive a clipping instruction for a target multimedia, clip the target multimedia according to a first duration, and when a multimedia segment set corresponding to the target multimedia is generated, specifically configured to:
And receiving somatosensory operation aiming at the target multimedia, editing the target multimedia according to the first time length, and generating a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the multimedia clipping unit 1202 is configured to clip the target multimedia according to the first duration, and when generating the set of multimedia fragments corresponding to the target multimedia, specifically configured to:
Acquiring somatosensory parameters corresponding to the somatosensory operation;
And when the somatosensory parameters are matched with the preset parameters, clipping the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the target multimedia includes a plurality of sub-media, and the multimedia clipping unit 1202 is configured to receive a clipping instruction for the target multimedia, and when clipping the target multimedia according to the first duration, is specifically configured to:
And receiving a clipping instruction aiming at the target multimedia, clipping the plurality of sub-media according to the first time length, and generating a multimedia segment set corresponding to each sub-media.
The embodiment of the application provides a multimedia clipping apparatus in which the duration obtaining unit selects a reference multimedia segment and obtains a first duration of the reference multimedia segment, and the multimedia clipping unit can receive a clipping instruction for a target multimedia, clip the target multimedia according to the first duration, and generate a multimedia segment set corresponding to the target multimedia, thereby completing batch clipping of the target multimedia. When clipping multiple multimedia items, the apparatus can clip the target multimedia according to the first duration of the reference multimedia segment, so the user does not need to clip the multimedia segments one by one manually; this reduces repeated manual operations, shortens clipping time, and improves the efficiency of multimedia clipping.
Please refer to fig. 13, which is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 13, the terminal 1300 may include: at least one processor 1301, at least one network interface 1304, a user interface 1303, memory 1305, at least one communication bus 1302.
Wherein a communication bus 1302 is used to enable connective communication between these components.
The user interface 1303 may include a display screen (Display) and a GPS module; optionally, the user interface 1303 may also include a standard wired interface and a wireless interface.
The network interface 1304 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface).
The processor 1301 connects the various parts of the terminal 1300 using various interfaces and lines, and executes various functions of the terminal 1300 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 1305 and by calling data stored in the memory 1305. Optionally, the processor 1301 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1301 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, wherein the CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 1301 and may instead be implemented by a separate chip.
The memory 1305 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1305 includes a non-transitory computer-readable medium. The memory 1305 may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory 1305 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data and the like referred to in the above respective method embodiments. The memory 1305 may optionally be at least one storage device located remotely from the processor 1301. As shown in fig. 13, the memory 1305, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program for multimedia clipping.
In the terminal 1300 shown in fig. 13, the user interface 1303 is mainly used for providing an input interface for a user to obtain data input by the user; and the processor 1301 may be adapted to invoke an application of the multimedia clip stored in the memory 1305 and specifically perform the following operations:
Selecting a reference multimedia segment, and acquiring a first duration of the reference multimedia segment;
And receiving a clipping instruction aiming at the target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the processor 1301 is further specifically configured to, before selecting the reference multimedia segment, perform the following steps:
A clipping instruction for the sample multimedia is received, generating a reference multimedia segment.
According to some embodiments, the processor 1301 is configured to receive a clipping instruction for the sample multimedia, and when generating the reference multimedia segment, to perform the following steps:
Receiving a clipping instruction aiming at sample multimedia, and generating a sample multimedia segment;
And receiving an adjusting instruction aiming at the sample multimedia segment, and generating a reference multimedia segment.
According to some embodiments, the processor 1301 is configured to receive a clipping instruction for a target multimedia, and when the target multimedia is clipped according to the first duration, to specifically perform the following steps:
Receiving a clipping instruction aiming at the target multimedia, and acquiring a second duration of the target multimedia;
And when the second duration is longer than the first duration, clipping the target multimedia according to the first duration.
According to some embodiments, the processor 1301 is configured to, when the second duration is longer than the first duration and the target multimedia is clipped according to the first duration, specifically, perform the following steps:
Editing the target multimedia according to the first time length and the playing sequence of the target multimedia;
And ending the clipping when the remaining duration of the clipped target multimedia is less than or equal to the first duration.
According to some embodiments, processor 1301 is further specifically configured to perform the following steps:
And ending the clipping when the second duration is less than or equal to the first duration.
According to some embodiments, the processor 1301 is configured to receive a clipping instruction for a target multimedia, clip the target multimedia according to a first duration, and when a multimedia fragment set corresponding to the target multimedia is generated, specifically configured to execute the following steps:
And receiving somatosensory operation aiming at the target multimedia, editing the target multimedia according to the first time length, and generating a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the processor 1301 is configured to clip the target multimedia according to the first duration, and when generating the multimedia fragment set corresponding to the target multimedia, specifically configured to execute the following steps:
Acquiring somatosensory parameters corresponding to the somatosensory operation;
And when the somatosensory parameters are matched with the preset parameters, clipping the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia.
According to some embodiments, the target multimedia includes a plurality of sub-media, and the processor 1301 is configured to receive a clipping instruction for the target multimedia, and when clipping the target multimedia according to the first duration, is specifically configured to perform the following steps:
And receiving a clipping instruction aiming at the target multimedia, clipping the plurality of sub-media according to the first time length, and generating a multimedia segment set corresponding to each sub-media.
The embodiment of the application provides a terminal that obtains a first duration of a reference multimedia segment and, when receiving a clipping instruction for a target multimedia, clips the target multimedia according to the first duration to generate a multimedia segment set corresponding to the target multimedia, thereby completing batch clipping of the target multimedia. When clipping the target multimedia, the terminal can clip according to the first duration of the reference multimedia segment, so the user does not need to clip the multimedia segments one by one manually; this reduces errors from manual clipping, reduces the time the user spends repeatedly checking clip durations, shortens clipping time, and improves the efficiency of multimedia clipping.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described method. The computer-readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the multimedia clipping methods as described in the above method embodiments.
"Unit" and "module" in this specification refer to software and/or hardware that can perform a particular function, either independently or in conjunction with other components, such as a Field Programmable Gate Array (FPGA), Integrated Circuit (IC), etc.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some service interfaces, devices or units, and may be an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory, and the memory may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A method of multimedia clipping, the method comprising:
Selecting a reference multimedia segment, and acquiring a first duration of the reference multimedia segment;
And receiving a clipping instruction aiming at the target multimedia, clipping the target multimedia according to the first time length, and generating a multimedia segment set corresponding to the target multimedia.
2. The method of claim 1, wherein the selecting the reference multimedia segment is preceded by:
A clipping instruction for the sample multimedia is received, generating a reference multimedia segment.
3. The method of claim 2, wherein receiving a clipping instruction for the sample multimedia, generating a reference multimedia segment comprises:
Receiving a clipping instruction aiming at sample multimedia, and generating a sample multimedia segment;
And receiving an adjusting instruction aiming at the sample multimedia segment, and generating a reference multimedia segment.
4. The method of claim 1, wherein receiving a clipping instruction for a target multimedia, clipping the target multimedia according to the first time duration comprises:
Receiving a clipping instruction aiming at a target multimedia, and acquiring a second duration of the target multimedia;
And when the second time length is longer than the first time length, clipping the target multimedia according to the first time length.
5. The method of claim 4, wherein clipping the target multimedia according to the first duration when the second duration is longer than the first duration comprises:
Editing the target multimedia according to the first time length and the playing sequence of the target multimedia;
And ending the clipping when the remaining duration of the clipped target multimedia is less than or equal to the first duration.
6. The method of claim 4, further comprising:
And ending the clipping when the second duration is less than or equal to the first duration.
7. The method of claim 1, wherein the receiving a clipping instruction for the target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia comprises:
Receiving a somatosensory operation for the target multimedia, clipping the target multimedia according to the first duration, and generating a multimedia segment set corresponding to the target multimedia.
8. The method of claim 7, wherein the clipping the target multimedia according to the first duration to generate the multimedia segment set corresponding to the target multimedia comprises:
Acquiring somatosensory parameters corresponding to the somatosensory operation;
And when the somatosensory parameters match preset parameters, clipping the target multimedia according to the first duration to generate the multimedia segment set corresponding to the target multimedia.
9. The method according to any one of claims 1-8, wherein the target multimedia comprises a plurality of sub-media, and the receiving a clipping instruction for the target multimedia and clipping the target multimedia according to the first duration comprises:
Receiving a clipping instruction for the target multimedia, clipping the plurality of sub-media according to the first duration, and generating a multimedia segment set corresponding to each sub-medium.
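Claim 9 extends the same clipping to each sub-medium independently, producing one segment set per sub-medium. A self-contained sketch, with all names and the dict-keyed input format chosen for illustration rather than taken from the patent:

```python
def clip_sub_media(first_duration: float, sub_durations: dict) -> dict:
    """Claim 9 sketch: clip every sub-medium against the same reference
    duration, yielding one segment set per sub-medium."""
    def segment(duration: float) -> list:
        # Per claims 5-6: clip in playback order and end once the
        # remaining duration is <= first_duration.
        out, start = [], 0.0
        while duration - start > first_duration:
            out.append((start, start + first_duration))
            start += first_duration
        return out
    # One independent segment set per named sub-medium.
    return {name: segment(d) for name, d in sub_durations.items()}
```

A sub-medium shorter than the reference duration simply yields an empty segment set, consistent with claim 6.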
10. A multimedia clipping apparatus, comprising:
A duration acquiring unit, configured to select a reference multimedia segment and acquire a first duration of the reference multimedia segment;
And a multimedia clipping unit, configured to receive a clipping instruction for the target multimedia, clip the target multimedia according to the first duration, and generate a multimedia segment set corresponding to the target multimedia.
11. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-9 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-9.
CN202010349236.1A 2020-04-28 2020-04-28 Multimedia editing method, device, terminal and storage medium Pending CN111510787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010349236.1A CN111510787A (en) 2020-04-28 2020-04-28 Multimedia editing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN111510787A true CN111510787A (en) 2020-08-07

Family

ID=71864353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010349236.1A Pending CN111510787A (en) 2020-04-28 2020-04-28 Multimedia editing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111510787A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110246937A1 (en) * 2010-03-31 2011-10-06 Verizon Patent And Licensing, Inc. Enhanced media content tagging systems and methods
CN104581378A (en) * 2013-10-16 2015-04-29 三星电子株式会社 Apparatus and method for editing synchronous media
CN105657537A (en) * 2015-12-23 2016-06-08 小米科技有限责任公司 Video editing method and device
CN106899809A (en) * 2017-02-28 2017-06-27 广州市诚毅科技软件开发有限公司 A kind of video clipping method and device based on deep learning
CN107615766A (en) * 2015-04-16 2018-01-19 维斯克体育科技有限公司 System and method for creating and distributing content of multimedia
CN108024073A (en) * 2017-11-30 2018-05-11 广州市百果园信息技术有限公司 Video editing method, device and intelligent mobile terminal
CN108391063A (en) * 2018-02-11 2018-08-10 北京秀眼科技有限公司 Video clipping method and device
CN108900905A (en) * 2018-08-08 2018-11-27 北京未来媒体科技股份有限公司 A kind of video clipping method and device
CN109151162A (en) * 2018-06-27 2019-01-04 努比亚技术有限公司 A kind of multi-panel screen interaction control method, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN108628652B (en) User interface rendering method and device and terminal
US11632576B2 (en) Live video broadcast method, live broadcast device and storage medium
US20180255359A1 (en) Method for sharing a captured video clip and electronic device
CN106998494B (en) Video recording method and related device
US11025967B2 (en) Method for inserting information push into live video streaming, server, and terminal
CN104869305B (en) Method and apparatus for processing image data
US11323750B2 (en) Video system and video processing method, device and computer readable medium
CN107734353B (en) Method and device for recording barrage video, readable storage medium and equipment
US20170195387A1 (en) Method and Electronic Device for Increasing Start Play Speed
EP2667629B1 (en) Method and apparatus for multi-playing videos
CN109379548B (en) Multimedia recording method, device, terminal and storage medium
CN109168012B (en) Information processing method and device for terminal equipment
US11893054B2 (en) Multimedia information processing method, apparatus, electronic device, and medium
US10468029B2 (en) Communication terminal, communication method, and computer program product
CN108966315B (en) Wireless network acquisition method and device and electronic equipment
US10133408B2 (en) Method, system and computer program product
JP2016526246A (en) User data update method, apparatus, program, and recording medium
CN104754222A (en) Terminal camera shooting method and terminal
CN111510787A (en) Multimedia editing method, device, terminal and storage medium
CN115941869A (en) Audio processing method and device and electronic equipment
CN113038218B (en) Video screenshot method, device, equipment and readable storage medium
CN111954041A (en) Video loading method, computer equipment and readable storage medium
JP2023016858A (en) Communication system, communication device, and program
CN112911337B (en) Method and device for configuring video cover pictures of terminal equipment
CN113157178B (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200807