CN110703976B - Clipping method, electronic device, and computer-readable storage medium


Info

Publication number
CN110703976B
CN110703976B (application CN201910800267.1A)
Authority
CN
China
Prior art keywords
video
terminal
clipping
pressing pressure
screen
Prior art date
Legal status
Active
Application number
CN201910800267.1A
Other languages
Chinese (zh)
Other versions
CN110703976A (en
Inventor
张健
钟宜峰
马晓琳
莫东松
张进
赵璐
马丹
Current Assignee
MIGU Culture Technology Co Ltd
Original Assignee
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by MIGU Culture Technology Co Ltd
Priority claimed from CN201910800267.1A
Publication of CN110703976A
Application granted
Publication of CN110703976B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Abstract

Embodiments of the invention relate to the field of communications and disclose a clipping method, an electronic device, and a computer-readable storage medium. The clipping method includes: acquiring the time period during which a terminal is continuously triggered, where the terminal is playing a video to be clipped; determining the video segment of the video to be clipped that was played during that time period; and clipping the determined video segment to obtain the clipped video. This improves both the convenience and the speed of clipping.

Description

Clipping method, electronic device, and computer-readable storage medium
Technical Field
Embodiments of the present invention relate to the field of communications, and in particular to a clipping method, an electronic device, and a computer-readable storage medium.
Background
At present, obtaining a video segment that meets a given requirement usually relies on manual work combined with clipping software: the time points at which the clip starts and ends are determined by hand, and the software clips the video based on those manually determined time points.
However, the inventors found at least the following problem in the related art: to determine the clip time points manually, the user drags the progress bar back and forth while the video plays and repeatedly confirms the start and end times of the clip, which is time-consuming, cumbersome to operate, and gives a poor user experience.
Disclosure of Invention
An object of embodiments of the present invention is to provide a clipping method, an electronic device, and a computer-readable storage medium that improve the convenience of clipping and increase its speed.
To solve the above technical problem, an embodiment of the present invention provides a clipping method, including: acquiring the time period during which a terminal is continuously triggered, where the terminal is playing a video to be clipped; determining the video segment of the video to be clipped played during that time period; and clipping the determined video segment to obtain the clipped video.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the clipping method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the clipping method described above.
Compared with the prior art, embodiments of the invention acquire the time period during which the terminal is continuously triggered while it plays the video to be clipped, determine the video segment played during that period, and clip that segment to obtain the clipped video. In other words, when a clip is needed, the terminal can simply be kept triggered, and the segment played during the trigger period is treated as the segment to be clipped. The user can therefore clip any segment of interest at any moment while watching the video, which improves the convenience of clipping. Moreover, because the segment to be clipped is determined by the period of continuous triggering, there is no need to watch the playback closely or drag the progress bar back and forth; the start and end time points of the clip are determined more easily and quickly, which increases the speed of clipping.
In addition, the video to be clipped may be an online video. In the related art, clipping online video is difficult: the video generally has to be downloaded locally and clipped offline, which is inconvenient. In an embodiment of the invention, the user can simply keep triggering the terminal while watching the online video, and the segment played during the trigger period is clipped automatically; the online video does not need to be downloaded, which further improves the convenience of clipping.
In addition, acquiring the time period during which the terminal is continuously triggered may specifically be acquiring the time period during which the terminal's screen is continuously pressed, and clipping the determined video segment may include: acquiring the starting video frame of the segment; identifying the object in the starting video frame located within the pressed area of the screen; and clipping each video frame of the segment that contains that object. In other words, the continuous trigger may take the form of a continuous press on the screen: at the moment the press begins, the object within the pressed area of the picture is taken as the object of interest to the user, i.e., the target object, and every frame of the clipped segment contains it, which helps clip a segment containing the object the user cares about conveniently and accurately.
In addition, before clipping each video frame containing the object, the method may further include: if multiple objects are identified within the pressed area of the starting video frame, acquiring the area each object occupies in that frame and taking the object with the smallest area as the target object; clipping then keeps each frame of the segment that contains the target object. Objects may overlap, so several objects can lie within the pressed area, and the one occupying the smallest area is the most likely to be the object the user is really interested in; taking it as the target object and clipping each frame that contains it makes it more likely that the clipped video is the one the user actually wants.
In addition, if a press on the terminal's screen is detected, the pressing pressure during the continuous press may be acquired, the playing speed corresponding to that pressure obtained, and the video to be clipped played at the obtained speed. That is, the pressure on the screen controls the playback speed while the video is being clipped, which helps control the clipping speed according to actual needs.
In addition, calculating the playing speed corresponding to the pressing pressure from the pressure, a preset playback speed-multiplier range, and the terminal's pressure detection range may specifically use the following formula:
v_t = ((N_max - N_min)(p_t - p_min) / (p_max - p_min) + N_min) * v_0
where v_t is the calculated playing speed, p_t is the pressing pressure, N_max and N_min are the upper and lower limits of the speed-multiplier range, p_max and p_min are the upper and lower limits of the pressure detection range, and v_0 is the playing speed at multiplier 1. This concrete formula lets the playing speed corresponding to the pressing pressure be calculated conveniently and quickly.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a flowchart of a clipping method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a clipping method according to a second embodiment of the present invention;
Fig. 3 is a diagram of a starting video frame in the second embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that many technical details are set forth in the embodiments to aid understanding of the present application; however, the technical solution claimed in this application can be implemented without these details, and various changes and modifications can be made based on the following embodiments. The division into embodiments is for convenience of description only, does not limit the specific implementation of the invention, and the embodiments may be combined and cross-referenced with one another where there is no contradiction.
The first embodiment of the present invention relates to a clipping method applied to an electronic device, where the electronic device may be a terminal such as a mobile phone or a computer, or a server; this embodiment does not limit the type of device. Implementation details of the clipping method are described below; they are provided only to ease understanding and are not required to implement this embodiment.
A flowchart of the clipping method in the present embodiment is shown in fig. 1, and specifically includes:
step 101: and acquiring the time period of the terminal which is continuously triggered.
The terminal is playing the video to be clipped, which may be an offline video cached on the terminal or an online video; an online video may be on-demand or live. Note that in this embodiment the online video can be clipped directly, without first downloading it locally, which makes clipping online video simpler and more convenient. The terminal may be a mobile phone, a laptop, a tablet, and so on; for ease of explanation, this and the following embodiments use a mobile phone as the example, without limiting the invention. The phone being continuously triggered may mean: its screen is continuously pressed, a volume key on the side of the phone is continuously pressed, a preset virtual button on the screen is continuously pressed, the body of the phone is continuously held at a preset angle to the horizontal plane, the phone continuously receives a preset voice instruction, and so on. The preset virtual button, preset angle, and preset voice instruction can be set according to actual needs and are not specifically limited in this embodiment.
In one example, the phone being continuously triggered specifically means it continuously receives a preset voice indication, which may be the user's cheering or laughter. For example, while watching a video a user may cheer continuously at a highlight or laugh continuously at a funny segment. The phone can treat the period during which it receives the continuous cheering or laughter as the period of continuous triggering, so the video segment played during that period is clipped in the subsequent steps. In other words, the phone automatically clips the segments that made the user cheer or laugh, letting the user review those highlights or funny moments at any time, which improves the user experience.
Specifically, the phone may start a timer when it detects that the trigger has just begun and stop the timer when it detects that the trigger has ended; the timed interval is the period during which the phone was continuously triggered.
In a specific implementation, before acquiring the time period during which the terminal is continuously triggered, it may first be determined whether the terminal has entered clipping mode. In one example, when a preset physical or virtual button is detected as clicked while the terminal plays a video, the terminal may be considered to have entered clipping mode; the button can be chosen according to actual needs to remind the user that pressing it switches the terminal into video clipping mode, and this embodiment is not specifically limited in this respect. In another example, the terminal may be considered to have entered clipping mode if, while playing a video, it receives a voice instruction from the user indicating that clipping mode should be entered; the voice instruction may be preset according to actual needs and is likewise not specifically limited. Note that these two implementations are given for convenience of description, and determining that the terminal has entered clipping mode is not limited to them.
In one example, the electronic device may be a terminal, and the terminal may acquire a time period for which the terminal is continuously triggered after detecting that the terminal enters the clip mode. In another example, the electronic device may be a server, and the terminal may acquire a time period during which the terminal is continuously triggered after detecting that the terminal enters the clip mode, and then transmit the acquired time period to the server.
Step 102: and determining the video segments of the video to be clipped played in the time period.
That is, while the video to be clipped is playing, determine which of its segments was played during the time period in which the terminal was continuously triggered.
In one example, the time period is a time period during which the mobile phone continuously receives laughter or cheering of the user, and the video segment determined in the time period is a video segment causing the cheering or laughter of the user.
Step 103: and clipping the determined video segments to obtain clipped videos.
Specifically, the determined video segment is the segment that needs to be clipped: the timestamp of its starting video frame is extracted as the clip start time point, the timestamp of its ending video frame is extracted as the clip end time point, and the portion between the two time points is clipped out to obtain the clipped video.
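As a minimal sketch of this step, the hypothetical helper below picks the timestamps of the first and last frames played inside the trigger period. The function name and the list-of-timestamps representation are illustrative assumptions, not part of the patent:

```python
def clip_interval(frame_timestamps, trigger_start, trigger_end):
    # Keep only frames whose timestamps fall inside the trigger period,
    # then take the first and last as the clip's start and end time points.
    frames_in_period = [t for t in frame_timestamps
                        if trigger_start <= t <= trigger_end]
    return frames_in_period[0], frames_in_period[-1]
```

With frames every 0.5 s and a press lasting from 0.4 s to 1.6 s, for instance, the clip would run from the 0.5 s frame to the 1.5 s frame.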
In one example, the clipped video may be automatically stored in a preset storage space in the terminal, for example, may be stored in a gallery in the mobile phone, so that the user can conveniently view the clipped video after the clipping is completed.
It is worth mentioning that the terminal can control the playback speed of the video being clipped according to the pressure with which the screen is pressed, so the clipping speed can be adapted to actual needs. Specifically, if a press on the terminal's screen is detected, the pressing pressure may be measured, the playing speed corresponding to that pressure obtained, and the video to be clipped played at the obtained speed. For example, a pressure sensor may be arranged under the terminal's screen to detect the pressing pressure; a correspondence between pressing pressure and playing speed can be stored in the terminal in advance, and the playback speed of the playing video is adjusted according to the pressure detected at the current moment.
In one example, the playing speed corresponding to the pressing pressure may be calculated from the pressing pressure, a preset speed-multiplier range, and the terminal's pressure detection range. The preset multiplier range can be set according to actual needs and is not specifically limited here; for example, a range of [0.2, 5] means the video may play at anywhere from 0.2 times to 5 times normal speed. The pressure detection range of the terminal may be that of its built-in pressure sensor.
In one example, the play speed corresponding to the pressing pressure may be calculated according to the following formula:
v_t = ((N_max - N_min)(p_t - p_min) / (p_max - p_min) + N_min) * v_0
where v_t is the calculated playing speed, p_t is the pressing pressure, N_max and N_min are the upper and lower limits of the speed-multiplier range, p_max and p_min are the upper and lower limits of the pressure detection range, and v_0 is the playing speed at multiplier 1. For example, the terminal's pressure sensor can sample the user's pressing pressure in real time during second t and record it as p_t; p_t then determines the playback speed between second t and second t+1, i.e., the video between those seconds is played at the speed calculated from p_t.
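The formula above can be written out directly in code. The function and parameter names are illustrative, and the default values are assumptions mirroring the [0.2, 5] multiplier range used as an example earlier:

```python
def play_speed(p_t, p_min=0.0, p_max=10.0, n_min=0.2, n_max=5.0, v0=1.0):
    # Linearly map the pressing pressure p_t from the sensor's detection
    # range [p_min, p_max] onto the multiplier range [n_min, n_max],
    # then scale by v0, the playing speed at multiplier 1.
    return ((n_max - n_min) * (p_t - p_min) / (p_max - p_min) + n_min) * v0
```

Under these assumed defaults, the lightest detectable press plays the video at 0.2x and the hardest at 5x, with a linear ramp in between.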
In one example, the terminal being continuously triggered may specifically be that a screen of the terminal is continuously pressed. In this case, a video segment played during a period in which the screen is continuously pressed may be regarded as a video segment that needs to be clipped. The pressure of the screen being pressed can be used as a determining factor of the playing speed of the video to be edited being played by the terminal, and the larger the pressing pressure is, the faster the playing speed is.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, in this embodiment the terminal can be continuously triggered whenever a clip is needed; the video segment played during the trigger period is treated as the segment to be clipped and is then clipped, so the user can clip any segment of interest at any moment while watching the video, improving the convenience of clipping. Because the segment is determined by the trigger period, there is no need to watch closely or drag the progress bar back and forth, and the start and end time points of the clip are determined more easily and quickly, increasing the speed of clipping. Furthermore, while watching an online video the user can complete the clipping of the segment played during the trigger period simply by continuously triggering the terminal, without downloading the video, which further improves convenience. In addition, in this embodiment the playback speed during clipping can be controlled by the pressure on the screen, which helps control the clipping speed according to actual needs.
A second embodiment of the present invention relates to a clipping method. The following describes implementation details of the clipping method of the present embodiment in detail, and the following is provided only for easy understanding and is not necessary for implementing the present embodiment.
A flowchart of the clipping method in the present embodiment is shown in fig. 2, and specifically includes:
step 201: acquiring the time period for which the screen of the terminal is continuously pressed.
For example, after entering clipping mode, the terminal may start timing when its built-in pressure sensor detects that the screen is pressed and stop timing when it detects that the press has ended, thereby obtaining the time period during which the screen was continuously pressed.
Step 202: and determining the video segments of the video to be clipped played in the time period.
That is, a video clip of a video to be clipped played by the terminal during a period in which the terminal is continuously pressed is determined.
Step 203: a starting video frame of a video segment is obtained.
Specifically, the starting video frame of the segment is the picture the terminal was playing at the instant the screen was first pressed, i.e., the first frame image of the segment. The terminal can extract this frame from the video segment.
Step 204: objects in the starting video frame that are located within the area where the screen is pressed are identified.
Specifically, the starting video frame may undergo instance segmentation, for example with the Mask R-CNN algorithm: the first frame image of the segment determined in step 202 is segmented to obtain the region each object occupies in the image, where the objects may be people, animals, plants, and so on. It can then be recognized which object lies within the pressed area of the first frame image, i.e., which object the touch point under the user's finger falls on. Note that Mask R-CNN is used here only as an example of instance segmentation; the specific implementation is not limited to it.
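The hit test that follows segmentation can be sketched as below, assuming the segmentation model has already produced one boolean mask per object; the function name, mask layout (rows of booleans), and object names are illustrative assumptions:

```python
def objects_at_press_point(instance_masks, press_xy):
    # instance_masks: mapping from object name to a 2-D boolean mask
    # (one per instance, e.g. as produced by an instance-segmentation
    # model such as Mask R-CNN).
    # press_xy: the (x, y) pixel coordinate of the touch point on the
    # starting video frame.
    # Returns every object whose mask covers the pressed pixel.
    x, y = press_xy
    return [name for name, mask in instance_masks.items() if mask[y][x]]
```

If the returned list has exactly one entry, that object is the object of interest; the multi-object case is handled by the selection rules described next.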
Step 205: and clipping each video frame containing the object in the video segment.
In one example, if the number of objects located in the area where the screen is pressed in the identified starting video frame is 1, the identified object may be taken as an object of interest to the user. During the period that the user continuously presses the screen, firstly, the face detection, the face recognition and the target detection can be carried out on each frame of video image played; the method comprises the steps of detecting a human face existing in a video image, identifying the detected human face, and detecting a target object except the human face. It is then determined whether each frame of video image contains an object of interest to the user. Finally, each video frame containing the object of interest to the user is clipped.
In one example, the appearance and disappearance time points of the object of interest may be determined from whether each pair of adjacent frames in the segment contains the object. For any two consecutive frame images, if the earlier frame contains the object of interest and the later one does not, the later frame's timestamp may be extracted and recorded as a disappearance time point; similarly, if the earlier frame does not contain the object and the later one does, the later frame's timestamp may be recorded as an appearance time point. If multiple pairs of appearance and disappearance time points are determined, meaning the object appears and disappears repeatedly in the segment played during the press, the frames played between each pair can be clipped separately: the frames from the first appearance point to the first disappearance point become video segment 1, those from the second appearance point to the second disappearance point become video segment 2, and so on, which helps quickly obtain the clips containing the object of interest.
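The adjacent-frame comparison amounts to a single scan over per-frame presence flags. This illustrative helper returns frame-index intervals rather than timestamps, and its name and input representation are assumptions for the sketch:

```python
def presence_intervals(contains_object):
    # contains_object: one boolean per frame, True when that frame
    # contains the object of interest.
    # Returns (first, last) frame-index pairs, one per appearance.
    intervals, start = [], None
    for i, present in enumerate(contains_object):
        if present and start is None:
            start = i                         # appearance point
        elif not present and start is not None:
            intervals.append((start, i - 1))  # disappearance point
            start = None
    if start is not None:                     # still visible at the end
        intervals.append((start, len(contains_object) - 1))
    return intervals
```

Each returned pair corresponds to one clipped sub-segment (video segment 1, video segment 2, and so on); mapping indices back to timestamps gives the clip boundaries.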
In one example, when multiple objects are identified within the pressed area of the starting video frame, the area each object occupies in the frame may be acquired and the object with the smallest area taken as the target object; each frame of the segment containing the target object is then clipped. For example, referring to fig. 3, suppose fig. 3 is the extracted starting video frame and the area actually pressed by the user is area A. Both a person and a dog lie within area A, and the area each occupies in the figure can be obtained separately. As fig. 3 shows, the dog occupies the smaller area of the two objects in area A, so the dog can be taken as the target object the user is really interested in. Finally, during clipping, each frame image played while the user pressed the screen can be checked for the dog of fig. 3, and every frame containing it is clipped, so that every frame of the clipped video contains the dog of fig. 3.
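The smallest-area rule can be sketched by counting mask pixels, reusing the boolean-mask representation assumed earlier; all names here are illustrative:

```python
def target_by_smallest_area(instance_masks, press_xy):
    # Among the objects whose mask covers the pressed pixel, return the
    # one occupying the smallest area (pixel count) in the starting frame.
    x, y = press_xy
    areas = {name: sum(sum(row) for row in mask)
             for name, mask in instance_masks.items()
             if mask[y][x]}
    return min(areas, key=areas.get)
```

In the fig. 3 example, the dog's mask covers fewer pixels than the person's, so the dog would be returned as the target object.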
In another example, when multiple objects are identified within the pressed area of the starting video frame, the object in the front-most layer may be selected as the target object, and each frame of the segment determined in step 202 that contains it is clipped. The front-layer object is more likely to be the one the user is really interested in, so selecting it as the target object and clipping each frame containing it makes it easier to obtain the segment the user actually wants.
In one example, the object in the front layer may be selected as follows: determine the pixel color of the region occupied by each object, extract the pixel color of the region where the objects overlap, compare the overlap region's pixel color with each object's region color, and take the object whose region color is most similar to that of the overlap region as the front-layer object. For example, referring to fig. 3, assume fig. 3 is the starting video frame and the pressed area is area A, which contains a person and a dog. The pixel colors of the regions occupied by the person and the dog can be obtained separately; suppose the person's region is black and the dog's region is brown. As fig. 3 shows, the regions occupied by the person and the dog overlap, and the pixel color of that overlap region can be extracted; suppose it is brown. The overlap region's color is then more similar to the dog's region, so the dog in fig. 3 can be selected as the front-layer object, that is, as the finally determined target object.
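A sketch of the pixel-color-similarity heuristic, simplified to comparing mean RGB colors with a squared Euclidean distance; the mean-color representation and distance metric are assumptions for illustration, not the patent's prescribed method:

```python
# Whichever object's color is closest to the overlap region's color is
# assumed to be in front (it occludes the other in the overlap).
def front_object(overlap_color, object_colors):
    """overlap_color: (r, g, b) mean color of the overlapping region.
    object_colors: dict label -> (r, g, b) mean color of that object's
    own region. Returns the label with the most similar color."""
    def dist(color):
        return sum((a - b) ** 2 for a, b in zip(overlap_color, color))
    return min(object_colors, key=lambda label: dist(object_colors[label]))
```

In the fig. 3 scenario, a brown overlap region compared against a black person region and a brown dog region selects the dog.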
It should be noted that the two examples above merely illustrate how to determine one target object among multiple objects; the specific implementation is not limited to them. Any manner of determining a target object among multiple objects falls within the scope of this embodiment.
It should also be noted that the examples in this embodiment are merely illustrative for ease of understanding and do not limit the technical solution of the present invention.
Compared with the prior art, in this embodiment the continuous trigger on the terminal is embodied as a continuous press on the terminal's screen. At the moment the screen is first pressed, the object within the pressed area of the video is taken as the object of interest to the user, i.e., the target object, and every frame of the clipped video clip contains that target object, so a video clip containing the object of interest can be obtained conveniently and accurately. In addition, objects may overlap, i.e., the pressed area of the starting video frame may contain multiple objects; since the object with the smallest area is the most likely to be the one the user is truly interested in, taking it as the target object and clipping each frame that contains it further helps clip the video the user actually wants. Moreover, this embodiment also provides another way, based on pixel-color similarity, of determining which of multiple objects the user is likely interested in, making the implementation flexible and varied.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a step may be split into multiple steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is preserved. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes without altering its core design, likewise falls within the protection scope of this patent.
A third embodiment of the present invention relates to an electronic device which, as shown in fig. 4, includes at least one processor 301 and a memory 302 communicatively connected to the at least one processor 301. The memory 302 stores instructions executable by the at least one processor 301, and the instructions, when executed by the at least one processor 301, enable the at least one processor 301 to perform the clipping method of the first or second embodiment.
The memory 302 and the processor 301 are connected by a bus, which may comprise any number of interconnected buses and bridges linking the various circuits of the processor 301 and the memory 302. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or several, e.g., multiple receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium via an antenna, which also receives data and forwards it to the processor 301.
The processor 301 is responsible for managing the bus and for general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 302 may be used to store data used by the processor 301 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method embodiments described above.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (8)

1. A clipping method, comprising:
acquiring a time period during which a screen of a terminal is continuously pressed, wherein the terminal is playing a video to be clipped;
determining a video segment of the video to be clipped played in the time period;
acquiring a starting video frame of the video clip;
identifying an object in the starting video frame that is located within the area where the screen is pressed;
if there are a plurality of objects located within the pressed area of the screen in the starting video frame, determining the pixel color of the area occupied by each object;
extracting pixel colors of an overlapping area occupied by each object;
comparing the pixel color of the overlapping area with the pixel color of the area occupied by each object, and taking the object in the area with the highest pixel color similarity with the overlapping area as a target object;
and clipping each video frame containing the target object in the video segment to obtain a clipped video.
2. The clipping method according to claim 1, wherein the video to be clipped is an online video.
3. The clipping method according to claim 1, wherein the clipping of each video frame containing the target object in the video segment comprises:
determining the appearance time point and the disappearance time point of the target object according to whether each pair of adjacent video frames in the video segment contains the target object;
and if multiple groups of appearance time points and disappearance time points are determined, clipping the video frames played between each group of appearance time points and disappearance time points.
4. The clipping method according to claim 1, further comprising:
if the screen of the terminal is detected to be pressed, obtaining the pressing pressure when the screen is pressed;
acquiring a playing speed corresponding to the pressing pressure according to the pressing pressure;
and playing the video to be clipped at the acquired playing speed.
5. The clipping method according to claim 4, wherein the acquiring of the playing speed corresponding to the pressing pressure according to the pressing pressure specifically comprises:
calculating the playing speed corresponding to the pressing pressure according to the pressing pressure, a preset playing speed multiplier range, and the pressure detection range of the terminal.
6. The clipping method according to claim 5, wherein the calculating of the playing speed corresponding to the pressing pressure according to the pressing pressure, the preset playing speed multiplier range, and the pressure detection range of the terminal specifically comprises:
calculating the playing speed corresponding to the pressing pressure by the following formula:
v_t = ((N_max - N_min)(p_t - p_min)/(p_max - p_min) + N_min) * v_0
where v_t is the calculated playing speed, p_t is the pressing pressure, N_max and N_min are respectively the upper and lower limit values of the playing speed multiplier range, p_max and p_min are respectively the upper and lower limit values of the pressure detection range, and v_0 is the playing speed corresponding to a speed multiplier of 1.
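For illustration only (not part of the claim language), the linear mapping of claim 6 can be written and checked numerically; the pressure and multiplier range values below are assumed, not taken from the patent:

```python
# Linear mapping of pressing pressure onto the playback-speed
# multiplier range, per the claim-6 formula.
def play_speed(p_t, p_min, p_max, n_min, n_max, v_0=1.0):
    """v_t = ((N_max - N_min)(p_t - p_min)/(p_max - p_min) + N_min) * v_0"""
    return ((n_max - n_min) * (p_t - p_min) / (p_max - p_min) + n_min) * v_0
```

With an assumed pressure range of 0.0–1.0 and a multiplier range of 1x–3x, the minimum pressure maps to 1x speed, the midpoint to 2x, and the maximum to 3x.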
7. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the clipping method according to any one of claims 1 to 6.
8. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the clipping method of any one of claims 1 to 6.
CN201910800267.1A 2019-08-28 2019-08-28 Clipping method, electronic device, and computer-readable storage medium Active CN110703976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910800267.1A CN110703976B (en) 2019-08-28 2019-08-28 Clipping method, electronic device, and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN110703976A CN110703976A (en) 2020-01-17
CN110703976B (en) 2021-04-13

Family

ID=69193539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910800267.1A Active CN110703976B (en) 2019-08-28 2019-08-28 Clipping method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110703976B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447505B (en) * 2020-03-09 2022-05-31 咪咕文化科技有限公司 Video clipping method, network device, and computer-readable storage medium
CN111524076B (en) * 2020-04-07 2023-07-21 咪咕文化科技有限公司 Image processing method, electronic device, and computer-readable storage medium
CN111556328A (en) 2020-04-17 2020-08-18 北京达佳互联信息技术有限公司 Program acquisition method and device for live broadcast room, electronic equipment and storage medium
CN113115106B (en) * 2021-03-31 2023-05-05 影石创新科技股份有限公司 Automatic editing method, device, terminal and storage medium for panoramic video

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105307051A (en) * 2015-05-04 2016-02-03 维沃移动通信有限公司 Video processing method and device
CN105657537A (en) * 2015-12-23 2016-06-08 小米科技有限责任公司 Video editing method and device
CN108604378A (en) * 2015-11-30 2018-09-28 斯纳普公司 The image segmentation of video flowing and modification
CN109076263A (en) * 2017-12-29 2018-12-21 深圳市大疆创新科技有限公司 Video data handling procedure, equipment, system and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20160366330A1 (en) * 2015-06-11 2016-12-15 Martin Paul Boliek Apparatus for processing captured video data based on capture device orientation
CN106341725A (en) * 2016-09-27 2017-01-18 北京小米移动软件有限公司 Video message processing method and device for electronic apparatus
CN106970762A (en) * 2017-03-31 2017-07-21 联想(北京)有限公司 A kind of method for processing video frequency and electronic equipment
CN107124662B (en) * 2017-05-10 2022-03-18 腾讯科技(上海)有限公司 Video live broadcast method and device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN110703976A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110703976B (en) Clipping method, electronic device, and computer-readable storage medium
US11237717B2 (en) Information processing device and information processing method
US9786326B2 (en) Method and device of playing multimedia and medium
CN107801096B (en) Video playing control method and device, terminal equipment and storage medium
CN110225369B (en) Video selective playing method, device, equipment and readable storage medium
CN109168037B (en) Video playing method and device
US9342516B2 (en) Media presentation playback annotation
CN108337532A (en) Perform mask method, video broadcasting method, the apparatus and system of segment
CN111147955B (en) Video playing method, server and computer readable storage medium
CN111131884B (en) Video clipping method, related device, equipment and storage medium
CN111371988B (en) Content operation method, device, terminal and storage medium
US10148993B2 (en) Method and system for programmable loop recording
US10257436B1 (en) Method for using deep learning for facilitating real-time view switching and video editing on computing devices
CN112887480B (en) Audio signal processing method and device, electronic equipment and readable storage medium
CN110691281B (en) Video playing processing method, terminal device, server and storage medium
EP2860968A1 (en) Information processing device, information processing method, and program
US20110096994A1 (en) Similar image retrieval system and similar image retrieval method
CN104995639A (en) Terminal and method for managing video file
CN111182359A (en) Video preview method, video frame extraction method, video processing device and storage medium
WO2020033612A1 (en) Event recording system and method
US20220328071A1 (en) Video processing method and apparatus and terminal device
CN114025242A (en) Video processing method, video processing device and electronic equipment
JP6214762B2 (en) Image search system, search screen display method
CN109101964A (en) Determine the method, equipment and storage medium in head and the tail region in multimedia file
CN115988259A (en) Video processing method, device, terminal, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant