CN109361956B - Time-based video cropping methods and related products - Google Patents


Info

Publication number
CN109361956B
CN109361956B (application CN201811400191.5A, published as CN201811400191A)
Authority
CN
China
Prior art keywords
video
region
lines
time
line
Prior art date
Legal status
Active
Application number
CN201811400191.5A
Other languages
Chinese (zh)
Other versions
CN109361956A (en
Inventor
张磊
Current Assignee
Guangxi Baduo Media Technology Co.,Ltd.
Original Assignee
Guangxi Baduo Media Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangxi Baduo Media Technology Co ltd filed Critical Guangxi Baduo Media Technology Co ltd
Priority to CN201811400191.5A priority Critical patent/CN109361956B/en
Publication of CN109361956A publication Critical patent/CN109361956A/en
Application granted granted Critical
Publication of CN109361956B publication Critical patent/CN109361956B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a time-based video cropping method and related products. The method comprises the following steps: a mobile terminal receives a user's short-video shooting request and performs short-video shooting to obtain a first video file; the mobile terminal receives a splicing request selected by the user, the splicing request comprising a plurality of time periods to be spliced; and the mobile terminal splices the video segments within the plurality of time periods in chronological order to obtain a second video. The technical solution provided by this application improves user experience.

Description

Time-based video cropping methods and related products
Technical Field
The invention relates to the technical field of culture media, in particular to a time-based video cutting method and a related product.
Background
A short video is an internet content format: video content, generally under one minute long, distributed on new internet media.
At present, a short video can only be compressed after being shot; it cannot be cropped the way ordinary video can, which degrades the result and the user's experience.
Disclosure of Invention
The embodiments of the invention provide a time-based video cropping method and related products, which can crop a shot video and thereby improve user experience.
In a first aspect, an embodiment of the present invention provides a method for time-based video cropping, where the method includes the following steps:
the mobile terminal receives a user's short-video shooting request and performs short-video shooting to obtain a first video file;
the mobile terminal receives a splicing request selected by the user, wherein the splicing request comprises a plurality of time periods to be spliced;
and the mobile terminal splices the video segments within the plurality of time periods in chronological order to obtain a second video.
Optionally, the splicing by the mobile terminal of the video segments within the plurality of time periods in chronological order to obtain the second video specifically includes:
determining the plurality of time periods, cutting the first video according to these time periods into multiple video segments, numbering the segments in time-period order and caching them, and then splicing the cached segments in ascending order of their numbers to obtain the second video.
Optionally, the method further includes:
and compressing the second video, according to a compression time set by the user, into a third video of that set duration, and uploading the third video.
Optionally, the method further includes:
and determining a second fluency of the second video and a first fluency of the first video, and deleting the second video if the first fluency is greater than the second fluency.
In a second aspect, a terminal is provided, which includes: a camera, a processor and a display screen,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting the background music of the first video file and determining a first time period and a second time period corresponding to the same passage in the background music;
the display screen is also used for receiving a splicing request selected by a user,
the processor is further configured to determine the time periods for the splicing, and to splice the video segments within the plurality of time periods in chronological order to obtain a second video.
Optionally, the processor is specifically configured to determine the plurality of time periods, cut the first video according to these time periods into multiple video segments, number the segments in time-period order and cache them, and then splice the cached segments in ascending order of their numbers to obtain the second video.
Optionally, the processor is further configured to compress the second video, according to a compression time set by the user, into a third video of that set duration, and to upload the third video.
Optionally, the processor is further configured to determine a second fluency of the second video and a first fluency of the first video, and to delete the second video if the first fluency is greater than the second fluency.
Optionally, the terminal is a tablet computer or a smart phone.
In a third aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
according to the technical scheme, after the shot first video file is determined, the splicing request of the user for the first video is received, the first video is cut into the multiple sections of videos according to the multiple time periods corresponding to the splicing request, and then the multiple sections of videos are combined to obtain the second video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a mobile terminal.
Fig. 2 is a flow chart diagram of a time-based video cropping method.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a mobile terminal, which may specifically be a smart phone running, for example, an iOS or Android system. The mobile terminal may specifically include: a processor, a memory, a camera and a display screen, where these components may be connected through a bus or in other ways; this application does not limit the specific connection.
Referring to fig. 2, fig. 2 provides a time-based video cropping method, which is performed by the mobile terminal shown in fig. 1, as shown in fig. 2, and which includes the following steps:
step S201, the mobile terminal receives a user's short-video shooting request and performs short-video shooting to obtain a first video file;
step S202, the mobile terminal receives a splicing request selected by the user, wherein the splicing request comprises a plurality of time periods to be spliced;
and step S203, the mobile terminal splices the video segments within the plurality of time periods in chronological order to obtain a second video.
In the above technical solution, after the shot first video file is determined, the user's splicing request for the first video is received, the first video is cut into multiple segments according to the time periods corresponding to the splicing request, and the segments are then combined to obtain the second video, thereby achieving time-based video cropping and improving user experience.
The implementation of step S203 may specifically include:
determining the plurality of time periods, cutting the first video according to these time periods into multiple video segments, numbering the segments in time-period order and caching them, and then splicing the cached segments in ascending order of their numbers to obtain the second video.
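The cut-number-cache-splice step can be sketched in Python. This is a toy model over an in-memory list of timestamped frames rather than a real video codec, and all names (`splice_by_periods`, the frame representation) are illustrative assumptions, not part of the patent:

```python
def splice_by_periods(frames, periods):
    """Cut a video (list of (t, frame) pairs) into segments by time
    period, number the segments in time-period order, cache them, then
    concatenate in ascending number order to form the second video."""
    cache = {}
    # Number segments by the chronological order of their time periods.
    for number, (start, end) in enumerate(sorted(periods)):
        cache[number] = [(t, f) for (t, f) in frames if start <= t < end]
    # Splice cached segments from the smallest number to the largest.
    second_video = []
    for number in sorted(cache):
        second_video.extend(cache[number])
    return second_video

# Toy "first video": one frame per second for 10 s.
first_video = [(t, f"frame{t}") for t in range(10)]
# Keep seconds [1, 3) and [6, 8); output is in chronological order.
second_video = splice_by_periods(first_video, [(6, 8), (1, 3)])
print([t for t, _ in second_video])  # [1, 2, 6, 7]
```

Sorting the periods before numbering is what enforces the "according to the time sequence" requirement even when the user selects the periods out of order.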
Optionally, the method may further include:
and compressing the second video, according to a compression time set by the user, into a third video of that set duration, and uploading the third video.
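The patent does not specify how the second video is shortened to the user-set duration; one plausible reading is uniform frame sampling at an unchanged frame rate, sketched below with illustrative names:

```python
def compress_to_duration(frames, fps, target_seconds):
    """Shrink a frame sequence so that, at the same fps, it plays in
    target_seconds, by sampling frames uniformly across the original."""
    target_count = int(target_seconds * fps)
    if target_count >= len(frames):
        return list(frames)  # already within the set duration
    step = len(frames) / target_count
    return [frames[int(i * step)] for i in range(target_count)]

# 300 frames at 30 fps = 10 s; compress to a user-set 4 s.
third_video = compress_to_duration(list(range(300)), fps=30, target_seconds=4)
print(len(third_video))  # 120
```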
Optionally, the method may further include:
and determining a second fluency of the second video and a first fluency of the first video, and deleting the second video if the first fluency is greater than the second fluency.
Fluency may be determined as follows: adjacent frames of the first video are compared to count a first number of identical adjacent-frame pairs, and adjacent frames of the second video are compared to count a second number of identical adjacent-frame pairs; if the first number is smaller than the second number, the first fluency is determined to be greater than the second fluency.
The principle is that, in video shooting, the more continuous the motion, the smaller the probability that adjacent frames are identical; conversely, when the motion goes wrong, the subject pauses briefly at the faulty motion, so identical adjacent frames appear, and the more such cases there are, the poorer the fluency.
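The fluency comparison reduces to counting identical adjacent-frame pairs. A minimal sketch, treating a frame as any hashable value and assuming exact equality as the "consistent image frames" test (the patent leaves the comparison method open):

```python
def count_identical_adjacent(frames):
    """Count adjacent frame pairs that are exactly identical; a pause
    during a faulty motion produces such repeated frames."""
    return sum(1 for a, b in zip(frames, frames[1:]) if a == b)

def first_is_smoother(first_frames, second_frames):
    """The first video is judged more fluent when it has fewer
    identical adjacent-frame pairs than the second."""
    return count_identical_adjacent(first_frames) < count_identical_adjacent(second_frames)

smooth = ["a", "b", "c", "d"]       # continuous motion: no repeats
paused = ["a", "a", "b", "b", "c"]  # pauses repeat frames
print(first_is_smoother(smooth, paused))  # True
```

In a real pipeline the equality test would be replaced by a tolerant image difference (e.g. a pixel-difference threshold), since consecutive camera frames are rarely bit-identical.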
Optionally, the method may further include:
and flattening the video frames in the second video that contain a double chin.
The above flattening of a video frame with a double chin may specifically include:
extracting the p-th video frame of the second video and obtaining its grain lines; determining a region containing 2 adjacent grain lines whose spacing is less than a set distance as a region to be determined; constructing λ equidistant lines in the region to be determined, extracting the pictures between the equidistant lines, and comparing them for consistency (the comparison can use an existing image-comparison method). If the pictures are consistent (in a non-double-chin region they match), the region to be determined is a non-double-chin region. If they are inconsistent (in a double-chin region the flesh is unevenly distributed), the region to be determined is a double-chin region; in that case, an upper equidistant line is constructed in the region above the double-chin region, a picture γ between the upper equidistant line and the upper grain line is extracted, the picture γ is extended downward λ-1 times (i.e., the strip at each equidistant line is replaced with γ), and the double-chin region is replaced to complete the flattening.
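A loose sketch of the strip-comparison-and-replace step on a toy grayscale region (a list of pixel rows between two grain lines). The γ extension is approximated here by tiling the row taken from just above the region; the function name, `lam` (λ), and the data layout are all illustrative assumptions:

```python
def flatten_region(rows, lam, gamma_row):
    """Split the region into lam equal strips. If all strips match,
    the region is not a double chin and is left unchanged; otherwise
    replace every row with gamma_row, the picture from just above the
    region, approximating the downward extension of gamma."""
    strip_h = len(rows) // lam
    strips = [rows[i * strip_h:(i + 1) * strip_h] for i in range(lam)]
    if all(s == strips[0] for s in strips[1:]):
        return rows  # consistent strips: non-double-chin region
    return [list(gamma_row) for _ in rows]  # tile gamma over the region

uneven = [[10, 10], [10, 10], [40, 40], [40, 40]]  # unevenly distributed flesh
gamma = [10, 10]                                   # picture above the region
print(flatten_region(uneven, lam=2, gamma_row=gamma))
# four copies of gamma: [[10, 10], [10, 10], [10, 10], [10, 10]]
```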
Obtaining the grain lines of the p-th video frame may specifically include:
marking the RGB values of all pixel points of the p-th video frame with different colors, and finding line segments whose color differs within an otherwise same-colored region; if the number of such differently colored line segments in one same-colored region exceeds 2, obtaining the distance between them, and determining them to be grain lines if the distance is smaller than a set distance threshold.
The rationale is that a grain line divides the skin into several areas; because these areas are all skin, their RGB values are the same, so they receive the same color when marked. The grain line itself, however, differs in RGB value from the skin because of its fold and the lighting, so after marking its color is inconsistent with the color of the skin area, and it forms a line segment. In addition, a double chin produces at least 2 grain lines, so their number also exceeds 2. Grain lines can thus be distinguished by this method.
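The detection idea can be sketched on a single quantised pixel column, where each row carries one color label. The dominant label is taken as skin and off-skin rows that lie close together are flagged as grain lines; the function name, `max_gap`, and the one-column simplification are illustrative assumptions:

```python
def find_grain_lines(row_colors, max_gap):
    """Flag grain lines in one quantised pixel column. Rows whose
    colour differs from the dominant (skin) colour are candidate
    line segments; candidates within max_gap rows of each other are
    taken to be grain lines."""
    skin = max(set(row_colors), key=row_colors.count)  # dominant colour = skin
    candidates = [i for i, c in enumerate(row_colors) if c != skin]
    lines = []
    for a, b in zip(candidates, candidates[1:]):
        if b - a <= max_gap:  # distance below the set threshold
            lines.extend([a, b])
    return sorted(set(lines))

# Skin ("S") with two dark fold rows ("D") three rows apart.
column = ["S", "S", "D", "S", "S", "D", "S", "S"]
print(find_grain_lines(column, max_gap=4))  # [2, 5]
```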
Referring to fig. 3, fig. 3 provides a terminal including: a camera, a processor and a display screen,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting the background music of the first video file and determining a first time period and a second time period corresponding to the same passage in the background music;
the display screen is also used for receiving a splicing request selected by a user,
the processor is further configured to determine the time periods for the splicing, and to splice the video segments within the plurality of time periods in chronological order to obtain a second video.
Embodiments of the present invention also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the time-based video cutting methods as recited in the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the time-based video cropping methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above embodiments of the present invention are described in detail, and the principle and the implementation of the present invention are explained by applying specific embodiments, and the above description of the embodiments is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for time-based video cropping, said method comprising the steps of:
the mobile terminal receives a user's short-video shooting request and performs short-video shooting to obtain a first video file;
the mobile terminal receives a splicing request selected by the user, wherein the splicing request comprises a plurality of time periods to be spliced;
the mobile terminal splices the video segments within the plurality of time periods in chronological order to obtain a second video;
flattening the video frames in the second video that contain a double chin, which specifically comprises:
extracting the p-th video frame of the second video and obtaining its grain lines; determining a region containing 2 adjacent grain lines whose spacing is less than a set distance as a region to be determined; constructing λ equidistant lines in the region to be determined, extracting the pictures between the equidistant lines, and comparing them for consistency; determining the region to be determined to be a non-double-chin region if the pictures are consistent; determining the region to be determined to be a double-chin region if the pictures are inconsistent, constructing an upper equidistant line in the region above the double-chin region, extracting a picture γ between the upper equidistant line and the upper grain line, extending the picture γ downward λ-1 times, and replacing the double-chin region to complete the flattening.
2. The method according to claim 1, wherein the splicing by the mobile terminal of the video segments within the plurality of time periods in chronological order to obtain the second video specifically comprises:
determining the plurality of time periods, cutting the first video according to these time periods into multiple video segments, numbering the segments in time-period order and caching them, and then splicing the cached segments in ascending order of their numbers to obtain the second video.
3. The method of claim 1, further comprising:
and compressing the second video, according to a compression time set by the user, into a third video of that set duration, and uploading the third video.
4. The method of claim 3, further comprising:
and determining a second fluency of the second video and a first fluency of the first video, and deleting the second video if the first fluency is greater than the second fluency.
5. A terminal, the terminal comprising: a camera, a processor and a display screen, which is characterized in that,
the camera is used for executing short video shooting to obtain a first video file when the display screen acquires a short video shooting request of a user;
the processor is used for extracting the background music of the first video file and determining a first time period and a second time period corresponding to the same passage in the background music;
the display screen is also used for receiving a splicing request selected by a user,
the processor is further configured to determine the time periods for the splicing, and to splice the video segments within the plurality of time periods in chronological order to obtain a second video;
and to flatten the video frames in the second video that contain a double chin, which specifically comprises:
extracting the p-th video frame of the second video and obtaining its grain lines; determining a region containing 2 adjacent grain lines whose spacing is less than a set distance as a region to be determined; constructing λ equidistant lines in the region to be determined, extracting the pictures between the equidistant lines, and comparing them for consistency; determining the region to be determined to be a non-double-chin region if the pictures are consistent; determining the region to be determined to be a double-chin region if the pictures are inconsistent, constructing an upper equidistant line in the region above the double-chin region, extracting a picture γ between the upper equidistant line and the upper grain line, extending the picture γ downward λ-1 times, and replacing the double-chin region to complete the flattening.
6. The terminal of claim 5,
the processor is specifically configured to determine the plurality of time periods, cut the first video according to these time periods into multiple video segments, number the segments in time-period order and cache them, and then splice the cached segments in ascending order of their numbers to obtain the second video.
7. The terminal of claim 5,
and the processor is further configured to compress the second video, according to a compression time set by the user, into a third video of that set duration, and to upload the third video.
8. The terminal of claim 5,
the processor is further configured to determine a second fluency of the second video and a first fluency of the first video, and to delete the second video if the first fluency is greater than the second fluency.
9. The terminal according to any of claims 5-8, wherein the terminal is a tablet computer or a smart phone.
10. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to perform the method as provided in any one of claims 1-4.
CN201811400191.5A 2018-11-22 2018-11-22 Time-based video cropping methods and related products Active CN109361956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400191.5A CN109361956B (en) 2018-11-22 2018-11-22 Time-based video cropping methods and related products


Publications (2)

Publication Number Publication Date
CN109361956A CN109361956A (en) 2019-02-19
CN109361956B true CN109361956B (en) 2021-04-30

Family

ID=65338339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400191.5A Active CN109361956B (en) 2018-11-22 2018-11-22 Time-based video cropping methods and related products

Country Status (1)

Country Link
CN (1) CN109361956B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10192583B2 (en) * 2014-10-10 2019-01-29 Samsung Electronics Co., Ltd. Video editing using contextual data and content discovery using clusters
CN104883607B (en) * 2015-06-05 2017-12-19 广东欧珀移动通信有限公司 A kind of video interception or the method, apparatus and mobile device of shearing
CN106791933B (en) * 2017-01-20 2019-11-12 杭州当虹科技股份有限公司 The method and system of online quick editor's video based on web terminal
CN107256117A (en) * 2017-04-19 2017-10-17 上海卓易电子科技有限公司 The method and its mobile terminal of a kind of video editing

Also Published As

Publication number Publication date
CN109361956A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN106657197B (en) File uploading method and device
CN110121098B (en) Video playing method and device, storage medium and electronic device
CN111553362B (en) Video processing method, electronic device and computer readable storage medium
CN107147939A (en) Method and apparatus for adjusting net cast front cover
CN108492338B (en) Compression method and device for animation file, storage medium and electronic device
US20180373736A1 (en) Method and apparatus for storing resource and electronic device
CN107295352B (en) Video compression method, device, equipment and storage medium
CN110876078B (en) Animation picture processing method and device, storage medium and processor
CN111158619A (en) Picture processing method and device
CN115150371B (en) Service processing method, system and storage medium based on cloud platform
CN112532998B (en) Method, device and equipment for extracting video frame and readable storage medium
CN108958592B (en) Video processing method and related product
CN115103175B (en) Image transmission method, device, equipment and medium
WO2018216929A1 (en) Methods and systems for saving data while streaming video
CN114598919A (en) Video processing method, video processing device, computer equipment and storage medium
CN109361956B (en) Time-based video cropping methods and related products
CN112929728A (en) Video rendering method, device and system, electronic equipment and storage medium
CN115019138A (en) Video subtitle erasing, model training and interaction method, device and storage medium
CN109640170B (en) Speed processing method of self-shooting video, terminal and storage medium
CN109547850B (en) Video shooting error correction method and related product
CN113989442B (en) Building information model construction method and related device
CN113095058B (en) Method and device for processing page turning of streaming document, electronic equipment and storage medium
CN109242763B (en) Picture processing method, picture processing device and terminal equipment
CN113870093A (en) Image caching method and device, electronic equipment and storage medium
CN109784226B (en) Face snapshot method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210402

Address after: 3075, 3rd floor, building 5, plot 4, Tianyu garden, No.16 Qinglin Road, Nanning District, Nanning City, Guangxi Zhuang Autonomous Region

Applicant after: Guangxi Baduo Media Technology Co.,Ltd.

Address before: 518003 4K, building B, jinshanghua, No.45, Jinlian Road, Huangbei street, Luohu District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN YIDA CULTURE MEDIA Co.,Ltd.

GR01 Patent grant