CN106412702B - Video clip intercepting method and device - Google Patents

Video clip intercepting method and device

Info

Publication number
CN106412702B
CN106412702B · Application CN201510448280.7A
Authority
CN
China
Prior art keywords
video
target
file
video data
resolution
Prior art date
Legal status
Active
Application number
CN201510448280.7A
Other languages
Chinese (zh)
Other versions
CN106412702A (en)
Inventor
陈俊峰
Current Assignee
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510448280.7A priority Critical patent/CN106412702B/en
Priority to PCT/CN2016/085994 priority patent/WO2017016339A1/en
Priority to MYPI2017704144A priority patent/MY190923A/en
Publication of CN106412702A publication Critical patent/CN106412702A/en
Priority to US15/729,439 priority patent/US10638166B2/en
Application granted granted Critical
Publication of CN106412702B publication Critical patent/CN106412702B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4408Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network

Abstract

The embodiment of the invention discloses a video clip intercepting method and terminal for improving the processing efficiency of video clip interception. In the video interception method provided by the embodiment of the invention, a video interception instruction sent by a user through the current playing terminal is received; starting from the moment the playing time reaches the interception start time point, decoded video data corresponding to the video interception area in the video file currently being played is acquired according to the video interception instruction, and the acquisition stops when the playing time reaches the interception end time point; starting from the interception end time point, file format encoding is performed on the acquired decoded video data according to the video interception instruction to generate a video segment intercepted from the video file; the video clip is then output according to the target use.

Description

Video clip intercepting method and device
Technical Field
The invention relates to the technical field of computers, in particular to a video clip intercepting method and device.
Background
In recent years, multimedia information technology has developed rapidly, and users are increasingly accustomed to playing videos on handheld terminals. When a user becomes interested in a certain segment while watching a video, the user needs to intercept that segment from the played video and store it. However, the prior art only offers an image-capturing scheme: the user can submit a screenshot command to the terminal, and the terminal must stop the currently played video and store the video image at which playback stopped.
In order to realize interception of video segments, Chinese patent application publication No. CN103747362A discloses a method and an apparatus for intercepting video segments. The method disclosed in that patent comprises the following steps: receiving an interception start command, and acquiring the video image being played from the currently played video according to preset video interception parameters; intercepting the acquired video image to obtain a screenshot of it; and receiving an interception end command, and generating a first video clip from the screenshots of the video images according to the order in which the video images were intercepted and the preset video interception parameters. In this prior-art scheme, a plurality of video images are intercepted and combined in the order in which they were captured to finally obtain the intercepted video clip.
The inventor of the present invention found the following: the existing scheme combines screenshots of multiple video images to obtain the intercepted video clip, so it is only suitable for clips that span a short time and are assembled from only a few images. If the user needs to intercept a video clip with a large time span, a large number of video images must be captured under that method, so the interception efficiency is very low. In addition, in the above prior art the frequency of capturing video images is difficult to control: if the capture frequency is low, playback of the combined video segment will be discontinuous, and if the capture frequency is high, many video images will be captured, which is complicated to process.
Disclosure of Invention
The embodiment of the invention provides a video clip intercepting method and a video clip intercepting device, which are used for improving the video clip intercepting processing efficiency.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a method for intercepting a video segment, including:
receiving a video interception instruction sent by a user through a current playing terminal, wherein the video interception instruction comprises: an interception start time point and an interception end time point determined by the user for the video to be intercepted, a video interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user;
starting from the moment the playing time reaches the interception start time point, acquiring decoded video data corresponding to the video interception area in the video file currently being played according to the video interception instruction, and stopping the acquisition when the playing time reaches the interception end time point;
starting from the interception end time point, performing file format encoding on the acquired decoded video data according to the video interception instruction to generate a video segment intercepted from the video file;
outputting the video clip according to the target use.
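The four steps above can be sketched in code. The following is a minimal, non-authoritative outline (the class and field names are assumptions for illustration, not the patent's implementation): decoded frames whose playing time falls inside the interception window are collected, then handed to a stand-in encoding step.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureInstruction:
    """Contents of the video interception instruction described above."""
    start_ms: int      # interception start time point (milliseconds)
    end_ms: int        # interception end time point
    region: tuple      # (x, y, w, h) interception area in the playing interface
    target_use: str    # e.g. "archive" or "share"

@dataclass
class ClipCapturer:
    instruction: CaptureInstruction
    frames: list = field(default_factory=list)

    def on_frame(self, play_time_ms: int, decoded_frame) -> None:
        # Step 2: collect decoded frames while the playing time lies
        # inside the interception window [start, end].
        ins = self.instruction
        if ins.start_ms <= play_time_ms <= ins.end_ms:
            self.frames.append(decoded_frame)

    def finish(self) -> dict:
        # Steps 3-4: stand-in for file-format encoding and output
        # according to the target use.
        return {"frames": self.frames, "use": self.instruction.target_use}
```

A real implementation would replace `finish` with an actual file-format encoder; the sketch only shows how the interception window gates frame collection.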
In a second aspect, an embodiment of the present invention further provides an apparatus for intercepting a video segment, including:
the receiving module is used for receiving a video interception instruction sent by a user through a current playing terminal, wherein the video interception instruction comprises: an interception start time point and an interception end time point determined by the user for the video to be intercepted, a video interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user;
the video data acquisition module is used for acquiring, starting from the moment the playing time reaches the interception start time point, decoded video data corresponding to the video interception area in the video file currently being played according to the video interception instruction, and stopping the acquisition when the playing time reaches the interception end time point;
the file encoding module is used for performing, starting from the interception end time point, file format encoding on the acquired decoded video data according to the video interception instruction to generate a video segment intercepted from the video file;
and the video clip output module is used for outputting the video clip according to the target use.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, when a user sends a video interception instruction through the current playing terminal, the instruction is first received. It may comprise an interception start time point and an interception end time point, a video interception area defined by the user, and a target use selected by the user. After the playing interface in the terminal starts playing a video file, decoded video data corresponding to the video interception area in the video file currently being played is acquired once the playing time reaches the interception start time point, and continues to be acquired until the interception end time point is reached, so that a series of decoded video data is obtained according to the video interception instruction. After the interception end time point is reached, the acquired decoded video data is file-format encoded according to the video interception instruction, thereby generating the video clip intercepted from the video file, which can then be output according to the target use selected by the user. In the invention, the video segment to be intercepted is obtained by acquiring the decoded video data corresponding to the video file being played and then performing file format encoding on that data, rather than by capturing and combining a plurality of video images.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a video segment capturing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating an obtaining manner of a video capture area according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an intercepting process of a video clip according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process for decoding video data according to an embodiment of the present invention;
fig. 5-a is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 5-b is a schematic structural diagram of a video data acquisition module according to an embodiment of the present invention;
fig. 5-c is a schematic structural diagram of another apparatus for capturing video segments according to an embodiment of the present invention;
fig. 5-d is a schematic structural diagram of another apparatus for capturing video segments according to an embodiment of the present invention;
fig. 5-e is a schematic structural diagram of another apparatus for capturing video segments according to an embodiment of the present invention;
fig. 5-f is a schematic structural diagram of another video segment capturing device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a composition of a terminal to which the video clip intercepting method according to the embodiment of the present invention is applied.
Detailed Description
The embodiment of the invention provides a video clip intercepting method and a video clip intercepting device, which are used for improving the video clip intercepting processing efficiency.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one skilled in the art from the embodiments given herein are intended to be within the scope of the invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The following are detailed below.
Referring to fig. 1, an embodiment of the method for capturing a video clip according to the present invention can be specifically applied to a scene where a video clip needs to be captured in a terminal for playing a video, and the method for capturing a video clip according to an embodiment of the present invention includes the following steps:
101. and receiving a video interception instruction sent by a user through the current playing terminal.
Wherein, the video intercepting instruction comprises: the video capturing method comprises the following steps of capturing a starting time point and a capturing ending time point, a video capturing area defined in a playing interface of a current playing terminal by a user and a target use selected by the user.
In the embodiment of the invention, when a user plays a video on a terminal and sees a segment that he or she finds interesting, the user can operate a video interception button on the terminal to trigger the terminal to intercept a video segment. Without limitation, in the embodiment of the present invention the user may directly determine the time span of the video to be intercepted and then send a video interception instruction to the terminal. The instruction includes an interception start time point and an interception end time point, from which the terminal can determine at which time point to start intercepting and for how long; the duration of the video to be intercepted is determined by the interception start time point and the interception end time point.
In addition, in the embodiment of the present invention, when a user needs to intercept only a partial picture area of the playing interface of the current playing terminal rather than the video picture of the whole playing interface, the user may define a video interception area in the playing interface; the picture outside that area is then not intercepted, and the user equipment carries the video interception area defined by the user in the video interception instruction. The user can also select a target use through the video interception instruction to indicate how the intercepted video clip should be output: for example, the user may archive the intercepted clip, or share it in QQ Space or WeChat after archiving. The target use indicates the specific purpose for which the user needs the clip output, so the video clip obtained by interception in the invention can meet the user's requirement on the target use.
In some embodiments of the present invention, besides the interception start time point, the interception end time point, the video interception area defined by the user, and the target use selected by the user, the video interception instruction sent to the terminal may include other information the user needs to indicate to the terminal. For example, the user may indicate which video parameter requirements the output video clip should meet; that is, in the present invention the intercepted video clip may further be output according to the video parameters required by the user, so that more of the user's requirements on the intercepted video can be met.
Specifically, in some embodiments of the present invention, the video interception instruction may further include a target file format selected by the user; that is, the user may instruct the terminal to output a video clip in the target file format. Here the file format refers to the file format of the video file itself, for example MP4 or MKV. The target file format indicates the specific file format the user needs, and the video clip obtained by interception in the present invention can then meet the user's requirement on the target file format.
In some embodiments of the present invention, the video interception instruction may further include a target resolution selected by the user; that is, the user may instruct the terminal to output a video clip at the target resolution. Here the resolution describes how much picture information is displayed in the video file; usually both width and height advance in steps of 16 pixels, i.e. 16 × n (n = 1, 2, 3, …), for example 176 × 144 or 352 × 288. The target resolution indicates the specific resolution the user needs, and the video clip obtained by interception in the present invention can then meet the user's requirement on the target resolution.
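The 16-pixel step unit described above can be illustrated with a small helper (a hypothetical function for illustration only, not part of the patent) that snaps a requested resolution down to the nearest multiple of 16:

```python
def align_to_16(width: int, height: int) -> tuple:
    """Round width and height down to multiples of 16, the step
    unit described above for video resolutions (16 * n)."""
    return (width // 16 * 16, height // 16 * 16)
```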
In some embodiments of the present invention, the video interception instruction may further include a target video format selected by the user; that is, the user may instruct the terminal to output a video clip in the target video format. Here the video format refers to the video content encoding format of the video file, for example H.264. The target video format indicates the specific video format the user needs, and the video clip obtained by interception in the present invention can then meet the user's requirement on the target video format.
In some embodiments of the present invention, the video interception instruction may further include a target video quality selected by the user; that is, the user may instruct the terminal to output a video clip at the target video quality. Here the video quality refers to the video transmission level requirement of the video file and can represent the complexity of the video format; for example, if the video quality is divided into 3 or 5 levels, the user may select level III as the required target video quality. The target video quality indicates the specific quality level the user needs, and the video clip obtained by interception in the present invention can then meet the user's requirement on the target video quality. It should be noted that in the present invention the video quality may also cover other parameters of the video: it may represent the number of frames between key frames in a group of pictures (GOP) of the video, or the quantization parameter (QP) of the video, which determines the quantizer's encoding compression rate and image precision, and it may also represent the profile configuration of the video, for example the main setting indexes baseline, main, and high.
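One plausible way to connect a user-selected quality level to the encoder parameters named above (GOP length, QP, profile) is a preset table. The mapping and all concrete numbers below are illustrative assumptions, not values from the patent:

```python
# Hypothetical presets mapping a quality level on the 5-level scale
# mentioned above to encoder parameters: GOP length, quantization
# parameter (QP), and profile.  Lower QP means higher image precision.
QUALITY_PRESETS = {
    1: {"gop": 250, "qp": 35, "profile": "baseline"},
    3: {"gop": 100, "qp": 28, "profile": "main"},
    5: {"gop": 30,  "qp": 20, "profile": "high"},
}

def encoder_params(level: int) -> dict:
    # Unlisted levels fall back to the middle (level 3) preset.
    return QUALITY_PRESETS.get(level, QUALITY_PRESETS[3])
```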
In some embodiments of the present invention, the video capture instruction may further specifically include a target video frame rate selected by the user, that is, the user may instruct the terminal to output a video segment whose video parameter is the target video frame rate, where the video frame rate refers to a video playing rate of the video file and indicates how many frames of pictures are played per second, for example, the video frame rate may be 30fps, the user may select a required target video frame rate to be 20fps, and the target video frame rate indicates a specific video frame rate that the user needs to output, and then the video segment obtained by video capture in the present invention may meet a requirement of the user on the target video frame rate.
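Reducing the frame rate from, say, 30 fps to a selected 20 fps is commonly done by dropping frames at regular intervals. The helper below is a hypothetical sketch of that idea, not the patent's method:

```python
def select_frames(n_frames: int, src_fps: float, dst_fps: float) -> list:
    """Choose which source-frame indices to keep when lowering the
    frame rate by dropping frames (e.g. 30 fps down to 20 fps)."""
    if dst_fps >= src_fps:
        return list(range(n_frames))   # nothing to drop
    step = src_fps / dst_fps           # keep one frame every `step`
    kept, next_keep = [], 0.0
    for i in range(n_frames):
        if i >= next_keep:
            kept.append(i)
            next_keep += step
    return kept
```

For 30 fps to 20 fps the step is 1.5, so two out of every three source frames are kept.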
In some embodiments of the present invention, the video capture instruction may further specifically include a target purpose selected by the user, that is, the user may instruct the terminal to output a video clip with a specific purpose, where the target purpose refers to an output path of the video file after being captured, for example, the video file may be an archived file or shared after being archived, and the target purpose indicates a specific purpose of the video clip that the user needs to output, and then the video clip obtained by video capture in the present invention may meet a requirement of the user for the target purpose.
It should be noted that the various video parameters that may be included in the video interception instruction received by the terminal have been described in detail above. It can be understood that the video interception instruction in the present invention may include one or more of the video parameters described above; which one or more the user selects can be determined according to the concrete application scenario.
102. Starting from the moment the playing time reaches the interception start time point, acquire decoded video data corresponding to the video interception area in the video file currently being played according to the video interception instruction, and stop the acquisition when the playing time reaches the interception end time point.
In the embodiment of the invention, after the terminal receives a video interception instruction comprising an interception start time point, the terminal monitors the video file currently played on its playing screen to track the progress of the playing time. When the playing time reaches the interception start time point, the current playing moment is the interception start time point, and from then on the terminal acquires, in real time, the decoded video data corresponding to the video interception area in the video file currently being played.
Playing a video is a process of decoding a video file into raw data and then displaying it. Taking the interception start time point as a marker, the video file currently being played is identified; since the video file is decoded into decoded video data by a software or hardware decoder, the corresponding decoded video data can be found from the currently played video file according to the correspondence between the data before and after decoding. The decoded video data is usually in a raw data format composed of the three components Y (luma), U and V (chroma), commonly used in the field of video compression; the decoded video data may, for example, be YUV420. As an example, suppose the time axis of the playing time shows that a video file has been playing for 4 minutes 20 seconds, and the interception start time point carried in the video interception instruction received by the terminal is 4 minutes 22 seconds. When the current playing time axis advances to 4 minutes 22 seconds, the decoded video data corresponding to the video file being played at that moment is acquired, and from 4 minutes 22 seconds on the terminal keeps acquiring the decoded video data corresponding to the video interception area in the video file currently being played.
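Two small helpers make the numbers above concrete. The first computes the size of one decoded YUV420 frame (a standard property of the format, not specific to the patent); the second is a hypothetical conversion of a play-time position such as 4 min 22 s into a comparable timestamp:

```python
def yuv420_frame_bytes(width: int, height: int) -> int:
    """Size of one decoded YUV420 frame: a full-resolution Y (luma)
    plane plus quarter-resolution U and V (chroma) planes, i.e.
    1.5 bytes per pixel in total."""
    y = width * height
    uv = (width // 2) * (height // 2) * 2
    return y + uv

def to_ms(minutes: int, seconds: int) -> int:
    """Convert a play-time position such as 4 min 22 s into the
    millisecond timestamp used to match the interception start point."""
    return (minutes * 60 + seconds) * 1000
```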
In the embodiment of the present invention, after receiving a video interception instruction comprising an interception start time point, the terminal starts acquiring decoded video data from the interception start time point, and while the playing time of the interception end time point has not been reached, the terminal must keep executing this acquisition. After the terminal receives the video interception instruction containing the interception end time point, it monitors the time axis of the playing time, and once the time axis advances to the interception end time point, the terminal stops acquiring decoded video data corresponding to the video interception area in the currently played video file. It can be understood that in the present invention the decoded video data acquired by the terminal follows the same playing order as the video file on the playing terminal.
In some embodiments of the present invention, step 102 of acquiring decoded video data corresponding to the video interception area in the video file currently being played according to the video interception instruction may specifically include the following steps:
a1, calculating the offset position between the video capture area and the playing interface of the current playing terminal;
a2, determining the coordinate mapping relation between the video capture area and the video image in the video file currently being played according to the calculated offset position;
and A3, reading the decoded video data corresponding to the video intercepting area from the frame buffer of the current playing terminal according to the coordinate mapping relation.
The terminal obtains a video capturing area defined by the user in the playing interface according to the adjustment condition of the video capturing frame by the user, so that the terminal can determine which part or all of the video pictures in the playing interface the user needs to capture. Referring to fig. 2, which is a schematic diagram illustrating an obtaining manner of a video capture area in an embodiment of the present invention, in fig. 2, an area a is a full screen area of a terminal, areas B to C are video playing areas, an area B is a playing interface, and an area C is a video capture area defined by a user. Of course, the position and area size of the area C may be adjusted by the user dragging the video capture box.
After the video interception area defined by the user is determined, step A1 is executed: the terminal calculates the offset position between the video interception area and the playing interface of the current playing terminal. That is, both the playing interface and the video interception area are rectangular frames, and the offsets of the four corners of the video interception area relative to the four corners of the playing interface of the current playing terminal need to be calculated to determine the offset position between the two. As shown in fig. 2, when the video file is played on the display screen it may be played full screen, as in area A of fig. 2, or in a non-full-screen window, as in area B; any region from B up to A may be used. In either case, the user can draw a rectangular area inside the video playing area to serve as the video interception area, and the offsets of the drawn area relative to the four corners of the video playing area can be calculated from the pixel position relationships.
After the offset position of the video interception area relative to the video playing interface is obtained, step A2 is executed: according to the calculated offset position, the coordinate mapping relationship between the video interception area and the video image in the video file currently being played is determined. That is, the offset position calculated in step A1 is relative to the video playing interface, while there is a scaling relationship between the video playing interface and the original video image. If the playing interface is the same size as the original video image, the mapping is a one-to-one equal proportion; but if the user has enlarged or reduced the original video image for display as the current playing interface, the calculated offset position of the video interception area relative to the playing interface must be remapped to obtain the coordinate mapping relationship between the video interception area and the video image in the video file currently being played. For example, as shown in fig. 2, the mapping to the original video image coordinates is needed because the extent of areas B to C is uncertain (that is, the size of the video playing area is not necessarily equal to the size of the original video image), so after the offset is computed, the coordinate mapping relationship of the offset position in the original video image must also be calculated.
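Steps A1 and A2 can be sketched together as one coordinate-mapping function. This is an illustrative sketch under the assumption of a uniform scale between the displayed playing interface and the original video image; the function and parameter names are not from the patent:

```python
def map_region_to_source(region, interface, source):
    """Map an interception region drawn on the playing interface onto
    the original video image.

    region, interface: (x, y, w, h) rectangles in screen pixels
    source: (src_w, src_h), size of the original video image
    """
    rx, ry, rw, rh = region
    ix, iy, iw, ih = interface
    # Step A1: offset of the interception region inside the interface.
    off_x, off_y = rx - ix, ry - iy
    # Step A2: rescale by the ratio between the original image size
    # and the displayed interface size.
    sx, sy = source[0] / iw, source[1] / ih
    return (round(off_x * sx), round(off_y * sy),
            round(rw * sx), round(rh * sy))
```

When the interface displays the original image at half size, for instance, screen offsets are doubled back up (or here, halved down) into source coordinates.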
In some embodiments of the present invention, step A3 reads the decoded video data corresponding to the video capture area from the frame buffer of the current playing terminal according to the coordinate mapping relationship. While the video file is being played on the current playing terminal, it has already been decoded into decoded video data by a software decoder or a hardware decoder; the terminal reads the decoded video data from the frame buffer and outputs it to the display screen as the playing interface. By means of the decoded video data stored in the frame buffer, the decoded video data corresponding to the video file being played at each playing moment can be acquired in real time starting from the capture start time point. After this decoded video data is acquired, it is scaled according to the coordinate mapping relationship to obtain the decoded video data corresponding to the video capture area; decoded video data in the playing interface that lies outside the video capture area is excluded from the acquired range.
It should be noted that, in some embodiments of the present invention, the terminal may obtain the decoded video data corresponding to the video capture area in the currently playing video file in other ways. For example, it may first obtain the source file corresponding to the video file currently being played, decode the source file again to generate decoded video data, and then scale the result according to the coordinate mapping relationship to obtain the decoded video data corresponding to the video capture area.
In some embodiments of the present invention, if the video capture instruction further includes a target resolution selected by the user, then before step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point, the video clip capturing method provided by the present invention may further include the following steps:
B1, judging whether the original resolution of the video image in the video file corresponding to the acquired decoded video data is the same as the target resolution;
and B2, if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
After the decoded video data is obtained in step 102, if the video capture instruction received by the terminal further includes a target resolution, this indicates that the user wants to specify the resolution of the captured video clip. The terminal may first obtain the original resolution of the video image from the file header information of the video file; this original resolution is the resolution at which the video file is displayed on the terminal's display screen. If the user needs to adjust it, a resolution adjustment menu may be displayed on the display screen, through which the user specifies the resolution of the captured video clip (i.e., the target resolution carried in the video capture instruction). After the original resolution is obtained, it is compared with the target resolution. If they are the same, no resolution conversion is needed; if they differ, resolution conversion is required, and specifically a third-party library (e.g., ffmpeg) may be called to perform it, yielding the acquired decoded video data at the target resolution. The file format encoding performed in the subsequent step 103 then operates on this data; that is, the acquired decoded video data in step 103 is specifically the acquired decoded video data at the target resolution.
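The compare-then-convert logic of steps B1 and B2 can be sketched as follows. The scaler itself is delegated to a placeholder, since the patent only says a third-party library such as ffmpeg may be called; all names here are illustrative:

```python
def apply_target_resolution(frames, original_res, target_res, rescale):
    """Steps B1/B2 sketched: convert only when the resolutions differ.

    `rescale` stands in for the third-party scaler (the patent suggests a
    library such as ffmpeg); all names here are illustrative.
    """
    if original_res == target_res:        # B1: identical -> no conversion
        return frames
    return rescale(frames, target_res)    # B2: convert to the target resolution
```

The point of the B1 check is that the (potentially expensive) scaler is never invoked when the user's target already matches the source.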
In some embodiments of the present invention, if the video capture instruction further includes a target resolution selected by the user, and in an application scenario where the foregoing steps A1 to A3 are performed, then before step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point, the video clip capturing method provided by the present invention may further include the following steps:
C1, calculating a resolution mapping value by using the coordinate mapping relationship and the original resolution of the video image in the video file corresponding to the acquired decoded video data;
C2, judging whether the resolution mapping value is the same as the target resolution;
and C3, if the resolution mapping value is not the same as the target resolution, scaling the video image in the video file corresponding to the acquired decoded video data to obtain the scaled acquired decoded video data.
After the decoded video data is obtained in step 102, if the video capture instruction received by the terminal further includes a target resolution, this indicates that the user wants to specify the resolution of the captured video clip. The terminal may first obtain the original resolution of the video image from the file header information of the video file; this original resolution is the resolution at which the video file is displayed on the terminal's display screen. If the user needs to adjust it, a resolution adjustment menu may be displayed on the display screen, through which the user specifies the resolution of the captured video clip (i.e., the target resolution carried in the video capture instruction). In the application scenario where steps A1 to A3 are performed, the coordinate mapping relationship between the video capture area and the video image in the currently playing video file has been generated; combining this coordinate mapping relationship with the original resolution yields the resolution mapping value. It is then determined whether the target resolution is the same as the resolution mapping value. If they are the same, the video image in the video file does not need to be scaled; if they differ, scaling is required, and specifically a third-party library (e.g., ffmpeg) may be called to perform the scaling processing, yielding the scaled acquired decoded video data. The file format encoding performed in the subsequent step 103 then operates on this data; that is, the acquired decoded video data in step 103 is specifically the scaled acquired decoded video data.
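Step C1's resolution mapping value is simply the capture-to-play ratio applied to the original resolution. A minimal sketch, assuming the coordinate mapping relationship reduces to the width/height ratio between the capture area and the playing area (function and argument names are illustrative, not from the patent):

```python
def resolution_mapping_value(capture_wh, play_wh, original_res):
    """Step C1 sketched: apply the capture-area / play-area ratio (from the
    coordinate mapping relationship of steps A1-A3) to the original
    resolution, giving the size the cropped frames naturally have."""
    return (round(original_res[0] * capture_wh[0] / play_wh[0]),
            round(original_res[1] * capture_wh[1] / play_wh[1]))
```

Steps C2/C3 then reduce to comparing this value with the target resolution: a 640×360 selection in a 1280×720 playing area of a 1920×1080 source maps to 960×540, so a 960×540 target needs no scaling while any other target does.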
In some embodiments of the present invention, if the video capture instruction further includes a target video format selected by the user, then before step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point, the video clip capturing method provided by the present invention may further include the following steps:
D1, judging whether the original video format of the video file corresponding to the obtained decoded video data is the same as the target video format;
and D2, if the original video format is different from the target video format, converting the video format of the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target video format.
After the decoded video data is obtained in step 102, if the video capture instruction received by the terminal further includes a target video format, this indicates that the user wants to specify the video format of the captured video clip. The terminal may first obtain the original video format from the file header information of the video file; this is the video format in which the video file on the terminal's display screen is played. If the user needs to adjust it, a video format adjustment menu may be displayed on the display screen, through which the user specifies the video format of the captured video clip (i.e., the target video format carried in the video capture instruction). After the original video format is obtained, it is compared with the target video format. If they are the same, no video format conversion is needed; if they differ, conversion is required, and specifically a third-party library (e.g., ffmpeg) may be called to perform it, yielding the acquired decoded video data in the target video format. The file format encoding performed in the subsequent step 103 then operates on this data; that is, the acquired decoded video data in step 103 is specifically the acquired decoded video data in the target video format.
In some embodiments of the present invention, if the video capture instruction further includes a target video quality selected by the user, then before step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point, the video clip capturing method provided by the present invention may further include the following steps:
E1, judging whether the original video quality of the video file corresponding to the obtained decoded video data is the same as the target video quality;
and E2, if the original video quality is different from the target video quality, adjusting the video quality of the video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video quality.
After the decoded video data is obtained in step 102, if the video capture instruction received by the terminal further includes a target video quality, this indicates that the user wants to specify the video quality of the captured video clip. The terminal may first obtain the original video quality from the file header information of the video file; this is the video quality at which the video file on the terminal's display screen is displayed. If the user needs to adjust it, a video quality adjustment menu may be displayed on the display screen, through which the user specifies the video quality of the captured video clip (i.e., the target video quality carried in the video capture instruction). After the original video quality is obtained, it is compared with the target video quality. Here, video quality may refer, for example, to the number of frames between key frames in a group of pictures, to the quantization coefficients of the video, or to other encoder configuration of the video; the target video quality is the same as the original video quality when these parameters are the same, in which case no adjustment is needed.
If the target video quality is different from the original video quality, the video quality needs to be adjusted; specifically, a third-party library (e.g., ffmpeg) may be called to perform the conversion, yielding the acquired decoded video data at the target video quality. The file format encoding performed in the subsequent step 103 then operates on this data; that is, the acquired decoded video data in step 103 is specifically the acquired decoded video data at the target video quality.
In some embodiments of the present invention, if the video capture instruction further includes a target video frame rate selected by the user, then before step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point, the video clip capturing method provided by the present invention may further include the following steps:
F1, judging whether the original video frame rate of the video file corresponding to the obtained decoded video data is the same as the target video frame rate;
and F2, if the original video frame rate is different from the target video frame rate, converting the video frame rate of the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target video frame rate.
After the decoded video data is obtained in step 102, if the video capture instruction received by the terminal further includes a target video frame rate, this indicates that the user wants to specify the frame rate of the captured video clip. The terminal may first obtain the original video frame rate from the file header information of the video file; this is the frame rate at which the video file on the terminal's display screen is played. If the user needs to adjust it, a frame rate adjustment menu may be displayed on the display screen, through which the user specifies the frame rate of the captured video clip (i.e., the target video frame rate carried in the video capture instruction). After the original video frame rate is obtained, it is compared with the target video frame rate. If they are the same, no frame rate conversion is needed; if they differ, conversion is required, and specifically a third-party library (e.g., ffmpeg) may be called to perform it, yielding the acquired decoded video data at the target video frame rate. The file format encoding performed in the subsequent step 103 then operates on this data; that is, the acquired decoded video data in step 103 is specifically the acquired decoded video data at the target video frame rate.
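As a naive illustration of what frame rate conversion does, frames can be resampled by nearest-frame selection: frames are dropped when lowering the rate and duplicated when raising it. The patent delegates real conversion to a third-party library such as ffmpeg, so this function is purely illustrative:

```python
def convert_frame_rate(frames, src_fps, dst_fps):
    """Nearest-frame resampling from src_fps to dst_fps: drops frames when
    lowering the rate, duplicates them when raising it. Real conversion
    (e.g. with motion-aware interpolation) would use a third-party
    library; this sketch only shows the index arithmetic."""
    n_out = round(len(frames) * dst_fps / src_fps)
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]
```

Halving a 30 fps second keeps every second frame; doubling a rate repeats each frame once.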
103. And starting from the interception ending time point, carrying out file format coding on the acquired decoded video data according to the video interception instruction, and generating a video segment intercepted from the video file.
In the embodiment of the present invention, the foregoing step 102 acquires a plurality of pieces of decoded video data from the capture start time point to the capture end time point. When the capture end time point arrives, the terminal stops acquiring decoded video data; by then it has acquired all the decoded video data corresponding to the video segment to be captured. Starting from the capture end time point, the terminal packages the decoded video data acquired in step 102 into file form, that is, performs file format encoding on it, thereby generating the video segment the user needs to capture; the generated video segment is obtained from the video file played in the playing interface of the terminal.
In some embodiments of the present invention, if the video capture instruction further includes a target file format selected by the user, step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction from the capture end time point, which may specifically include the following steps:
G1, encoding the obtained decoded video data into a video clip meeting the target file format by using a file synthesizer, and carrying file header information in the video clip, wherein the file header information comprises: attribute information of the video clip.
After the decoded video data is acquired in step 102, if the video capture instruction received by the terminal further includes a target file format, this indicates that the user wants to specify the file format of the captured video clip. The acquired decoded video data may then be encoded by a file synthesizer into a video clip satisfying the target file format; specifically, a third-party library (e.g., ffmpeg) may be called to perform the file format conversion. When the file synthesizer is used, file header information is carried in the generated video clip; this header carries basic feature information of the clip, for example, attribute information of the video clip.
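Step G1 can be pictured as a header-then-frames layout. This is only a toy stand-in for the "file synthesizer": a real muxer (e.g. via ffmpeg) writes a binary container, and every name and field below is illustrative rather than the patent's implementation:

```python
def synthesize_clip(encoded_frames, target_file_format):
    """Sketch of step G1: a stand-in 'file synthesizer' that writes file
    header information (basic attributes of the clip) followed by the
    frames, in the order a container file would carry them."""
    header = {
        "format": target_file_format,        # e.g. "mp4"
        "frame_count": len(encoded_frames),  # attribute information of the clip
    }
    return {"header": header, "frames": list(encoded_frames)}
```

The essential property mirrored here is that the header describing the clip is generated first and travels with the clip itself.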
104. And outputting the video clip according to the target purpose.
In the embodiment of the present invention, the video capture instruction further includes a target purpose selected by the user. After step 103 performs file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point and generates a video segment captured from the video file, the captured video segment is output according to the user's selection, so that the terminal can meet the user's requirements for video capture.
That is to say, after the terminal captures the video clip from the video file, the clip can be output to a specific target application according to the user's needs; for example, the user may archive the captured video clip, or share it to QQ space or WeChat after archiving. The target purpose indicates the specific use to which the user wants the output video clip put, so that the video clip captured from the video file in the present invention can meet the user's requirements for the target purpose.
As can be seen from the description of the embodiment above, a video capture instruction is first received, the instruction including a capture start time point and a capture end time point. Then, starting when the playing time reaches the capture start time point, decoded video data corresponding to the video capture area in the currently playing video file is acquired according to the video capture instruction, and acquisition stops when the playing time reaches the capture end time point. Starting from the capture end time point, file format encoding is performed on the acquired decoded video data according to the video capture instruction, generating a video segment captured from the video file. After the playing interface in the terminal starts playing a video file, decoded video data corresponding to the video capture area can be obtained once the playing time reaches the capture start time point, and continues to be obtained until the capture end time point is reached, so that a plurality of pieces of decoded video data are obtained according to the video capture instruction; once the capture end time point is reached, file format encoding of the obtained decoded video data generates the video segment captured from the video file.
In the present invention, the video segment to be captured is obtained by acquiring the decoded video data corresponding to the video file being played and then performing file format encoding on that decoded video data, rather than by capturing a number of video images and combining them.
In order to better understand and implement the above-mentioned schemes of the embodiments of the present invention, the following description specifically illustrates corresponding application scenarios.
Take the example of a user watching a video in the QQ browser. When the user encounters a favorite video scene, the user can choose to capture the whole video frame or only part of it, create a video clip without audio, and store the clip locally or share it with friends. Please refer to fig. 3, which is a schematic diagram of a video clip capturing process according to the present invention.
S1 calculation of offset position of video capture area
When the video file is played on the display screen of the terminal, it may be played full screen, as in area A in fig. 2, or in a non-full-screen window, as in area B in fig. 2. In either case, the user can draw a rectangular area in the video playing area to serve as the video capture area, and the offset positions of the four corners of the drawn area relative to the video playing area must first be calculated.
S2 coordinate mapping of original video image
Since the relationship between area B (the video playing area) and area C (the original video image) is not fixed, that is, the size of the video playing area is not necessarily equal to the size of the original video image, after the offset position is calculated, the coordinate mapping relationship of the offset position in the original video image also needs to be calculated.
After S1 and S2 are completed, the menu selections P1, P2 and P3 below are performed; a menu needs to be provided on the display screen of the terminal for the user to make these selections, specifically:
P1, purpose selection: determining whether the captured video segment is to be archived only, or shared after archiving.
P2, configuration selection: resolution, video format, video quality, file format, video frame rate, and video capture duration (i.e., capture start time point, capture end time point).
P3, mode selection: it is determined whether a single video clip or multiple video clips need to be intercepted.
S3 processing of decoded video data
When the user performs the area-demarcating operation of S1, processing starts from the current time point by default. Video playing is a process of decoding the video file into raw data, usually in YUV420 format, and then displaying it. Synthesizing the video segment directly from this raw data saves the step of re-decoding the source file, which saves the terminal's processor resources and battery power.
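For a sense of scale of the raw data being reused, the size of one YUV420 frame is easy to compute (a general property of the pixel format, not something stated in the patent):

```python
def yuv420_frame_bytes(width, height):
    """YUV420 stores a full-resolution luma (Y) plane plus two
    quarter-resolution chroma (U, V) planes, i.e. 1.5 bytes per pixel."""
    return width * height * 3 // 2
```

A single 1080p frame therefore occupies about 3 MB in the frame buffer, which illustrates why reusing the player's already-decoded data rather than decoding the source file a second time saves significant work.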
As shown in fig. 4, a schematic view of a processing flow of decoded video data according to an embodiment of the present invention is provided, where the process may specifically include the following steps:
Step m1: obtain from the video capture instruction the target resolution, target video format, target video quality, target file format, target video frame rate and captured video length selected by the user. Depending on the specific configuration, processing divides into two different flows, Q1 and Q2, which are described separately below.
Q1 applies when all of the following conditions are satisfied: the target resolution is the same as the original resolution, the target video frame rate is the same as the original frame rate, the target video format is the same as the original video format (that is, the target encoder and the original decoder use the same compressed-video protocol), and the target video quality is the same as the original video quality. When these conditions are met, the Q1 flow can be selected: file format encoding is performed on the obtained decoded video data according to the video capture instruction, generating the video segment captured from the video file. This flow is equivalent to a copy mode; the Q1 flow does not require decompressing the video file but merely repackages the decoded video data into a new file format.
Specifically, the flow under the Q1 process is as follows:
Step m3: open the file synthesizer according to the target file format and generate the file header information, which contains basic features of the video segment, such as the attributes of the video segment and the video encoding format used.
Step m7: call the file synthesizer and perform file format encoding on the encoded video data according to the rule, where the rule means, for example, that if the target file format selected by the user is an mp4 file, the finally encoded video clip is generated according to the video organization of an mp4 file.
Q2 applies when any condition of Q1 is not satisfied, that is, when at least one of the following holds: the target resolution is different from the original resolution, the target video frame rate is different from the original frame rate, the target video format is different from the original video format (that is, the target encoder and the original decoder use different compressed-video protocols), or the target video quality is different from the original video quality. In that case the Q2 flow is executed.
Specifically, the flow under the Q2 process is as follows:
Step m2: open the encoder according to the video format to be encoded.
And step m3, opening a file synthesizer according to the file format and generating file header information.
And m4, obtaining decoded video data from the decoding link of the current playing process.
Step m5: determine, according to the information obtained in step m1, whether scaling is required. For example, when the user demarcates a video capture area, it is compared with the current player area to obtain a proportional relationship; this proportion, combined with the original resolution, yields a size. If this size is not the same as the target resolution, scaling is required so that the resolution of the output video segment meets the requirement; otherwise, no scaling is needed.
Step m6: call the encoder to encode the decoded video data according to the target video format.
Step m7: call the file synthesizer and encode the encoded video data according to the target file format to generate the video segment.
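Taken together, the choice between the Q1 and Q2 flows reduces to a parameter comparison. A minimal sketch, with illustrative dictionary keys (the patent does not prescribe a data structure):

```python
def choose_flow(target, original):
    """The Q1/Q2 branch sketched: Q1 (repackage only) requires every target
    parameter to match the original; any mismatch forces Q2 (scale and/or
    re-encode). The dictionary keys are illustrative."""
    keys = ("resolution", "frame_rate", "video_format", "video_quality")
    return "Q1" if all(target[k] == original[k] for k in keys) else "Q2"
```

The design rationale is that Q1 avoids both the encoder (m2, m6) and any scaling (m5), leaving only the file synthesizer steps (m3, m7).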
It should be noted that, in the present invention, the flow of processing the decoded video data is synchronized with the playing of the video file, and if multiple video segments are to be synthesized, the above Q1 or Q2 flow is repeated.
S4 output of video clip
When the video segment has been synthesized, the user is notified of success. According to the selection made in P1, if archiving was chosen, a third-party application is called to open the video folder; if sharing was chosen, a third-party application is called to share, for example via, but not limited to, WeChat or QQ.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 5-a, an apparatus 500 for capturing a video segment according to an embodiment of the present invention includes: a receiving module 501, a video data acquiring module 502, a file encoding module 503, and a video segment outputting module 504, wherein,
a receiving module 501, configured to receive a video capture instruction sent by a user through a current playing terminal, where the video capture instruction includes: a capture start time point and a capture end time point, determined by the user, of the video to be captured; a video capture area demarcated by the user in the playing interface of the current playing terminal; and a target purpose selected by the user;
a video data obtaining module 502, configured to obtain, according to the video capture instruction, decoded video data corresponding to a video capture area in a video file currently being played from the time when the playing time is the capture start time point, and stop obtaining the decoded video data corresponding to the video capture area in the video file currently being played until the playing time is the capture end time point;
and a file encoding module 503, configured to perform file format encoding on the obtained decoded video data according to the video capture instruction from the capture end time point, and generate a video segment captured from the video file.
and a video segment output module 504, configured to output, according to the target purpose, the video segment that the file encoding module 503 generated by performing file format encoding on the obtained decoded video data according to the video capture instruction starting from the capture end time point.
In some embodiments of the present invention, referring to fig. 5-b, the video data obtaining module 502 comprises:
a position calculating unit 5021, configured to calculate an offset position between the video capture area and the playing interface of the current playing terminal;
a mapping relation determining unit 5022, configured to determine a coordinate mapping relation between the video capture area and a video image in a video file currently being played according to the calculated offset position;
and a video data reading unit 5023, configured to read, according to the coordinate mapping relationship, the decoded video data corresponding to the video capture area from the frame buffer of the current playing terminal.
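The offset calculation of unit 5021 and the coordinate mapping of unit 5022 can be sketched briefly. The function and parameter names below are hypothetical; the scale factors stand in for the zooming relation between the playing interface and the original video image, and rounding to whole pixels is an assumption.

```python
def map_capture_area(capture_rect, interface_origin, video_size, interface_size):
    """Map a user-drawn capture rectangle, given in playing-interface pixels,
    to pixel coordinates in the decoded video frame.

    capture_rect: (x, y, w, h); interface_origin: (x, y) of the playing
    interface on screen; video_size / interface_size: (width, height).
    """
    # Offset position of the capture area relative to the playing interface
    off_x = capture_rect[0] - interface_origin[0]
    off_y = capture_rect[1] - interface_origin[1]
    # Zooming relation between the playing interface and the original image
    sx = video_size[0] / interface_size[0]
    sy = video_size[1] / interface_size[1]
    # Coordinate mapping applied to the offset and to the rectangle size
    return (round(off_x * sx), round(off_y * sy),
            round(capture_rect[2] * sx), round(capture_rect[3] * sy))
```

For example, with a 960x540 playing interface showing a 1920x1080 video, a 200x100 rectangle drawn at (100, 50) maps to the 400x200 region of the decoded frame starting at (200, 100).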
In some embodiments of the present invention, if the video capture instruction further includes a target file format selected by the user, the file encoding module 503 is configured to encode, by using a file synthesizer, the obtained decoded video data into a video segment that meets the target file format, and to carry file header information in the video segment, where the file header information includes attribute information of the video segment.
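As a toy illustration of a segment carrying file header information with attribute information, the sketch below prepends a length-prefixed JSON header to already-encoded frame data. This container layout is invented for illustration only; it is not the file synthesizer of the embodiment nor any real video container format.

```python
import json
import struct

def write_clip(path, encoded_frames, attrs):
    """Write a clip file as: 4-byte big-endian header length, a JSON header
    carrying attribute information (resolution, frame rate, ...), then the
    encoded frame data. Purely illustrative, not a real container."""
    header = json.dumps(attrs).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack(">I", len(header)))  # header length prefix
        f.write(header)                          # attribute information
        for frame in encoded_frames:
            f.write(frame)                       # encoded video payload
```

A reader would first decode the 4-byte length, parse the header, and then know how to interpret the payload that follows.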
In some embodiments of the present invention, referring to fig. 5-c, if the video capture instruction further includes a target resolution selected by the user, the video clip capturing apparatus 500 further includes: a resolution coordination module 505, configured to judge, before the file encoding module 503 performs, starting from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction, whether the original resolution of the video image in the video file corresponding to the obtained decoded video data is the same as the target resolution; and, if the original resolution is different from the target resolution, to convert the resolution of the video image in that video file so as to obtain decoded video data at the target resolution.
In some embodiments of the present invention, if the video capture instruction further includes a target resolution selected by the user, the video clip capturing apparatus 500 further includes: a resolution coordination module 505, configured to calculate, before the file encoding module 503 performs, starting from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction, a resolution mapping value using the coordinate mapping relationship and the original resolution of the video image in the video file corresponding to the obtained decoded video data; to judge whether the resolution mapping value is the same as the target resolution; and, if the resolution mapping value is different from the target resolution, to zoom the video image in that video file so as to obtain the zoomed decoded video data.
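The resolution coordination decision above, comparing the resolution mapping value against the target resolution and zooming only when they differ, can be sketched as follows. The function name and the (width, height) tuple convention are hypothetical.

```python
def coordinate_resolution(mapped_size, target_size):
    """Return the (sx, sy) scale factors needed to bring the cropped region,
    whose size is the resolution mapping value, to the target resolution;
    return None when the two already match and no zooming is required."""
    if mapped_size == target_size:
        return None  # original and target resolution are the same
    return (target_size[0] / mapped_size[0],
            target_size[1] / mapped_size[1])
```

For instance, a 400x200 mapped region with an 800x400 target resolution requires a uniform 2x zoom, while a matching pair skips the scaling step entirely.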
In some embodiments of the present invention, referring to fig. 5-d, if the video capture instruction further includes a target video format selected by the user, the video clip capturing apparatus 500 further includes: a video format coordination module 506, configured to judge, before the file encoding module 503 performs, starting from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction, whether the original video format of the video file corresponding to the obtained decoded video data is the same as the target video format; and, if the original video format is different from the target video format, to convert the video format of that video file so as to obtain decoded video data in the target video format.
In some embodiments of the present invention, referring to fig. 5-e, if the video capture instruction further includes a target video quality selected by the user, the video clip capturing apparatus 500 further includes: a video quality coordination module 507, configured to judge, before the file encoding module 503 performs, starting from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction, whether the original video quality of the video file corresponding to the obtained decoded video data is the same as the target video quality; and, if the original video quality is different from the target video quality, to adjust the video quality of that video file so as to obtain decoded video data at the target video quality.
In some embodiments of the present invention, referring to fig. 5-f, if the video capture instruction further includes a target video frame rate selected by the user, the video clip capturing apparatus 500 further includes: a video frame rate coordination module 508, configured to judge, before the file encoding module 503 performs, starting from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction, whether the original video frame rate of the video file corresponding to the obtained decoded video data is the same as the target video frame rate; and, if the original video frame rate is different from the target video frame rate, to convert the video frame rate of that video file so as to obtain decoded video data at the target video frame rate.
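One simple way a frame rate coordination module could realize the conversion is nearest-frame resampling: dropping frames when lowering the rate and duplicating frames when raising it. This is a hypothetical minimal approach; practical converters may instead interpolate between frames.

```python
def convert_frame_rate(frames, src_fps, dst_fps):
    """Resample a frame sequence to a new frame rate by picking, for each
    output slot, the nearest earlier source frame. Drops frames when
    dst_fps < src_fps and duplicates frames when dst_fps > src_fps."""
    n_out = round(len(frames) * dst_fps / src_fps)
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]
```

Converting ten frames from 30 fps down to 15 fps keeps every other frame; converting up to 60 fps emits each frame twice.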
It can be known from the above description of the embodiments of the present invention that, when a user sends a video capture instruction through the current playing terminal, the instruction is first received. The video capture instruction may include a capture start time point and a capture end time point, a video capture area defined by the user, and a target purpose selected by the user. After the playing interface in the terminal starts playing a video file and the playing time reaches the capture start time point, the decoded video data corresponding to the video capture area in the video file currently being played is obtained, and it continues to be obtained until the capture end time point is reached; in this way, a plurality of pieces of decoded video data are obtained according to the video capture instruction. After the capture end time point is reached, the obtained decoded video data is subjected to file format encoding according to the video capture instruction, so that the video clip cut out from the video file is generated, and the generated clip is then output according to the target purpose selected by the user. In the present invention, the video segment to be captured is obtained by acquiring the decoded video data corresponding to the video file being played and then performing file format encoding on that decoded video data, rather than by capturing a plurality of video images and combining them.
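The overall flow summarized above, collecting decoded frames for the capture area between the two time points and then encoding them once into a clip, can be simulated with a minimal, self-contained sketch. The player, crop, and encode stand-ins are all hypothetical; real frames would come from the terminal's frame buffer rather than a list of indices.

```python
class _Player:
    """Minimal stand-in for a playing terminal: pairs each decoded frame
    (represented here by its index) with its presentation time in seconds."""
    def __init__(self, n_frames, fps):
        self.frames = [(i / fps, i) for i in range(n_frames)]

def capture_clip(player, start, end, crop, encode):
    """Collect decoded frames whose playing time falls in [start, end),
    crop each one to the capture area, then encode them all in one pass."""
    captured = [crop(f) for (t, f) in player.frames if start <= t < end]
    return encode(captured)

# A 3-second clip from a 30 fps stream, with an identity crop and a plain
# list standing in for the encoded clip file
clip = capture_clip(_Player(300, 30), 1.0, 4.0,
                    crop=lambda f: f, encode=lambda fs: fs)
```

The key point the sketch mirrors is that encoding happens once, after the capture end time point, over already-decoded data.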
As shown in fig. 6, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention. The terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example:
fig. 6 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present invention. Referring to fig. 6, the handset includes: radio Frequency (RF) circuit 610, memory 620, input unit 630, display unit 640, sensor 650, audio circuit 660, wireless fidelity (WiFi) module 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the handset configuration shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 6:
The RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and passes it to the processor 680 for processing, and transmits uplink data to the base station. In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user on or near it (e.g., operations performed on or near the touch panel 631 with any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 680, and can also receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 631, the input unit 630 may also include other input devices 632. In particular, the other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 631 may cover the display panel 641; when the touch panel 631 detects a touch operation on or near it, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and the processor 680 then provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 6 the touch panel 631 and the display panel 641 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 650, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor adjusts the brightness of the display panel 641 according to the brightness of the ambient light, and the proximity sensor turns off the display panel 641 and/or the backlight when the mobile phone is moved close to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping); other sensors that can be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not further described here.
The audio circuit 660, speaker 661, and microphone 662 can provide an audio interface between the user and the mobile phone. On one hand, the audio circuit 660 may convert received audio data into an electrical signal and transmit it to the speaker 661, which converts it into a sound signal for output; on the other hand, the microphone 662 converts collected sound signals into electrical signals, which the audio circuit 660 receives and converts into audio data. After the audio data is processed by the processor 680, it is transmitted via the RF circuit 610 to, for example, another mobile phone, or output to the memory 620 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 670, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 6 shows the WiFi module 670, it is understood that it is not an essential part of the mobile phone and may be omitted as needed within the scope not changing the essence of the invention.
The processor 680 is a control center of the mobile phone, and connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby performing overall monitoring of the mobile phone. Optionally, processor 680 may include one or more processing units; preferably, the processor 680 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.
The handset also includes a power supply 690 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 680 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present invention, the processor 680 included in the terminal further controls the execution of the above method for capturing a video clip performed by the terminal.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus necessary general-purpose hardware, and may also be implemented by special-purpose hardware including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components, and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present invention, a software implementation is the more preferable embodiment in most cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
In summary, the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the above embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the above embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (16)

1. A method for intercepting a video segment, comprising:
receiving a video interception instruction sent by a user through a current playing terminal, wherein the video interception instruction comprises: an interception start time point and an interception end time point determined by the user for the video to be intercepted, a video interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user; a zooming relation exists between the playing interface and the original video image;
calculating, starting from the moment the playing time reaches the interception start time point, the offset position between the video interception area and the playing interface of the current playing terminal;
determining a coordinate mapping relation between the video capturing area and a video image in a video file which is currently played according to the calculated offset position and the scaling relation between the playing interface and the original video image;
performing scaling transformation according to the coordinate mapping relation, and reading the decoded video data corresponding to the video interception area from a frame buffer of the current playing terminal, wherein the decoded video data corresponding to the area outside the video interception area in the playing interface is not within the range of the obtained decoded video data; and stopping obtaining the decoded video data corresponding to the video interception area in the video file currently being played when the playing time reaches the interception end time point;
starting from the interception ending time point, carrying out file format coding on the obtained decoded video data according to the video interception instruction, and generating a video segment intercepted from the video file;
outputting the video clip according to the target use.
2. The method according to claim 1, wherein if the video capture instruction further includes a target file format selected by a user, the performing, from the capture end time point, file format encoding on the obtained decoded video data according to the video capture instruction includes:
using a file synthesizer to encode the obtained decoded video data into a video segment meeting the target file format, and carrying file header information in the video segment, wherein the file header information comprises: attribute information of the video clip.
3. The method according to any one of claims 1 to 2, wherein if the video capture instruction further includes a target resolution selected by a user, the method further includes, before performing file format encoding on the acquired decoded video data according to the video capture instruction from the capture end time point:
judging whether the original resolution of the video image in the video file corresponding to the obtained decoded video data is the same as the target resolution or not;
and if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
4. The method according to claim 1, wherein if the video capture instruction further includes a target resolution selected by a user, starting from the capture end time point, before performing file format encoding on the obtained decoded video data according to the video capture instruction, the method further includes:
calculating a resolution mapping value by using the coordinate mapping relation and the original resolution of the video image in the video file corresponding to the acquired decoded video data;
judging whether the resolution mapping value is the same as the target resolution or not;
if the resolution mapping value is different from the target resolution, zooming the video image in the video file corresponding to the obtained decoded video data to obtain the zoomed obtained decoded video data.
5. The method according to any one of claims 1 to 2, wherein if the video capture instruction further includes a target video format selected by a user, the method further includes, from the capture end time point, before performing file format encoding on the acquired decoded video data according to the video capture instruction:
judging whether the original video format of the video file corresponding to the obtained decoded video data is the same as the target video format;
and if the original video format is different from the target video format, converting the video format of the video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video format.
6. The method according to any one of claims 1 to 2, wherein if the video capture instruction further includes a target video quality selected by a user, the method further includes, from the capture end time point, before performing file format encoding on the acquired decoded video data according to the video capture instruction:
judging whether the original video quality of the video file corresponding to the obtained decoded video data is the same as the target video quality;
if the original video quality is different from the target video quality, adjusting the video quality of a video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video quality.
7. The method according to any one of claims 1 to 2, wherein if the video capture instruction further includes a target video frame rate selected by a user, the method further includes, before performing file format encoding on the acquired decoded video data according to the video capture instruction from the capture end time point:
judging whether the original video frame rate of the video file corresponding to the obtained decoded video data is the same as the target video frame rate;
if the original video frame rate is different from the target video frame rate, converting the video frame rate of the video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video frame rate.
8. An apparatus for intercepting a video clip, comprising:
the receiving module is used for receiving a video interception instruction sent by a user through a current playing terminal, wherein the video interception instruction comprises: an interception start time point and an interception end time point determined by the user for the video to be intercepted, a video interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user; a zooming relation exists between the playing interface and the original video image;
the video data acquisition module is used for calculating, starting from the moment the playing time reaches the interception start time point, the offset position between the video interception area and the playing interface of the current playing terminal; determining a coordinate mapping relation between the video interception area and a video image in the video file currently being played according to the calculated offset position and the zooming relation between the playing interface and the original video image; and performing scaling transformation according to the coordinate mapping relation and reading the decoded video data corresponding to the video interception area from a frame buffer of the current playing terminal, wherein the decoded video data corresponding to the area outside the video interception area in the playing interface is not within the range of the obtained decoded video data, and the acquisition of the decoded video data corresponding to the video interception area in the video file currently being played is stopped when the playing time reaches the interception end time point;
the file coding module is used for carrying out file format coding on the obtained decoded video data according to the video intercepting instruction from the intercepting ending time point to generate a video segment intercepted from the video file;
and the video clip output module is used for outputting the video clip according to the target purpose.
9. The apparatus according to claim 8, wherein if the video capture instruction further includes a target file format selected by a user, the file encoding module is specifically configured to encode the obtained decoded video data into a video segment that satisfies the target file format using a file synthesizer, and carry file header information in the video segment, where the file header information includes: attribute information of the video clip.
10. The apparatus according to any one of claims 8 to 9, wherein if the video capture command further comprises a target resolution selected by a user, the apparatus further comprises: a resolution coordination module, configured to, starting from the interception end time point, judge, by the file encoding module, whether an original resolution of a video image in a video file corresponding to the obtained decoded video data is the same as the target resolution before performing file format encoding on the obtained decoded video data according to the video interception instruction; and if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
11. The apparatus of claim 8, wherein if the video capture command further includes a target resolution selected by the user, the apparatus further comprises: a resolution coordination module, configured to calculate, by the file encoding module, a resolution mapping value using the coordinate mapping relationship and an original resolution of a video image in a video file corresponding to the obtained decoded video data before performing file format encoding on the obtained decoded video data according to the video capture instruction from the capture end time point; judging whether the resolution mapping value is the same as the target resolution or not; if the resolution mapping value is different from the target resolution, zooming the video image in the video file corresponding to the obtained decoded video data to obtain the zoomed obtained decoded video data.
12. The apparatus according to any one of claims 8 to 9, wherein if the video capture command further includes a target video format selected by a user, the apparatus further comprises: the video format coordination module is used for judging whether the original video format of the video file corresponding to the obtained decoded video data is the same as the target video format or not before the file coding module performs file format coding on the obtained decoded video data according to the video interception instruction from the interception ending time point; and if the original video format is different from the target video format, converting the video format of the video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video format.
13. The apparatus according to any one of claims 8 to 9, wherein if the video capture command further includes a target video quality selected by a user, the apparatus further comprises: the video quality coordination module is used for judging whether the original video quality of a video file corresponding to the obtained decoded video data is the same as the target video quality or not before the file coding module performs file format coding on the obtained decoded video data according to the video interception instruction from the interception ending time point; if the original video quality is different from the target video quality, adjusting the video quality of a video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video quality.
14. The apparatus according to any one of claims 8 to 9, wherein if the video capture instruction further includes a target video frame rate selected by a user, the apparatus further comprises: a video frame rate coordination module, configured to, starting from the capture end time point, and before performing file format coding on the obtained decoded video data according to the video capture instruction, determine whether an original video frame rate of a video file corresponding to the obtained decoded video data is the same as the target video frame rate; if the original video frame rate is different from the target video frame rate, converting the video frame rate of the video file corresponding to the obtained decoded video data to obtain the obtained decoded video data containing the target video frame rate.
15. A computer-readable storage medium, wherein a software program is stored in the computer-readable storage medium, and the software program, when executed, implements the video clip intercepting method according to any one of claims 1 to 7.
16. A terminal device, comprising a memory and a processor, wherein:
the memory is configured to store a software program; and
the processor is configured to run the software program to perform the video clip intercepting method according to any one of claims 1 to 7.
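Claims 12 to 14 describe three coordination modules that share one pattern: compare a parameter of the captured clip with the user-selected target and convert only when they differ. A minimal sketch of that decision logic follows; the `VideoParams` structure, the `build_conversion_args` helper, and the ffmpeg-style flags are illustrative assumptions for this sketch, not part of the claimed apparatus.

```python
# Illustrative sketch of the coordination pattern in claims 12-14:
# each user-selected target is applied only when it differs from the
# corresponding parameter of the captured clip.

from dataclasses import dataclass
from typing import List


@dataclass
class VideoParams:
    fmt: str         # container format, e.g. "mp4" (claim 12)
    resolution: str  # used here as a proxy for video quality (claim 13)
    fps: float       # video frame rate (claim 14)


def build_conversion_args(original: VideoParams, target: VideoParams) -> List[str]:
    """Return ffmpeg-style arguments for only the conversions that are
    actually needed; an empty list means the clip can be encoded as-is."""
    args: List[str] = []
    if original.fps != target.fps:                # frame-rate coordination
        args += ["-r", str(target.fps)]
    if original.resolution != target.resolution:  # quality coordination
        args += ["-s", target.resolution]
    if original.fmt != target.fmt:                # format coordination
        args += ["-f", target.fmt]
    return args


original = VideoParams(fmt="flv", resolution="1920x1080", fps=30.0)
target = VideoParams(fmt="mp4", resolution="1280x720", fps=30.0)
print(build_conversion_args(original, target))  # ['-s', '1280x720', '-f', 'mp4']
```

Keying each conversion on an inequality check mirrors the claims' "determine whether ... is the same as" step, so a clip whose original parameters already match the targets is passed to file format coding without a needless transcode.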
CN201510448280.7A 2015-07-27 2015-07-27 Video clip intercepting method and device Active CN106412702B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201510448280.7A CN106412702B (en) 2015-07-27 2015-07-27 Video clip intercepting method and device
PCT/CN2016/085994 WO2017016339A1 (en) 2015-07-27 2016-06-16 Video sharing method and device, and video playing method and device
MYPI2017704144A MY190923A (en) 2015-07-27 2016-06-16 Video sharing method and device, and video playing method and device
US15/729,439 US10638166B2 (en) 2015-07-27 2017-10-10 Video sharing method and device, and video playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510448280.7A CN106412702B (en) 2015-07-27 2015-07-27 Video clip intercepting method and device

Publications (2)

Publication Number Publication Date
CN106412702A (en) 2017-02-15
CN106412702B (en) 2020-06-05

Family

ID=58008580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510448280.7A Active CN106412702B (en) 2015-07-27 2015-07-27 Video clip intercepting method and device

Country Status (1)

Country Link
CN (1) CN106412702B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930750A (en) * 2020-08-28 2020-11-13 支付宝(杭州)信息技术有限公司 Method and device for carrying out evidence storage on evidence obtaining process video clip

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
CN106993228A * 2017-03-02 2017-07-28 北京潘达互娱科技有限公司 Video processing method and device
CN106998494B (en) * 2017-04-24 2021-02-05 腾讯科技(深圳)有限公司 Video recording method and related device
CN107426400A * 2017-04-27 2017-12-01 福建中金在线信息科技有限公司 Method and system for capturing images during terminal playback
CN107295416B (en) * 2017-05-05 2019-11-22 中广热点云科技有限公司 The method and apparatus for intercepting video clip
CN107682744B (en) * 2017-09-29 2021-01-08 惠州Tcl移动通信有限公司 Video clip output method, storage medium and mobile terminal
CN107801106B * 2017-10-24 2019-10-15 维沃移动通信有限公司 Video clip capturing method and electronic device
CN107864411A * 2017-10-31 2018-03-30 广东小天才科技有限公司 Picture output method and terminal device
CN107888988A * 2017-11-17 2018-04-06 广东小天才科技有限公司 Video clipping method and electronic device
CN109936763B (en) * 2017-12-15 2022-07-01 腾讯科技(深圳)有限公司 Video processing and publishing method
CN110022496A (en) * 2018-01-09 2019-07-16 北京小度互娱科技有限公司 Video cutting method, device, system, computer equipment and storage medium
CN109194979B (en) * 2018-10-30 2022-06-17 湖南天鸿瑞达集团有限公司 Audio and video processing method and device, mobile terminal and readable storage medium
CN110446096A * 2019-08-15 2019-11-12 天脉聚源(杭州)传媒科技有限公司 Video playing method, device and storage medium for playing a video while it is being recorded
CN110798727A (en) * 2019-10-28 2020-02-14 维沃移动通信有限公司 Video processing method and electronic equipment
CN110839181A (en) * 2019-12-04 2020-02-25 湖南快乐阳光互动娱乐传媒有限公司 Method and system for converting video content into gif based on B/S architecture
CN112822544B (en) * 2020-12-31 2023-10-20 广州酷狗计算机科技有限公司 Video material file generation method, video synthesis method, device and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103873920A (en) * 2014-03-18 2014-06-18 深圳市九洲电器有限公司 Program browsing method and system and set top box
CN104616241A (en) * 2014-07-24 2015-05-13 腾讯科技(北京)有限公司 Video screen-shot method and device

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN102510533A (en) * 2011-12-12 2012-06-20 深圳市九洲电器有限公司 Method, device and set-top box for eliminating video capture delay
CN102802079B * 2012-08-24 2016-08-17 广东欧珀移动通信有限公司 Method for generating video preview segments in a media player
CN103152654A (en) * 2013-03-15 2013-06-12 杭州智屏软件有限公司 Low-latency video fragment interception technology
CN104079981A (en) * 2013-03-25 2014-10-01 联想(北京)有限公司 Data processing method and data processing device
CN103414751B * 2013-07-16 2016-08-17 广东工业大学 PC screen content sharing and interaction control method
US10999637B2 (en) * 2013-08-30 2021-05-04 Adobe Inc. Video media item selections
CN103747362B (en) * 2013-12-30 2017-02-08 广州华多网络科技有限公司 Method and device for cutting out video clip
CN104159151B * 2014-08-06 2017-12-05 哈尔滨工业大学深圳研究生院 Device and method for video capture and processing on an OTT box
CN104159161B (en) * 2014-08-25 2018-05-18 广东欧珀移动通信有限公司 The localization method and device of video image frame
CN104618741A (en) * 2015-03-02 2015-05-13 浪潮软件集团有限公司 Information pushing system and method based on video content


Similar Documents

Publication Publication Date Title
CN106412702B (en) Video clip intercepting method and device
CN106412691B (en) Video image intercepting method and device
CN106412687B (en) Method and device for intercepting audio and video clips
CN109218731B (en) Screen projection method, device and system of mobile equipment
US10638166B2 (en) Video sharing method and device, and video playing method and device
CN110324622B (en) Video coding rate control method, device, equipment and storage medium
CN110636375B (en) Video stream processing method and device, terminal equipment and computer readable storage medium
CN108235058B (en) Video quality processing method, storage medium and terminal
CN106412681B (en) Live bullet screen video broadcasting method and device
US10986332B2 (en) Prediction mode selection method, video encoding device, and storage medium
JP7085014B2 (en) Video coding methods and their devices, storage media, equipment, and computer programs
CN111010576B (en) Data processing method and related equipment
CN109729384B (en) Video transcoding selection method and device
CN109168013B (en) Method, device and equipment for extracting frame and computer readable storage medium
CN109196865B (en) Data processing method, terminal and storage medium
CN108038825B (en) Image processing method and mobile terminal
CN106844580B (en) Thumbnail generation method and device and mobile terminal
CN108881920B (en) Method, terminal and server for transmitting video information
CN109121008B (en) Video preview method, device, terminal and storage medium
KR20140092517A (en) Compressing Method of image data for camera and Electronic Device supporting the same
CN108460769B (en) image processing method and terminal equipment
KR20190109476A (en) Video encoding methods, apparatus, and devices, and storage media
CN107396178B (en) Method and device for editing video
CN109474833B (en) Network live broadcast method, related device and system
CN110996117A (en) Video transcoding method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221202

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Yayue Technology Co.,Ltd.

Address before: Room 403, East Block 2, SEG Science and Technology Park, Zhenxing Road, Futian District, Shenzhen, Guangdong 518000

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.