CN106412691B - Video image intercepting method and device - Google Patents

Video image intercepting method and device

Info

Publication number
CN106412691B
CN106412691B (application CN201510446684.2A)
Authority
CN
China
Prior art keywords
video
image
file
video data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510446684.2A
Other languages
Chinese (zh)
Other versions
CN106412691A (en)
Inventor
陈俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510446684.2A priority Critical patent/CN106412691B/en
Priority to MYPI2017704144A priority patent/MY190923A/en
Priority to PCT/CN2016/085994 priority patent/WO2017016339A1/en
Publication of CN106412691A publication Critical patent/CN106412691A/en
Priority to US15/729,439 priority patent/US10638166B2/en
Application granted granted Critical
Publication of CN106412691B publication Critical patent/CN106412691B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the invention discloses a video image interception method and a video image interception device, which are used for improving the efficiency of video image interception processing. The video image intercepting method provided by the embodiment of the invention comprises the following steps: receiving an image interception instruction sent by a user through a current playing terminal, wherein the image interception instruction comprises: a video interception time point, an image interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user; acquiring, according to the image interception instruction, decoded video data corresponding to the image interception area in the video file played at the video interception time point; carrying out file format coding on the acquired decoded video data according to the image interception instruction to generate a video screenshot intercepted from the video file; and outputting the video screenshot according to the target use.

Description

Video image intercepting method and device
Technical Field
The invention relates to the technical field of computers, in particular to a video image intercepting method and device.
Background
In recent years, multimedia information technology has developed rapidly, and users have become increasingly accustomed to playing videos on handheld terminals. When a user is interested in a particular picture while watching a video, the user needs to intercept that picture from the played video and store it. In the prior-art image capturing technology, a user submits a screenshot command to a terminal, and the terminal must stop the currently played video and store the image at which playback stopped. Because this scheme captures only the picture displayed at the moment playback is paused, its image capturing processing efficiency is very low.
Disclosure of Invention
The embodiment of the invention provides a video image capturing method and device, which are used for improving the video image capturing processing efficiency.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a method for capturing a video image, including:
receiving an image interception instruction sent by a user through a current playing terminal, wherein the image interception instruction comprises: a video interception time point, an image interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user;
acquiring decoded video data corresponding to an image interception area in a video file played at the video interception time point according to the image interception instruction;
carrying out file format coding on the obtained decoded video data according to the image intercepting instruction, and generating a video screenshot obtained by intercepting the video file;
outputting the video screenshot according to the target use.
In a second aspect, an embodiment of the present invention further provides a terminal, including:
the receiving module is used for receiving an image interception instruction sent by a user through a current playing terminal, and the image interception instruction comprises: a video interception time point, an image interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user;
the video data acquisition module is used for acquiring decoded video data corresponding to an image interception area in a video file played at the video interception time point according to the image interception instruction;
the file coding module is used for carrying out file format coding on the obtained decoded video data according to the image intercepting instruction and generating a video screenshot intercepted from the video file;
and the video screenshot output module is used for outputting the video screenshot according to the target use.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the present invention, when a user sends an image capturing command through a current playing terminal, the user first receives an image capturing instruction, where the image capturing instruction may include: the method comprises the steps of intercepting a video time point, an image intercepting area defined by a user and target purposes, obtaining decoded video data corresponding to the image intercepting area in the video file which is currently played when the playing time reaches the video intercepting time point after a playing interface in the terminal starts playing the video file, and then carrying out file format coding on the obtained decoded video data according to an image intercepting command, so that a video screenshot obtained by intercepting from the video file can be generated. In the embodiment of the invention, the video screenshot needing to be intercepted is obtained by acquiring the decoded video data corresponding to the image intercepting area in the video file being played and then carrying out the file format coding on the decoded video data, rather than obtaining the video screenshot by grabbing the video image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a video image capturing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an acquisition mode of an image capture area in the embodiment of the present invention;
FIG. 3 is a schematic diagram of an intercepting process of a video image according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process for decoding video data according to an embodiment of the present invention;
fig. 5-a is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 5-b is a schematic structural diagram of another terminal according to an embodiment of the present invention;
fig. 5-c is a schematic view of a composition structure of another terminal according to an embodiment of the present invention;
fig. 5-d is a schematic diagram of a composition structure of another terminal according to an embodiment of the present invention;
fig. 5-e is a schematic structural diagram of another terminal according to an embodiment of the present invention;
fig. 5-f is a schematic structural diagram of another terminal according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a composition of a terminal to which the video image capturing method according to the embodiment of the present invention is applied.
Detailed Description
The embodiment of the invention provides a video image capturing method and device, which are used for improving the video image capturing processing efficiency.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one skilled in the art from the embodiments given herein are intended to be within the scope of the invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The following are detailed below.
Referring to fig. 1, an embodiment of the video image capturing method according to the present invention may be specifically applied to a scene where a video image needs to be captured in a terminal for playing a video, and the video image capturing method according to the embodiment of the present invention may include the following steps:
101. Receiving an image interception instruction sent by a user through the current playing terminal.
Wherein the image interception instruction comprises: a video interception time point, an image interception area defined by the user in the playing interface of the current playing terminal, and a target use selected by the user.
In the embodiment of the present invention, when a user plays a video on a terminal and sees a picture of particular interest, the user may operate a video capture button on the terminal to trigger the terminal to perform video image interception. For example, a capture-image button is displayed on the touch screen of the terminal; when the user needs to capture an image from the video, the user taps the capture-image button and thereby sends an image interception instruction to the terminal. The image interception instruction carries the video interception time point required by the user. Without limitation, in the embodiment of the present invention the user may directly specify a single video image to be intercepted, or several consecutive or non-consecutive video images, and then send an image interception instruction containing one or more video interception time points; alternatively, the user may send a plurality of image interception instructions, each containing one video interception time point. In this way the terminal can determine from which time point to intercept the video, and whether a single video image or a plurality of video images need to be intercepted.
In addition, in the embodiment of the present invention, when the user needs to intercept only part of the picture of the playing interface of the current playing terminal rather than the video picture of the whole playing interface, the user may define an image interception area in the playing interface of the current playing terminal; pictures outside the image interception area are then not intercepted, and the image interception area defined by the user in the playing interface is carried in the image interception instruction. In addition, the user can select a target use through the image interception instruction to instruct the terminal to intercept the video image and output it according to the specific target use. For example, the user may archive the intercepted video image, or share it to a QQ space or to WeChat after archiving. The target use indicates the specific purpose of the video image that the user needs to output, so that the video screenshot obtained by the interception can meet the user's requirement on the target use.
In some embodiments of the present invention, in addition to the video interception time point, the image interception area defined by the user in the playing interface of the current playing terminal, and the target use selected by the user, the image interception instruction sent to the terminal by the user may include other information that the user needs to indicate to the terminal. For example, the user may indicate to the terminal what image parameter requirements the output video screenshot should meet; that is, in the present invention, the video screenshot may further be output according to the image parameters required by the user, so that more of the user's requirements on the intercepted video screenshot can be satisfied.
Specifically, in some embodiments of the present invention, the image interception instruction may further include a target file format selected by the user; that is, the user may instruct the terminal to output a video screenshot whose file format is the target file format. Here the file format refers to the format of the file of the video image itself, for example jpg, png, bmp, and the like. The target file format indicates the specific file format that the user needs, and the video screenshot obtained by interception in the present invention can then meet the user's requirement on the target file format.
In some embodiments of the present invention, the image interception instruction may further include a target resolution selected by the user; that is, the user may instruct the terminal to output a video screenshot whose resolution is the target resolution. Here the resolution describes how much image information the video image contains, and the width and height are usually stepped in multiples of 16, i.e. 16 × n (n = 1, 2, 3, ...), for example 176 × 144 or 352 × 288. The target resolution indicates the specific resolution that the user needs, and the video screenshot obtained by interception in the present invention can then meet the user's requirement on the target resolution.
In some embodiments of the present invention, the image interception instruction may further include a target image format selected by the user; that is, the user may instruct the terminal to output a video screenshot whose image format is the target image format. Here the image format refers to the image content encoding format of the video image, for example jpg, png, bmp, and the like. The target image format indicates the specific image format that the user needs, and the video screenshot obtained by interception in the present invention can then meet the user's requirement on the target image format.
In some embodiments of the present invention, the image interception instruction may further include a target image quality selected by the user; that is, the user may instruct the terminal to output a video screenshot whose image quality is the target image quality. Here the image quality refers to the video transmission level requirement of the video image and can represent the complexity of the image format; for example, the image quality may be divided into 3 or 5 levels, and the user may select, say, level III as the required target image quality. The target image quality indicates the specific image quality level that the user needs, and the video screenshot obtained by interception in the present invention can then meet the user's requirement on the target image quality.
In some embodiments of the present invention, the image interception instruction may further include the target use selected by the user; that is, the user may instruct the terminal to output the video screenshot for a specific use. Here the target use refers to the output path of the intercepted video image: for example, the video screenshot may be archived only, or shared after archiving. The target use indicates the specific purpose of the video screenshot that the user needs to output, and the video screenshot obtained by interception in the present invention can then meet the user's requirement on the target use.
It should be noted that the foregoing describes in detail the various image parameters that may be included in the image interception instruction received by the terminal. It is to be understood that the image interception instruction in the present invention may include one or more of the image parameters described above; which parameter or parameters the user selects may be determined according to the specific application scenario.
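As a concrete illustration of the parameters discussed above, the following sketch shows one way the image interception instruction could be represented in code. It is not taken from the patent; the class and field names (ImageInterceptionInstruction, interception_time_points, and so on) are hypothetical, and Python is used only for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ImageInterceptionInstruction:
    """Hypothetical container for the parameters an image interception instruction may carry."""
    # One or more video interception time points, in seconds of playback time.
    interception_time_points: List[float]
    # Image interception area defined by the user in the playing interface,
    # as (left, top, width, height) in playing-interface pixel coordinates.
    interception_area: Tuple[int, int, int, int]
    # Target use selected by the user, e.g. "archive" or "share".
    target_use: str = "archive"
    # Optional image parameters the user may also select.
    target_file_format: Optional[str] = None              # e.g. "jpg", "png", "bmp"
    target_resolution: Optional[Tuple[int, int]] = None   # e.g. (352, 288)
    target_image_format: Optional[str] = None              # image content encoding format
    target_image_quality: Optional[int] = None              # e.g. a level from 1 to 3 or 1 to 5
    picture_count: int = 1                                   # single screenshot or several
```

Such a structure would be populated from the user's menu selections and then passed along the interception pipeline described below.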
102. Acquiring decoded video data corresponding to the image interception area in the video file played at the video interception time point according to the image interception instruction.
In the embodiment of the invention, after the terminal receives an image interception instruction comprising a video interception time point, the terminal monitors the playing progress of the video file currently played on its playing screen. When the playing time reaches the video interception time point, the terminal obtains the decoded video data corresponding to the image interception area in the video file currently being played; the decoded video data obtained by the terminal thus corresponds to the video file as played at the video interception time point.
The video playing process is a process of decoding a video file into raw data and then displaying that raw data. Using the video interception time point as a marker, the video file currently being played is obtained; the video file is decoded into decoded video data by a software decoder or a hardware decoder, and the corresponding decoded video data can be found for the currently played video file according to the correspondence between the data before and after decoding. The decoded video data is usually in a raw data format consisting of the three components Y (luminance), U (chrominance) and V (chrominance); this format is widely used in the field of video compression, and the decoded video data may be, for example, YUV420. For example, if the time axis of the playing time shows that the video file has been playing for 4 minutes and 20 seconds, and the video interception time point carried in the image interception instruction received by the terminal is 4 minutes and 22 seconds, then when the current playing time reaches 4 minutes and 22 seconds, the decoded video data corresponding to the image interception area in the video file being played at that moment is obtained.
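The following minimal sketch illustrates the two facts just described: a YUV420 frame stores a full-resolution Y plane plus quarter-resolution U and V planes (so a width × height frame occupies width × height × 3/2 bytes), and the decoded frame for the screenshot can be selected by matching its presentation time against the video interception time point. The helper names and the (pts_seconds, yuv_bytes) frame representation are assumptions, not part of the patent.

```python
def yuv420_frame_size(width: int, height: int) -> int:
    """A YUV420 frame holds a full-resolution Y plane plus quarter-resolution
    U and V planes, i.e. width * height * 3/2 bytes in total."""
    return width * height * 3 // 2

def pick_frame_at(frames, interception_time_point):
    """Pick the decoded frame whose presentation timestamp is closest to the
    requested interception time point; frames is a list of (pts_seconds, yuv_bytes)."""
    return min(frames, key=lambda f: abs(f[0] - interception_time_point))

# Example: a 352x288 YUV420 frame occupies 152064 bytes.
assert yuv420_frame_size(352, 288) == 352 * 288 * 3 // 2
```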
In the embodiment of the present invention, after receiving an image capture instruction including a video capture time point, a terminal acquires decoded video data at the video capture time point, and it can be understood that, in the present invention, if the image capture instruction includes a plurality of video capture time points, the terminal may acquire a plurality of corresponding decoded video data according to the plurality of video capture time points.
In some embodiments of the present invention, step 102 of obtaining, according to the image interception instruction, the decoded video data corresponding to the image interception area in the video file played at the video interception time point may further include the following steps:
a1, calculating the offset position between the image capture area and the playing interface of the current playing terminal;
a2, determining the coordinate mapping relation between the image capture area and the video image in the video file played at the video capture time point according to the calculated offset position;
and A3, reading the decoded video data corresponding to the image intercepting area from the frame buffer of the current playing terminal according to the coordinate mapping relation.
The terminal obtains an image capturing area defined by the user in the playing interface according to the adjustment condition of the image capturing frame by the user, so that the terminal can determine which part or all of the video pictures in the playing interface the user needs to capture. Referring to fig. 2, which is a schematic diagram illustrating an obtaining manner of an image capture area according to an embodiment of the present invention, in fig. 2, an area a is a full screen area of a terminal, areas B to C are video playing areas, an area B is a playing interface, and an area C is an image capture area defined by a user. Of course, the position and the area size of the area C may be adjusted by the user dragging the image cutout frame.
After the image interception area defined by the user is determined, step A1 is executed: the terminal calculates the offset position between the image interception area and the playing interface of the current playing terminal. That is, both the playing interface of the terminal and the image interception area are rectangular frames, and the offset positions of the four corners of the image interception area relative to the four corners of the playing interface of the current playing terminal are calculated, so that the offset position between the image interception area and the playing interface of the current playing terminal can be determined. As shown in fig. 2, when the video file is played on the display screen, it may be played full screen, as in area A in fig. 2, or in a non-full-screen window, as in area B in fig. 2; any region from area B up to area A may serve as the video playing area. In either case, the user can mark out a rectangular area within the video playing area to be used as the image interception area to be intercepted, and the offset position of the marked-out area relative to the four corners of the video playing area can be calculated according to the pixel position relationship.
After the offset position of the image interception area relative to the video playing interface is obtained, step A2 is executed: according to the calculated offset position, the coordinate mapping relationship between the image interception area and the video image in the video file currently being played is determined. That is, a scaling relationship exists between the video playing interface and the original video image. If the video playing interface has the same size as the original video image, the mapping is one-to-one; if, because of the user's operation of the terminal, the original video image has been enlarged or reduced for display in the current video playing interface, the offset position of the image interception area calculated relative to the video playing interface needs to be remapped to obtain the coordinate mapping relationship between the image interception area and the video image in the video file currently being played. For example, as shown in fig. 2, because the size of the video playing area is not fixed, that is, it is not necessarily equal to the size of the original video image, after the offset position has been calculated, the coordinate mapping relationship of the offset position in the original video image also needs to be calculated.
In some embodiments of the present invention, step A3 reads the decoded video data corresponding to the image interception area from the frame buffer of the current playing terminal according to the coordinate mapping relationship. When a video file is being played on the current playing terminal, the video file has already been decoded into decoded video data by a software or hardware decoder; the terminal reads the decoded video data from the frame buffer and outputs it to the display screen to be displayed as the playing interface. In the present invention, the decoded video data of the video file being played at the video interception time point can therefore be obtained from the decoded video data stored in the frame buffer. After the decoded video data corresponding to the video file being played is obtained, scaling is performed according to the coordinate mapping relationship to obtain the decoded video data corresponding to the image interception area; the decoded video data outside the image interception area in the playing interface is not included in the obtained decoded video data.
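A minimal sketch of steps A1 to A3 follows, assuming the playing interface, the image interception area and the original video resolution are all available as plain pixel rectangles. The function names are hypothetical, only the luma (Y) plane is cropped for brevity, and reading from an actual frame buffer is outside the scope of the sketch.

```python
import numpy as np

def map_region_to_video(region, play_rect, video_size):
    """A1/A2 (sketch): compute the offset of the user-defined interception area
    within the playing interface, then remap it into original-video coordinates."""
    rx, ry, rw, rh = region        # interception area: left, top, width, height (screen px)
    px, py, pw, ph = play_rect     # playing interface: left, top, width, height (screen px)
    vw, vh = video_size            # original video resolution
    sx, sy = vw / pw, vh / ph      # scaling between playing interface and original video
    left   = int((rx - px) * sx)   # offsets of the four corners, remapped
    top    = int((ry - py) * sy)
    right  = int((rx + rw - px) * sx)
    bottom = int((ry + rh - py) * sy)
    return left, top, right, bottom

def crop_luma_plane(y_plane: np.ndarray, mapped_region) -> np.ndarray:
    """A3 (sketch, luma plane only): keep just the decoded pixels that fall inside
    the interception area; the half-resolution U and V planes would be cropped
    with the same coordinates divided by two."""
    left, top, right, bottom = mapped_region
    return y_plane[top:bottom, left:right]
```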
It should be noted that, in some embodiments of the present invention, the terminal may obtain the decoded video data corresponding to the image interception area in the video file played at the video interception time point in another manner: for example, first obtaining the source file corresponding to the video file currently being played, re-decoding the source file to generate decoded video data, and then performing scaling according to the coordinate mapping relationship to obtain the decoded video data corresponding to the image interception area.
In some embodiments of the present invention, if the image interception instruction further includes a target resolution selected by the user, then before step 103 performs file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot intercepted from the video file, the video image interception method provided by the present invention may further include the following steps:
b1, judging whether the original resolution and the target resolution of the video image in the video file corresponding to the acquired decoded video data are the same;
and B2, if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
After the decoded video data is obtained in step 102, if the image interception instruction received by the terminal further includes a target resolution, this indicates that the user wants to specify the resolution of the intercepted video screenshot. The terminal may first obtain the original resolution of the video image from the file header information of the video file; the original resolution of the video image in the video file is the resolution at which the video file played on the display screen of the terminal is displayed. If the user needs to adjust it, a resolution adjustment menu may be displayed on the display screen of the terminal, through which the user specifies the resolution of the intercepted video screenshot (i.e. the target resolution carried in the image interception instruction). After the original resolution of the video image in the video file is obtained, it is determined whether the target resolution is the same as the original resolution. If they are the same, no resolution conversion is needed; if they are different, resolution conversion is needed. Specifically, a third-party library (e.g., ffmpeg) may be called to implement the resolution conversion, so as to obtain decoded video data containing the target resolution. The file format coding performed in the subsequent step 103 is then performed on the obtained decoded video data containing the target resolution; that is, the obtained decoded video data in step 103 is specifically the obtained decoded video data containing the target resolution.
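The following sketch illustrates steps B1 and B2: compare the original resolution with the target resolution and convert only when they differ. The patent suggests a third-party library such as ffmpeg for the conversion; Pillow is used here purely for illustration, and the function name is an assumption.

```python
from PIL import Image

def ensure_target_resolution(image: Image.Image, target_resolution):
    """B1/B2 (sketch): compare the original resolution of the decoded image with the
    target resolution selected by the user and convert only when they differ."""
    if target_resolution is None or image.size == tuple(target_resolution):
        return image                               # B1: resolutions match, no conversion
    return image.resize(tuple(target_resolution))  # B2: convert to the target resolution
```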
In some embodiments of the present invention, if the image interception instruction further includes a target resolution selected by the user, then in the application scenario in which steps A1 to A3 are executed, before step 103 performs file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot intercepted from the video file, the video image interception method provided by the present invention may further include the following steps:
c1, calculating a resolution mapping value by using the coordinate mapping relation and the original resolution of the video image in the video file corresponding to the acquired decoded video data;
c2, judging whether the resolution mapping value is the same as the target resolution or not;
and C3, if the resolution mapping value is not the same as the target resolution, zooming the video image in the video file corresponding to the acquired decoded video data to obtain the zoomed acquired decoded video data.
After the decoded video data is obtained in step 102, if the image interception instruction received by the terminal further includes a target resolution, this indicates that the user wants to specify the resolution of the intercepted video screenshot. The terminal may first obtain the original resolution of the video image from the file header information of the video file; the original resolution of the video image in the video file is the resolution at which the video file played on the display screen of the terminal is displayed. If the user needs to adjust it, a resolution adjustment menu may be displayed on the display screen of the terminal, through which the user specifies the resolution of the intercepted video screenshot (i.e. the target resolution carried in the image interception instruction). In the application scenario in which steps A1 to A3 are executed, the coordinate mapping relationship between the image interception area and the video image in the currently played video file has already been generated; a resolution mapping value is then calculated by combining this coordinate mapping relationship with the original resolution, and it is determined whether the target resolution is the same as the resolution mapping value. If they are the same, the video image in the video file does not need to be scaled; if they are different, the video image needs to be scaled. Specifically, a third-party library (e.g., ffmpeg) may be called to implement the scaling of the video image, so as to obtain the scaled decoded video data. The file format coding performed in the subsequent step 103 is then performed on the scaled decoded video data; that is, the obtained decoded video data in step 103 is specifically the scaled obtained decoded video data.
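A short sketch of steps C1 to C3 follows, reusing the hypothetical map_region_to_video helper from the earlier sketch: the resolution mapping value is simply the size of the interception area once mapped into original-video coordinates, and scaling is needed only when that value differs from the target resolution.

```python
def resolution_mapping_value(mapped_region):
    """C1 (sketch): the resolution covered by the interception area, derived from the
    coordinate mapping relationship (e.g. the output of map_region_to_video)."""
    left, top, right, bottom = mapped_region
    return (right - left, bottom - top)

def needs_scaling(mapped_region, target_resolution) -> bool:
    """C2/C3 (sketch): scale only when the mapping value differs from the target resolution."""
    return resolution_mapping_value(mapped_region) != tuple(target_resolution)
```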
In some embodiments of the present invention, if the image interception instruction further includes a target image format selected by the user, then before step 103 performs file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot intercepted from the video file, the video image interception method provided by the present invention may further include the following steps:
d1, obtaining the original video format of the video file corresponding to the obtained decoded video data;
and D2, converting the video format of the video file corresponding to the acquired decoded video data into a target image format to obtain the acquired decoded video data containing the target image format.
After the decoded video data is obtained in step 102, if the image interception instruction received by the terminal further includes a target image format, this indicates that the user wants to specify the image format of the intercepted video screenshot. The terminal may first obtain the original video format of the video image from the file header information of the video file; the original video format of the video image in the video file is the video format in which the video file played on the display screen of the terminal is played. If the user needs to adjust it, an image format adjustment menu may be displayed on the display screen of the terminal, through which the user specifies the image format of the intercepted video screenshot (i.e. the target image format carried in the image interception instruction). After the original video format of the video image in the video file is obtained, image format conversion is performed on the original video format. Specifically, a third-party library (e.g., ffmpeg) may be called to implement the conversion of the image format, so as to obtain decoded video data containing the target image format. The file format coding performed in the subsequent step 103 is then performed on the obtained decoded video data containing the target image format; that is, the obtained decoded video data in step 103 is specifically the obtained decoded video data containing the target image format.
In some embodiments of the present invention, if the image interception instruction further includes a target image quality selected by the user, then before step 103 performs file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot intercepted from the video file, the video image interception method provided by the present invention may further include the following steps:
e1, obtaining the original video quality of the video file corresponding to the obtained decoded video data;
e2, adjusting the video quality of the video file corresponding to the acquired decoded video data to the target image quality, and obtaining the acquired decoded video data containing the target image quality.
After the decoded video data is obtained in step 102, if the image interception instruction received by the terminal further includes a target image quality, this indicates that the user wants to specify the image quality of the intercepted video screenshot. The terminal may first obtain the original video quality of the video image from the file header information of the video file; the original video quality of the video image in the video file is the video quality at which the video file played on the display screen of the terminal is displayed. If the user needs to adjust it, an image quality adjustment menu may be displayed on the display screen of the terminal, through which the user specifies the image quality of the intercepted video screenshot (i.e. the target image quality carried in the image interception instruction). After the original video quality of the video image in the video file is obtained, the original video quality is adjusted to the target image quality. Specifically, a third-party library (e.g., ffmpeg) may be called to implement the image quality conversion, so as to obtain decoded video data with the target image quality. The file format coding performed in the subsequent step 103 is then performed on the obtained decoded video data with the target image quality; that is, the obtained decoded video data in step 103 is specifically the obtained decoded video data with the target image quality.
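Because the patent describes image quality as a small number of levels (e.g. 3 or 5, such as level III of 3 in the example above) rather than as an encoder parameter, some mapping from level to encoder quality is needed in practice. The sketch below shows one such mapping; the 30–95 JPEG quality range and the linear interpolation are illustrative assumptions, not part of the patent.

```python
def quality_level_to_jpeg_quality(level: int, num_levels: int = 3) -> int:
    """Map the user-selected target image quality level onto an encoder quality value
    in the 1..95 range commonly accepted by JPEG encoders (illustrative assumption)."""
    if num_levels < 2:
        return 95
    level = max(1, min(level, num_levels))
    return int(30 + (level - 1) * (95 - 30) / (num_levels - 1))

# Example: with 3 levels, level 1 -> 30, level 2 -> 62, level 3 -> 95.
```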
103. Carrying out file format coding on the obtained decoded video data according to the image interception instruction, and generating a video screenshot intercepted from the video file.
In the embodiment of the present invention, the decoded video data corresponding to the image interception area in the video file played at the video interception time point is obtained in step 102, and the obtained decoded video data is then packaged into file form; that is, file format coding is performed on the obtained decoded video data, so that the video screenshot that the user needs to intercept can be obtained. The generated video screenshot is thus intercepted from the video file played in the playing interface of the terminal.
In some embodiments of the present invention, if the image capture instruction further includes a target file format selected by the user, the step of performing file format encoding on the obtained decoded video data according to the image capture instruction to generate a video screenshot captured from the video file may specifically include the following steps:
G1, encoding the acquired decoded video data into a video screenshot meeting the target file format by using a file synthesizer.
After the decoded video data is obtained in step 102, if the image interception instruction received by the terminal further includes a target file format, this indicates that the user wants to specify the file format of the intercepted video screenshot. The obtained decoded video data may then be encoded by a file synthesizer into a video screenshot meeting the target file format; specifically, a third-party library (e.g., ffmpeg) may be called to implement the conversion of the file format, so as to obtain a video image meeting the target file format. When the file synthesizer is used, file header information may be carried in the generated video screenshot, and the file header information carries basic characteristic information of the video screenshot, for example attribute information of the video screenshot. In addition, in some embodiments of the present invention, the generated video screenshot may also adopt a default configuration instead of carrying the file header information.
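A minimal sketch of step G1 follows, using Pillow as a stand-in for the file synthesizer: raw RGB pixel data is packaged into a file of the target file format, with the file header written by the library. The function name and parameters are hypothetical.

```python
from PIL import Image

def synthesize_screenshot(rgb_pixels: bytes, size, target_file_format="png",
                          jpeg_quality=95, out_path="screenshot"):
    """G1 (sketch): package decoded, already-converted pixel data into a file of the
    target file format; the library writes the file header (size, format, etc.)."""
    image = Image.frombytes("RGB", size, rgb_pixels)     # raw RGB bytes -> image
    path = f"{out_path}.{target_file_format}"
    if target_file_format.lower() in ("jpg", "jpeg"):
        image.save(path, quality=jpeg_quality)           # quality only applies to JPEG
    else:
        image.save(path)                                 # e.g. png, bmp
    return path
```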
104. Outputting the video screenshot according to the target use selected by the user.
In the embodiment of the present invention, after the video screenshot is intercepted from the video file, the video screenshot may be output to the application corresponding to its intended use according to the user's requirement. For example, the user may archive the intercepted video screenshot, or share it to a QQ space or to WeChat after archiving. The target use indicates the specific purpose of the video image that the user needs to output, so that the video screenshot obtained by interception in the present invention can satisfy the user's requirement for the target use.
In some embodiments of the present invention, after step 103 performs file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot intercepted from the video file, the video image interception method according to the embodiment of the present invention may further include the following steps:
h1, according to the method of intercepting video screenshots from video files, respectively acquiring decoded video data corresponding to image intercepting areas in video files played at a plurality of video intercepting time points according to a plurality of image intercepting instructions sent by a user, carrying out file format coding on the acquired decoded video data according to the plurality of image intercepting instructions, and intercepting a plurality of video screenshots from the video files;
h2, synthesizing the multiple video screenshots according to the sequence of the video interception time points to obtain a video clip intercepted from the video file;
h3, outputting the video clip according to the target purpose.
In step H1, by sending a plurality of image interception instructions (for example, by executing the foregoing steps 101 to 103 a plurality of times), decoded video data corresponding to the image interception areas in the video file played at a plurality of video interception time points can be obtained, and a plurality of video screenshots can be generated. The obtained video screenshots are then synthesized in step H2 according to the order of the video interception time points, so that a video clip consisting of consecutive video screenshots can be obtained, and finally the video clip is output according to the target use selected by the user. As can be seen from steps H1 to H3, the video image interception method provided by the embodiment of the present invention can also be applied to intercepting video clips.
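The sketch below illustrates steps H1 to H3 under the assumption that the individual screenshots have already been generated and saved: they are ordered by their video interception time points and combined into a single animated file. An animated GIF is used only as an illustrative container; the patent does not name a clip format, and the function and parameter names are assumptions.

```python
from PIL import Image

def synthesize_clip(screenshots, out_path="clip.gif", frame_ms=200):
    """H1-H3 (sketch): screenshots is a list of (interception_time_point, file_path) pairs;
    order them by time point and combine them into one animated file for output."""
    ordered = [path for _, path in sorted(screenshots)]          # H2: sort by time point
    frames = [Image.open(p).convert("RGB") for p in ordered]
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=frame_ms, loop=0)                    # write the clip
    return out_path                                              # H3: output per target use
```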
As can be seen from the above description of the embodiment of the present invention, when a user sends an image interception command through the current playing terminal, the terminal first receives the image interception instruction, where the image interception instruction may include: a video interception time point, an image interception area defined by the user, and a target use. After the playing interface of the terminal starts playing a video file, when the playing time reaches the video interception time point, the terminal obtains the decoded video data corresponding to the image interception area in the video file currently being played, and then performs file format coding on the obtained decoded video data according to the image interception command, so that a video screenshot intercepted from the video file can be generated. In the embodiment of the invention, the required video screenshot is obtained by acquiring the decoded video data corresponding to the image interception area in the video file being played and then performing file format coding on that decoded video data, rather than by grabbing the displayed video image, which improves the video image interception processing efficiency.
In order to better understand and implement the above-mentioned schemes of the embodiments of the present invention, the following description specifically illustrates corresponding application scenarios.
Take the case in which the user watches a video using the QQ browser. When the user encounters a favorite video picture, the user can choose to intercept the whole video picture or part of the video picture, make a single picture or multiple pictures, and store the pictures locally or share them with friends. Please refer to fig. 3, which is a schematic diagram of the video image interception process according to the present invention.
S1, calculating the offset position of the image capturing area
When the video file is played on the display screen of the terminal, it may be played full screen, as shown in area A in fig. 2, or in a non-full-screen window, as shown in area B in fig. 2; any region from area B up to area A may serve as the video playing area. Regardless of the area, the user can draw a rectangular area within the video playing area to be used as the image interception area, and the offset position of the drawn area relative to the four corners of the video playing area is calculated first.
S2 coordinate mapping of original video image
Since the size of the video playing area is not fixed, that is, it is not necessarily equal to the size of the original video image, after the offset position has been calculated, the coordinate mapping relationship of the offset position in the original video image also needs to be calculated.
After S1 and S2 are completed, the menu selections P1, P2 and P3 below are performed; a menu is provided on the display screen of the terminal for the user to select from, specifically including:
P1, use selection: determine whether the intercepted video screenshot is only archived or is shared after archiving.
P2, configuration selection: resolution, image format, image quality, file format, video interception time point.
P3, mode selection: determine whether a single video screenshot or multiple video screenshots need to be intercepted.
S3 processing of decoded video data
When the user performs the area-defining operation of S1, processing starts from the current time point by default. The process of video playing is a process of decoding a video file into raw data and then displaying it, and the raw data is usually in YUV420 format. Because the video screenshot is synthesized from this raw data, the step of decoding the source file again can be omitted, which saves the processor resources of the terminal and saves the battery power of the terminal.
As shown in fig. 4, a schematic view of a processing flow of decoded video data according to an embodiment of the present invention is provided, where the process may specifically include the following steps:
Step m1, acquiring from the image interception instruction the information selected by the user, such as the target resolution, target image format, target image quality, target file format, video interception time point and number of pictures.
Step m2, opening the encoder according to the image format to be encoded.
Step m3, opening the file synthesizer according to the file format and generating the file header information, where the file header information may be empty.
Step m4, obtaining the decoded video data corresponding to the video interception time point from the decoding link of the current playing process.
Step m5, color conversion: the decoded video data uses YUV color and can be converted into RGB color as required.
Step m6, determining whether scaling is needed according to the information obtained in step m1. For example, the user defines an image interception area; this area is compared with the current player range to obtain a proportional relationship, a size is calculated by combining this proportional relationship with the original resolution, and if this size differs from the target resolution, scaling is performed so that the resolution of the output video screenshot meets the requirement; otherwise, no scaling is required.
Step m7, calling the encoder to encode the video data according to the target image format.
Step m8, calling the file synthesizer to encode the encoded video data according to the target file format and generate the video screenshot.
It should be noted that, in the present invention, the processing of the decoded video data proceeds in parallel with the playing of the video file, and if a plurality of video screenshots are to be synthesized, the above steps are repeated.
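To tie steps m4 to m8 together, the following sketch converts a decoded YUV420 frame to RGB (step m5), crops it to the mapped interception area, scales it if the size differs from the target resolution (step m6), and encodes it to the target file format (steps m7/m8). The planar I420 layout, the full-range BT.601 coefficients, and the use of numpy/Pillow in place of a real decoder, encoder and file synthesizer are all illustrative assumptions.

```python
import numpy as np
from PIL import Image

def yuv420_to_rgb(yuv: bytes, width: int, height: int) -> Image.Image:
    """Step m5 (sketch): convert a planar YUV420 (I420) frame to RGB using
    full-range BT.601 coefficients."""
    y_size = width * height
    y = np.frombuffer(yuv, np.uint8, y_size).reshape(height, width).astype(np.float32)
    u = np.frombuffer(yuv, np.uint8, y_size // 4, y_size).reshape(height // 2, width // 2)
    v = np.frombuffer(yuv, np.uint8, y_size // 4, y_size + y_size // 4).reshape(height // 2, width // 2)
    u = u.repeat(2, 0).repeat(2, 1).astype(np.float32) - 128.0   # upsample chroma planes
    v = v.repeat(2, 0).repeat(2, 1).astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    rgb = np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
    return Image.fromarray(rgb, "RGB")

def process_screenshot(yuv_frame, video_size, mapped_region, target_resolution,
                       target_file_format, out_path, jpeg_quality=95):
    """Steps m4-m8 (sketch): crop the decoded frame to the interception area, scale to
    the target resolution if needed, then encode to the target file format."""
    image = yuv420_to_rgb(yuv_frame, *video_size)          # m5: color conversion
    image = image.crop(mapped_region)                      # keep only the interception area
    if target_resolution and image.size != tuple(target_resolution):
        image = image.resize(tuple(target_resolution))     # m6: scaling
    kwargs = {"quality": jpeg_quality} if target_file_format in ("jpg", "jpeg") else {}
    path = f"{out_path}.{target_file_format}"
    image.save(path, **kwargs)                             # m7/m8: encode and write the file
    return path
```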
S4, output of video screenshot
When the video screenshots have been synthesized, the user is prompted that the operation succeeded. According to the selection made in P1, if archiving is chosen, a third-party application is called to open the image folder; if sharing is chosen, a third-party application is called to perform the sharing, for example, but not limited to, via WeChat or QQ.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 5-a, an apparatus 500 for capturing a video image according to an embodiment of the present invention includes: a receiving module 501, a video data acquiring module 502, a file encoding module 503, and a video screenshot outputting module 504, wherein,
a receiving module 501, configured to receive an image interception instruction sent by a user through a current playing terminal, where the image interception instruction includes: a video interception time point, an image interception area defined by the user in a playing interface of the current playing terminal, and a target use selected by the user;
a video data obtaining module 502, configured to obtain, according to the image capture instruction, decoded video data corresponding to an image capture area in a video file played at the video capture time point;
the file encoding module 503 is configured to perform file format encoding on the obtained decoded video data according to the image capture instruction, and generate a video screenshot captured from the video file;
a video screenshot output module 504, configured to output the video screenshot according to the target use.
In some embodiments of the present invention, referring to fig. 5-b, the video data obtaining module 502 comprises:
a position calculating unit 5021, configured to calculate an offset position between the image capturing area and a playing interface of the current playing terminal;
a mapping relation determining unit 5022, configured to determine a coordinate mapping relation between the image capturing area and a video image in a video file played at the captured video time point according to the calculated offset position;
and a video data reading unit 5023, configured to read the decoded video data corresponding to the image capture area from the frame buffer of the current playing terminal according to the coordinate mapping relation (see the sketch below).
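The arithmetic performed by units 5021 to 5023 can be illustrated as follows, under the assumption that the playing interface simply displays the decoded frame scaled to fit the player window; all variable and function names are illustrative.

    def map_capture_region(region, player_size, frame_size):
        """region: (x, y, w, h) of the capture area in playing-interface coordinates."""
        rx, ry, rw, rh = region
        pw, ph = player_size   # size of the playing interface (player window)
        fw, fh = frame_size    # size of the decoded video frame

        # Unit 5021: the offset position of the capture area relative to the
        # playing interface is (rx, ry).
        sx, sy = fw / pw, fh / ph  # scaling relation: interface -> original frame

        # Unit 5022: coordinate mapping relation between the capture area and the
        # video image, obtained from the offset and the scaling relation.
        fx, fy = int(rx * sx), int(ry * sy)
        roi_w, roi_h = int(rw * sx), int(rh * sy)
        return fx, fy, roi_w, roi_h

    def read_from_frame_buffer(frame, mapped_region):
        # Unit 5023: crop the decoded frame held in the frame buffer
        # (represented here as a NumPy array) according to the mapping.
        fx, fy, w, h = mapped_region
        return frame[fy:fy + h, fx:fx + w]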
In some embodiments of the present invention, if the image capture instruction further includes a target file format selected by the user, the file encoding module 503 is configured to encode the obtained decoded video data into a video screenshot meeting the target file format by using a file synthesizer.
In some embodiments of the present invention, referring to fig. 5-c, if the image capture instruction further includes a target resolution selected by the user, the apparatus 500 for capturing a video image further includes: a resolution coordination module 505, configured to: before the file encoding module 503 performs file format encoding on the obtained decoded video data according to the image capture instruction and generates a video screenshot captured from the video file, determine whether the original resolution of the video image in the video file corresponding to the obtained decoded video data is the same as the target resolution; and if the original resolution is different from the target resolution, convert the resolution of the video image in the video file corresponding to the obtained decoded video data, to obtain decoded video data having the target resolution.
In some embodiments of the present invention, if the image capture instruction further includes a target resolution selected by the user, the apparatus for capturing a video image further includes: a resolution coordination module 505, configured to: before the file encoding module 503 performs file format encoding on the obtained decoded video data according to the image capture instruction and generates a video screenshot captured from the video file, calculate a resolution mapping value by using the coordinate mapping relation and the original resolution of the video image in the video file corresponding to the obtained decoded video data; determine whether the resolution mapping value is the same as the target resolution; and if the resolution mapping value is different from the target resolution, scale the video image in the video file corresponding to the obtained decoded video data, to obtain the scaled decoded video data.
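A minimal sketch of this second resolution-coordination variant follows, assuming the coordinate mapping relation yields the capture area's size in source pixels (the resolution mapping value); the helper name and the use of OpenCV for scaling are assumptions.

    import cv2

    def coordinate_resolution(cropped_frame, target_resolution):
        """cropped_frame: decoded pixels of the capture area; target_resolution: (w, h)."""
        h, w = cropped_frame.shape[:2]
        resolution_mapping_value = (w, h)  # size implied by the coordinate mapping
        if resolution_mapping_value == tuple(target_resolution):
            return cropped_frame           # identical: no scaling required
        # Different: scale so that the output screenshot has the target resolution.
        return cv2.resize(cropped_frame, tuple(target_resolution),
                          interpolation=cv2.INTER_AREA)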
In some embodiments of the present invention, referring to fig. 5-d, if the image capture instruction further includes a target image format selected by the user, the apparatus 500 for capturing a video image further includes: an image format coordination module 506, configured to: before the file encoding module 503 performs file format encoding on the obtained decoded video data according to the image capture instruction and generates a video screenshot captured from the video file, obtain the original video format of the video file corresponding to the obtained decoded video data; and convert the video format of the video file corresponding to the obtained decoded video data into the target image format, to obtain decoded video data conforming to the target image format.
In some embodiments of the present invention, referring to fig. 5-e, if the image capture instruction further includes a target image quality selected by the user, the apparatus 500 for capturing a video image further includes: an image quality coordination module 507, configured to: before the file encoding module 503 performs file format encoding on the obtained decoded video data according to the image capture instruction and generates a video screenshot captured from the video file, obtain the original video quality of the video file corresponding to the obtained decoded video data; and adjust the video quality of the video file corresponding to the obtained decoded video data to the target image quality, to obtain decoded video data having the target image quality.
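The image format coordination and image quality coordination of the two preceding embodiments can be illustrated together; the sketch below assumes the target image format and target image quality arrive as plain parameters and uses OpenCV's still-image encoders purely as an example.

    import cv2

    def encode_with_format_and_quality(frame_bgr, target_format="jpg", target_quality=85):
        fmt = target_format.lower()
        if fmt in ("jpg", "jpeg"):
            # Image quality coordination: JPEG exposes a 0-100 quality parameter.
            params, ext = [cv2.IMWRITE_JPEG_QUALITY, int(target_quality)], ".jpg"
        elif fmt == "png":
            # PNG is lossless; "quality" maps to a 0-9 compression level instead.
            params, ext = [cv2.IMWRITE_PNG_COMPRESSION, 3], ".png"
        else:
            params, ext = [], "." + fmt
        ok, encoded = cv2.imencode(ext, frame_bgr, params)
        if not ok:
            raise RuntimeError("encoding to %s failed" % ext)
        return encoded.tobytes()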
In some embodiments of the present invention, referring to fig. 5-f, if the image capture instruction further includes a target use selected by the user, the apparatus 500 for capturing a video image further includes: an image merging module 508, wherein,
the video data obtaining module 502 is further configured to obtain, in the manner of capturing a video screenshot from the video file described above and according to a plurality of image capture instructions sent by the user, the decoded video data corresponding to the image capture areas in the video file played at a plurality of video capture time points;
the file encoding module 503 is further configured to perform file format encoding on the obtained decoded video data according to the plurality of image capture instructions, and capture a plurality of video screenshots from the video file;
the image merging module 508 is configured to synthesize the plurality of video screenshots according to the sequence of the video capture time points, to obtain a video clip captured from the video file (see the sketch following this embodiment);
the video screenshot output module 504 is further configured to output the video clip according to the target purpose.
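A minimal sketch of the image merging module 508, assuming the screenshots are held as (capture time point, encoded image bytes) pairs and the resulting clip is written as an animated GIF via Pillow; these representation choices are illustrative and not prescribed by the patent.

    import io
    from PIL import Image

    def merge_screenshots(screenshots, out_path="clip.gif", frame_ms=500):
        """screenshots: list of (video capture time point, encoded image bytes)."""
        # Order the screenshots by their video capture time points.
        ordered = sorted(screenshots, key=lambda item: item[0])
        frames = [Image.open(io.BytesIO(data)).convert("RGB") for _, data in ordered]
        # Synthesize the ordered frames into a single clip.
        frames[0].save(out_path, save_all=True, append_images=frames[1:],
                       duration=frame_ms, loop=0)
        return out_path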
As can be seen from the above description of the embodiment of the present invention, when a user sends an image capture instruction through the current playing terminal, the instruction is first received, where the image capture instruction may include: a video capture time point, an image capture area defined by the user, and a target use. After the playing interface of the terminal starts playing the video file and the playing time reaches the video capture time point, the decoded video data corresponding to the image capture area in the video file currently being played is obtained, and file format encoding is then performed on the obtained decoded video data according to the image capture instruction, so that a video screenshot captured from the video file can be generated. In the embodiment of the invention, the required video screenshot is obtained by acquiring the decoded video data corresponding to the image capture area in the video file being played and then performing file format encoding on that decoded video data, rather than by grabbing the displayed video image.
As shown in fig. 6, for convenience of description, only the parts related to the embodiment of the present invention are shown; for technical details that are not disclosed, please refer to the method part of the embodiment of the present invention. The terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example:
fig. 6 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present invention. Referring to fig. 6, the handset includes: radio Frequency (RF) circuit 610, memory 620, input unit 630, display unit 640, sensor 650, audio circuit 660, wireless fidelity (WiFi) module 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the handset configuration shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 6:
the RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and delivers it to the processor 680 for processing, and transmits uplink data to the base station. In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 620 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations of a user (e.g., operations of the user on the touch panel 631 or near the touch panel 631 by using any suitable object or accessory such as a finger or a stylus) thereon or nearby, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 631 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 680, and can receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 630 may include other input devices 632 in addition to the touch panel 631. In particular, other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by the user or information provided to the user, as well as various menus of the mobile phone. The display unit 640 may include a display panel 641, and optionally, the display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 631 can cover the display panel 641; when the touch panel 631 detects a touch operation on or near it, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and the processor 680 then provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 6 the touch panel 631 and the display panel 641 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 650, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 641 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 641 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 660, the speaker 661, and the microphone 662 can provide an audio interface between the user and the mobile phone. The audio circuit 660 may transmit the electrical signal converted from the received audio data to the speaker 661, which converts it into a sound signal for output; on the other hand, the microphone 662 converts the collected sound signal into an electrical signal, which is received by the audio circuit 660 and converted into audio data; the audio data is then processed by the processor 680 and transmitted via the RF circuit 610 to, for example, another mobile phone, or output to the memory 620 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 670, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access for the user. Although fig. 6 shows the WiFi module 670, it is understood that it is not an essential component of the mobile phone and may be omitted as needed without departing from the essence of the invention.
The processor 680 is a control center of the mobile phone, and connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby performing overall monitoring of the mobile phone. Optionally, processor 680 may include one or more processing units; preferably, the processor 680 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.
The handset also includes a power supply 690 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 680 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present invention, the processor 680 included in the terminal is further configured to control and execute the above video image capturing method performed by the terminal.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus necessary general hardware, and may also be implemented by special hardware including special integrated circuits, special CPUs, special memories, special components and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and specific hardware structures for implementing the same functions may be various, such as analog circuits, digital circuits, or dedicated circuits. However, the implementation of a software program is a more preferable embodiment for the present invention. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk of a computer, and includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
In summary, the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the above embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the above embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A method for capturing a video image, comprising:
receiving an image interception instruction sent by a user through a current playing terminal, wherein the image interception instruction comprises: intercepting a video time point, an image intercepting area defined by the user in a playing interface of the current playing terminal and a target use selected by the user; a zooming relation exists between the playing interface and the original video image;
calculating the offset position between the image interception area and the playing interface of the current playing terminal;
determining a coordinate mapping relation between the image capturing area and a video image in a video file played at the video capturing time point according to the calculated offset position and the scaling relation between the playing interface and the original video image;
carrying out scale transformation according to the coordinate mapping relation, and reading decoded video data corresponding to the image intercepting area from a frame cache of the current playing terminal;
carrying out file format coding on the obtained decoded video data according to the image intercepting instruction, and generating a video screenshot obtained by intercepting the video file;
and outputting the video screenshot according to the target purpose.
2. The method according to claim 1, wherein if the image capture instruction further includes a target file format selected by a user, the performing file format encoding on the obtained decoded video data according to the image capture instruction to generate a video screenshot captured from the video file includes:
and encoding the obtained decoded video data into a video screenshot meeting the target file format by using a file synthesizer.
3. The method according to any one of claims 1 to 2, wherein if the image capture instruction further includes a target resolution selected by a user, the method further includes, before performing file format encoding on the obtained decoded video data according to the image capture instruction and generating a video screenshot captured from the video file:
judging whether the original resolution of the video image in the video file corresponding to the obtained decoded video data is the same as the target resolution or not;
and if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
4. The method according to claim 1, wherein if the image capture instruction further includes a target resolution selected by a user, before performing file format encoding on the obtained decoded video data according to the image capture instruction and generating a video screenshot captured from the video file, the method further includes:
calculating a resolution mapping value by using the coordinate mapping relation and the original resolution of the video image in the video file corresponding to the acquired decoded video data;
judging whether the resolution mapping value is the same as the target resolution or not;
if the resolution mapping value is different from the target resolution, zooming the video image in the video file corresponding to the obtained decoded video data to obtain the zoomed obtained decoded video data.
5. The method according to any one of claims 1 to 2, wherein if the image capture instruction further includes a target image format selected by a user, the method further includes, before performing file format encoding on the obtained decoded video data according to the image capture instruction and generating a video screenshot captured from the video file:
obtaining an original video format of a video file corresponding to the obtained decoded video data;
and converting the video format of the video file corresponding to the obtained decoded video data into a target image format to obtain the obtained decoded video data containing the target image format.
6. The method according to any one of claims 1 to 2, wherein if the image capture instruction further includes a target image quality selected by a user, the method further includes, before performing file format encoding on the obtained decoded video data according to the image capture instruction and generating a video screenshot captured from the video file:
obtaining the original video quality of a video file corresponding to the obtained decoded video data;
and adjusting the video quality of the video file corresponding to the obtained decoded video data to the target image quality to obtain the obtained decoded video data containing the target image quality.
7. The method according to claim 1, wherein after the file format encoding is performed on the obtained decoded video data according to the image capture instruction and a video screenshot captured from the video file is generated, the method further comprises:
according to the method for intercepting the video screenshots from the video file, the decoded video data corresponding to the image intercepting areas in the video file played at a plurality of video intercepting time points are respectively obtained according to a plurality of image intercepting instructions sent by a user, the file format coding is carried out on the obtained decoded video data according to the plurality of image intercepting instructions, and a plurality of video screenshots are intercepted from the video file;
synthesizing the plurality of video screenshots according to the sequence of the video interception time points to obtain a video clip intercepted from the video file;
outputting the video clip according to the target use.
8. An apparatus for capturing a video image, comprising:
the receiving module is used for receiving an image intercepting instruction sent by a user through a current playing terminal, and the image intercepting instruction comprises: intercepting a video time point, an image intercepting area defined by the user in a playing interface of the current playing terminal and a target use selected by the user; a zooming relation exists between the playing interface and the original video image;
the video data acquisition module is used for calculating the offset position between the image interception area and the playing interface of the current playing terminal;
determining a coordinate mapping relation between the image capturing area and a video image in a video file played at the video capturing time point according to the calculated offset position and the scaling relation between the playing interface and the original video image;
carrying out scale transformation according to the coordinate mapping relation, and reading decoded video data corresponding to the image intercepting area from a frame cache of the current playing terminal;
the file coding module is used for carrying out file format coding on the obtained decoded video data according to the image intercepting instruction and generating a video screenshot intercepted from the video file;
and the video screenshot output module is used for outputting the video screenshot according to the target purpose.
9. The apparatus of claim 8, wherein if the image capture instruction further includes a target file format selected by a user, the file encoding module is configured to encode the captured decoded video data into a video screenshot satisfying the target file format using a file compositor.
10. The apparatus of any one of claims 8 to 9, wherein if the image capture instruction further comprises a target resolution selected by a user, the apparatus further comprises: the resolution coordination module is used for judging whether the original resolution of a video image in a video file corresponding to the obtained decoded video data is the same as the target resolution or not before the file coding module carries out file format coding on the obtained decoded video data according to the image interception instruction and generates a video screenshot obtained from the video file; and if the original resolution is different from the target resolution, converting the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain the acquired decoded video data containing the target resolution.
11. The apparatus of claim 9, wherein if the image capture instruction further includes a target resolution selected by the user, the apparatus further comprises: the resolution coordination module is used for calculating a resolution mapping value by using the coordinate mapping relation and the original resolution of the video image in the video file corresponding to the obtained decoded video data before the file coding module carries out file format coding on the obtained decoded video data according to the image intercepting instruction and generates a video screenshot obtained from the video file; judging whether the resolution mapping value is the same as the target resolution or not; if the resolution mapping value is different from the target resolution, zooming the video image in the video file corresponding to the obtained decoded video data to obtain the zoomed obtained decoded video data.
12. The apparatus according to any one of claims 8 to 9, wherein if the image capture instruction further comprises a target image format selected by a user, the apparatus further comprises: the image format coordination module is used for carrying out file format coding on the obtained decoded video data by the file coding module according to the image intercepting instruction, and obtaining an original video format of a video file corresponding to the obtained decoded video data before generating a video screenshot obtained by intercepting the video file; and converting the video format of the video file corresponding to the obtained decoded video data into a target image format to obtain the obtained decoded video data containing the target image format.
13. The apparatus according to any one of claims 8 to 9, wherein if the image capturing instruction further includes a target image quality selected by a user, the apparatus further comprises: the image quality coordination module is used for carrying out file format coding on the obtained decoded video data by the file coding module according to the image intercepting instruction, and obtaining the original video quality of the video file corresponding to the obtained decoded video data before generating a video screenshot obtained by intercepting the video file; and adjusting the video quality of the video file corresponding to the obtained decoded video data to the target image quality to obtain the obtained decoded video data containing the target image quality.
14. The apparatus of claim 8, further comprising: an image merging module, wherein,
the video data acquisition module is also used for respectively acquiring decoded video data corresponding to image interception areas in video files played at a plurality of video interception time points according to a method of intercepting video screenshots from the video files and a plurality of image interception instructions sent by a user;
the file coding module is also used for carrying out file format coding on the obtained decoded video data according to a plurality of image intercepting instructions and intercepting a plurality of video screenshots from the video file
The image merging module is used for synthesizing the plurality of video screenshots according to the sequence of the video interception time points to obtain a video clip intercepted from the video file;
the video screenshot output module is further used for outputting the video clip according to the target purpose.
15. A computer-readable storage medium in which a software program and a module are stored; the software program and the module when executed implement the method of intercepting a video image of any of claims 1 to 7.
CN201510446684.2A 2015-07-27 2015-07-27 Video image intercepting method and device Active CN106412691B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201510446684.2A CN106412691B (en) 2015-07-27 2015-07-27 Video image intercepting method and device
MYPI2017704144A MY190923A (en) 2015-07-27 2016-06-16 Video sharing method and device, and video playing method and device
PCT/CN2016/085994 WO2017016339A1 (en) 2015-07-27 2016-06-16 Video sharing method and device, and video playing method and device
US15/729,439 US10638166B2 (en) 2015-07-27 2017-10-10 Video sharing method and device, and video playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510446684.2A CN106412691B (en) 2015-07-27 2015-07-27 Video image intercepting method and device

Publications (2)

Publication Number Publication Date
CN106412691A CN106412691A (en) 2017-02-15
CN106412691B true CN106412691B (en) 2020-04-07

Family

ID=58008536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510446684.2A Active CN106412691B (en) 2015-07-27 2015-07-27 Video image intercepting method and device

Country Status (1)

Country Link
CN (1) CN106412691B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910339A (en) * 2017-03-31 2017-06-30 山东高速信息工程有限公司 Road information provides method, device and processing terminal
CN107426400A (en) * 2017-04-27 2017-12-01 福建中金在线信息科技有限公司 A kind of terminal plays image-capture method and system
CN107229402B (en) * 2017-05-22 2021-08-10 努比亚技术有限公司 Dynamic screen capturing method and device of terminal and readable storage medium
CN107240144B (en) * 2017-06-08 2021-03-30 腾讯科技(深圳)有限公司 Animation synthesis method and device
CN107438175B (en) * 2017-08-14 2019-08-30 武汉微创光电股份有限公司 For the rapid screenshot method and system of extensive video
CN110020019A (en) * 2017-12-26 2019-07-16 上海创远仪器技术股份有限公司 A kind of storage of the frequency spectrum data based on bitmap shows system and method
CN108388461A (en) * 2018-02-27 2018-08-10 山东超越数控电子股份有限公司 A kind of screen picture intercept method and device for firmware
CN110876069A (en) * 2018-08-31 2020-03-10 广州虎牙信息科技有限公司 Method, device and equipment for acquiring video screenshot and storage medium
CN110191369A (en) * 2019-06-06 2019-08-30 广州酷狗计算机科技有限公司 Image interception method, apparatus, equipment and storage medium
CN112312199B (en) * 2019-07-31 2022-07-29 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN110572706B (en) * 2019-09-29 2021-05-11 深圳传音控股股份有限公司 Video screenshot method, terminal and computer-readable storage medium
CN110839181A (en) * 2019-12-04 2020-02-25 湖南快乐阳光互动娱乐传媒有限公司 Method and system for converting video content into gif based on B/S architecture
CN112423060B (en) * 2020-10-26 2023-09-19 深圳Tcl新技术有限公司 Video image interception method, system, device, terminal equipment and storage medium
CN112822544B (en) * 2020-12-31 2023-10-20 广州酷狗计算机科技有限公司 Video material file generation method, video synthesis method, device and medium
CN113038218B (en) * 2021-03-19 2022-06-10 厦门理工学院 Video screenshot method, device, equipment and readable storage medium
CN114286136B (en) * 2021-12-28 2024-05-31 咪咕文化科技有限公司 Video playing encoding method, device, equipment and computer readable storage medium
CN114173203A (en) * 2022-01-05 2022-03-11 统信软件技术有限公司 Method and device for capturing image in video playing and computing equipment
CN114615547B (en) * 2022-03-14 2022-12-06 国网福建省电力有限公司厦门供电公司 Video image processing method and system based on big data analysis
CN114915850B (en) * 2022-04-22 2023-09-12 网易(杭州)网络有限公司 Video playing control method and device, electronic equipment and storage medium
CN114827542B (en) * 2022-04-25 2024-03-26 重庆紫光华山智安科技有限公司 Multi-channel video code stream capture method, system, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10999637B2 (en) * 2013-08-30 2021-05-04 Adobe Inc. Video media item selections

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102510533A (en) * 2011-12-12 2012-06-20 深圳市九洲电器有限公司 Method, device and set-top box for eliminating video capture delay
CN102802079A (en) * 2012-08-24 2012-11-28 广东欧珀移动通信有限公司 Video previewing segment generating method of media player
CN103152654A (en) * 2013-03-15 2013-06-12 杭州智屏软件有限公司 Low-latency video fragment interception technology
CN104079981A (en) * 2013-03-25 2014-10-01 联想(北京)有限公司 Data processing method and data processing device
CN103414751A (en) * 2013-07-16 2013-11-27 广东工业大学 PC screen content sharing/interaction control method
CN103747362A (en) * 2013-12-30 2014-04-23 广州华多网络科技有限公司 Method and device for cutting out video clip
CN104159151A (en) * 2014-08-06 2014-11-19 哈尔滨工业大学深圳研究生院 Device and method for intercepting and processing of videos on OTT box
CN104159161A (en) * 2014-08-25 2014-11-19 广东欧珀移动通信有限公司 Video image frame location method and device
CN104618741A (en) * 2015-03-02 2015-05-13 浪潮软件集团有限公司 Information pushing system and method based on video content

Also Published As

Publication number Publication date
CN106412691A (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN106412691B (en) Video image intercepting method and device
CN106412702B (en) Video clip intercepting method and device
CN106412687B (en) Method and device for intercepting audio and video clips
CN109218731B (en) Screen projection method, device and system of mobile equipment
CN108322685B (en) Video frame insertion method, storage medium and terminal
US10638166B2 (en) Video sharing method and device, and video playing method and device
CN110636375B (en) Video stream processing method and device, terminal equipment and computer readable storage medium
CN106412681B (en) Live bullet screen video broadcasting method and device
CN109040643B (en) Mobile terminal and remote group photo method and device
WO2018082165A1 (en) Optical imaging method and apparatus
KR101899351B1 (en) Method and apparatus for performing video communication in a mobile terminal
EP3432588B1 (en) Method and system for processing image information
CN109361867B (en) Filter processing method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN109729384B (en) Video transcoding selection method and device
CN106844580B (en) Thumbnail generation method and device and mobile terminal
CN107948562B (en) Video recording method and video recording terminal
WO2018228241A1 (en) Image selection method and related product
CN109121008B (en) Video preview method, device, terminal and storage medium
CN108460769B (en) image processing method and terminal equipment
JP2021516919A (en) Video coding methods and their devices, storage media, equipment, and computer programs
US20120133678A1 (en) Apparatus and method for controlling screen conversion in portable terminal
CN108280817B (en) Image processing method and mobile terminal
CN108337533B (en) Video compression method and device
US20170123953A1 (en) Apparatus and method for controlling external device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221128

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 2, 518000, East 403 room, SEG science and Technology Park, Zhenxing Road, Shenzhen, Guangdong, Futian District

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TR01 Transfer of patent right