CN110460894A - Video image display method and terminal device - Google Patents

Video image display method and terminal device

Info

Publication number
CN110460894A
CN110460894A
Authority
CN
China
Prior art keywords
video
frames
video images
input
terminal device
Prior art date
Legal status
Pending
Application number
CN201910556512.9A
Other languages
Chinese (zh)
Inventor
曹元�
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority: CN201910556512.9A
Publication: CN110460894A
Legal status: Pending


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/433 — Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/47 — End-user applications
    • H04N21/472 — End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 — End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/80 — Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 — Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 — Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 — Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention disclose a video image display method and a terminal device, relating to the field of communication technology, and address the prior-art problem that capturing the video image a user needs from a video is inefficient. The method comprises: receiving a first input of a user on a video playing interface while the video playing interface plays a video; in response to the first input, acquiring N frames of video images in a preset manner and displaying the N frames of video images; wherein the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. The scheme applies in particular to scenarios of capturing video images.

Description

Video image display method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a video image display method and terminal equipment.
Background
With the continuous development of terminal technology, terminal devices are used ever more widely. Among their functions, the video function has become indispensable in users' daily life and work.
While watching a video, users often need to save the most striking video images in it (hereinafter, the video images the user needs) or share them with friends. Currently, a user can capture a needed video image with the screen-capture function of the terminal device, or extract it with video editing software. However, when the video plays quickly it is difficult to capture the needed image accurately with the screen-capture function, and extracting it with video editing software is tedious and time-consuming.
Thus, the prior art is inefficient in capturing the video image required by the user from the video.
Disclosure of Invention
The embodiment of the invention provides a video image display method and terminal equipment, and aims to solve the problem that the efficiency of intercepting a video image required by a user from a video is low in the prior art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video image display method, which is applied to a terminal device, and the method includes: receiving a first input of a user to a video playing interface in the process of playing a video on the video playing interface; responding to the first input, acquiring N frames of video images according to a preset mode, and displaying the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes: the device comprises a receiving module and a processing module; the receiving module is used for receiving a first input of a user to the video playing interface in the process of playing a video on the video playing interface; the processing module is used for responding to the first input received by the receiving module, acquiring N frames of video images according to a preset mode and displaying the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the video image display method in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the video image display method as in the first aspect.
In the embodiment of the present invention, the terminal device can receive a first input of a user on the video playing interface while the video playing interface plays a video; in response to the first input, it acquires N frames of video images in a preset manner and displays them; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. With this scheme, a first input on the video playing interface triggers the terminal device to acquire and display the N frames of video images played before that input, so the user can quickly obtain the needed images. The efficiency of capturing the video images a user needs from a video is thus improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video image display method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a video image display method according to an embodiment of the present invention;
FIG. 4 is a third flowchart of a video image display method according to an embodiment of the present invention;
fig. 5 is a schematic view of an interface of a video image display method according to an embodiment of the present invention;
FIG. 6 is a fourth flowchart of a video image display method according to an embodiment of the present invention;
FIG. 7 is a second schematic view of an interface of a video image display method according to an embodiment of the present invention;
FIG. 8 is a fifth flowchart of a video image display method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 10 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to present an example, illustration or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present the related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the present invention provides a video image display method. A terminal device can receive a first input of a user on a video playing interface while the video playing interface plays a video; in response to the first input, it acquires N frames of video images in a preset manner and displays them; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. With this scheme, a first input on the video playing interface triggers the terminal device to acquire and display the N frames of video images played before that input, so the user can quickly obtain the needed images. The efficiency of capturing the video images a user needs from a video is thus improved.
The following describes a software environment applied to the video image display method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the video image display method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the video image display method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can realize the video image display method provided by the embodiment of the invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
It should be noted that: in the embodiment of the present invention, the type of the terminal device is not limited, for example, the terminal device may be a single-screen terminal device, a multi-screen terminal device (such as a double-screen terminal device, a triple-screen terminal device, and the like), and a flexible-screen terminal device (such as a folding-screen terminal device, a bending-screen terminal device, and the like), which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
The application scenario of the embodiment of the invention can be as follows: in the process that a user watches videos (live videos or non-live videos and the like) through video playing interfaces of various video applications or video playing interfaces of browsers, the scheme provided by the embodiment of the invention can improve the efficiency of obtaining video images (wonderful contents in the videos) required by the user from the videos, and further the user can store the obtained video images or share the obtained video images with friends.
The execution subject of the video image display method provided in the embodiment of the present invention may be the terminal device (including a mobile terminal device and a non-mobile terminal device), or may also be a functional module and/or a functional entity capable of implementing the method in the terminal device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain a video image display method provided by the embodiment of the present invention.
Referring to fig. 2, an embodiment of the present invention provides a video image display method, which may include steps 201 to 202 described below.
Step 201, in the process of playing a video on the video playing interface, the terminal device receives a first input of a user on the video playing interface.
The first input may be a click operation of a user on the video playing interface, a slide operation of the user on the video playing interface, or other feasible operations of the user on the video playing interface, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
For example, the click operation may be any number of click operations, such as a single click operation, a double click operation, a triple click operation, and the like. The sliding operation may be a sliding operation in any direction, such as an upward sliding operation, a downward sliding operation, a leftward sliding operation, or a rightward sliding operation.
Illustratively, the first input may be a user input to a "cut video image" option on the video playback interface.
Step 202, responding to the first input, the terminal device acquires the N frames of video images according to a preset mode, and displays the N frames of video images.
The N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. The specific value of N may be set according to actual use requirements, and the embodiment of the present invention is not limited.
When a player plays a video, the picture displayed is actually composed of a continuous sequence of frames; a frame is the smallest unit of the video animation, a single still picture, and can be understood as one image.
It can be understood that the user first sees the highlight video image while watching, then decides to capture and save it, and only then performs the first input; by the time the terminal device receives the first input, the video image the user wants to capture has already been played. Therefore, in the embodiment of the present invention, the terminal device obtains the N frames of video images that the video playing interface has played before receiving the first input, and the user can pick the needed video image from those N frames. Moreover, the user obtains the needed video image without adjusting the playing progress of the video, which improves operation efficiency and human-computer interaction performance.
For example, a video playing interface is playing a first video, and after receiving a first input, the terminal device may first obtain the first video resource and then statically acquire the N frames of video images from the first video resource.
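As an illustrative sketch of how the frames played before the first input could be kept at hand, the following Python snippet buffers the most recent frames in a bounded queue. The names (`FrameBuffer`, `on_frame_played`, `on_first_input`) and the buffering approach are assumptions for illustration, not the patent's implementation.

```python
from collections import deque

class FrameBuffer:
    """Retain the last N frames played, so the N frames preceding
    the first input are immediately available."""
    def __init__(self, capacity):
        # A bounded deque discards the oldest frame automatically.
        self.frames = deque(maxlen=capacity)

    def on_frame_played(self, frame):
        self.frames.append(frame)

    def on_first_input(self):
        # Snapshot of the (at most) N frames played before the input.
        return list(self.frames)

buf = FrameBuffer(6)          # N = 6, as in the later example
for i in range(10):           # simulate ten played frames 0..9
    buf.on_frame_played(i)
print(buf.on_first_input())   # -> [4, 5, 6, 7, 8, 9]
```

The bounded buffer keeps memory constant regardless of video length, at the cost of only ever offering the most recent N frames.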
Optionally, the preset mode may be to continuously acquire (i.e., acquire continuous N frames of video images), may also be to randomly acquire (i.e., acquire random N frames of video images), may also be to acquire at intervals (e.g., acquire N frames of video images at the same interval), and may also be in other modes, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited.
It is understood that the N frames may be consecutive frames of the first video; since such frames are continuous, the user can locate the desired image more precisely. The N frames may also be N random frames. Alternatively, the N frames may be frames taken at equal intervals in the first video, which yields more images with distinct content, helping the user find a satisfactory one. The choice may be determined by actual use requirements, and the embodiment of the present invention is not limited.
Optionally, in the embodiment of the present invention, before the terminal device acquires the N frames of video images, it may be determined according to a user input whether the N frames of video images are consecutive or N frames of video images with the same interval.
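The three preset acquisition modes described above (consecutive, random, equal-interval) can be sketched as index selection over the buffered frames. This Python sketch, including the mode names, is an illustrative assumption, not the patent's implementation.

```python
import random

def select_indices(total, n, mode, seed=None):
    """Pick n frame indices out of `total` buffered frames.

    mode: "consecutive" - the n most recent frames
          "random"      - n distinct random frames
          "interval"    - n frames spaced evenly apart
    """
    if mode == "consecutive":
        return list(range(total - n, total))
    if mode == "random":
        return sorted(random.Random(seed).sample(range(total), n))
    if mode == "interval":
        step = total // n
        # Work backwards from the most recent frame in equal steps.
        return [total - 1 - i * step for i in range(n)][::-1]
    raise ValueError("unknown mode: " + mode)

print(select_indices(12, 3, "consecutive"))  # -> [9, 10, 11]
print(select_indices(12, 3, "interval"))     # -> [3, 7, 11]
```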
The embodiment of the present invention provides a video image display method. A terminal device can receive a first input of a user on a video playing interface while the video playing interface plays a video; in response to the first input, it acquires N frames of video images in a preset manner and displays them; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. With this scheme, a first input on the video playing interface triggers the terminal device to acquire and display the N frames of video images played before that input, so the user can quickly obtain the needed images. The efficiency of capturing the video images a user needs from a video is thus improved.
Optionally, after the step 202, the user may select at least one of the N frames of video images as needed, and store the at least one frame of video image in the terminal device, share the at least one frame of video image with a friend through an instant social application, or make a dynamic image.
Illustratively, in conjunction with fig. 2, as shown in fig. 3, after step 202, the method for displaying video images according to the embodiment of the present invention may further include the following steps 203 to 204.
And step 203, the terminal equipment receives a second input of the user.
The second input may be a click input of the user on the video playing interface, a sliding operation of the user on the video playing interface, or other feasible operations of the user on the video playing interface, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
For example, the detailed description of the click input and the slide input may refer to the related description of the click input and the slide input in step 201, and will not be described herein again.
And step 204, responding to the second input, the terminal equipment stores the M frames of video images in the N frames of video images, or generates dynamic images from the M frames of video images in the N frames of video images.
Wherein, in a case where M frames of video images among the N frames of video images are saved in response to a second input, M is a positive integer less than or equal to N; the terminal device stores M frames of video images among the N frames of video images in a storage unit of the terminal device.
Illustratively, the second input is an input of a user selecting a "save" option in the video playback interface, and in response to the second input, the terminal device saves M of the N video images.
Wherein, in a case where M frames of video images among the N frames of video images are generated into a moving image in response to the second input, M is a positive integer greater than 1 and less than or equal to N. The terminal device may refer to any method for generating a dynamic image, and generate a dynamic image from M frames of video images in the N frames of video images, which is not described herein again.
Illustratively, the terminal device may employ a quantization (quantizer) algorithm when generating a dynamic image from M frames of the N frames of video images, for example to map the frames to a limited color palette.
A dynamic image file stores a plurality of images; reading them out one by one and displaying them on the screen forms the simplest animation.
For example, the format of the dynamic image may be the Graphics Interchange Format (GIF). GIF compresses image data losslessly using the LZW (Lempel-Ziv-Welch) algorithm with variable-length codes. Another characteristic of the GIF format is that one GIF file can store multiple color images; if the image data stored in one file are read out one by one and displayed on the screen, they constitute the simplest animation.
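For background, the LZW scheme that GIF builds on can be illustrated with a minimal encoder. This is a teaching sketch only; real GIF streams use variable-width codes, clear/end-of-information codes, and palette indices rather than raw bytes.

```python
def lzw_compress(data):
    """Minimal LZW encoder over bytes, emitting integer codes."""
    table = {bytes([i]): i for i in range(256)}  # single-byte seed dictionary
    next_code = 256
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                    # extend the current match
        else:
            out.append(table[w])      # emit code for longest match
            table[wc] = next_code     # learn the new sequence
            next_code += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

print(lzw_compress(b"ABABAB"))  # -> [65, 66, 256, 256]
```

Note how repeated material ("AB") collapses into a single learned code (256), which is why GIF handles flat-colored, repetitive images well.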
Illustratively, the second input is an input for selecting a "generate dynamic image" option in the video playing interface by the user, and in response to the second input, the terminal device generates a dynamic image from M frames of the N frames of video images.
It should be noted that the video image display method provided by the embodiment of the present invention supports the media-library refresh function found in common file-browsing applications, so the user can view the saved M frames of video images, or the generated dynamic image, in a gallery application.
In the embodiment of the invention, a user can trigger the terminal equipment to store the M frames of video images in the N frames of video images or generate the M frames of video images in the N frames of video images into dynamic images according to the self requirement, thereby improving the man-machine interaction performance.
Optionally, before step 203, the user may trigger the terminal device to determine the M frames of video images through an input.
Illustratively, in conjunction with fig. 3, as shown in fig. 4, before step 203, the method for displaying video images according to the embodiment of the present invention may further include the following steps 205 to 206.
Step 205, the terminal device receives a third input of the N frames of video images from the user.
The third input may be an input by which the user selects the M frames of video images from the N frames, or an input by which the user deletes the (N-M) frames of the N frames other than the M frames, or another feasible input, which is not limited in the embodiment of the present invention.
The third input may be a click input, a slide input, a drag input, and the like, and may be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
In response to the third input, the terminal device determines the M frames of video images from the N frames of video images, step 206.
In the embodiment of the invention, the user can trigger the terminal equipment to determine the M frames of video images from the N frames of video images according to the self requirement, thereby improving the user interaction performance.
Illustratively, N is 6 (in practical implementation N is not limited to 6; factors such as the user's input delay and the frame rate of the video may be taken into account). When the user clicks the "capture video image" option in the video playing interface, the terminal device displays an image preview interface as shown in fig. 5. The image preview interface displays 6 frames of video images in sequence, namely video image 1, video image 2, video image 3, video image 4, video image 5 and video image 6, together with a "save" option and a "generate GIF" option. If the user selects video images 2, 3 and 4 and then selects the "generate GIF" option, the terminal device generates a GIF from those three frames.
When the value of N is large, the number of video images visible at once in the image preview interface may be limited, so the user can trigger the terminal device to update the visible video images by a page-turning input or by sliding the page up or down. The user can also trigger the terminal device to enlarge or rotate one frame of video image through an input on that frame (a double-click input, a two-finger input, and the like). Enlarging the frame lets the user examine image details carefully and make an accurate selection.
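The paging of the preview interface can be sketched as a simple index computation. This is illustrative Python; the function name and page model are assumptions, not the patent's UI logic.

```python
def visible_on_page(num_images, page_size, page):
    """0-based indices of the images shown on 0-based page `page`."""
    start = page * page_size
    return list(range(start, min(start + page_size, num_images)))

print(visible_on_page(14, 6, 0))  # -> [0, 1, 2, 3, 4, 5]
print(visible_on_page(14, 6, 2))  # -> [12, 13]  (last, partial page)
```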
Optionally, if the user selects to generate a dynamic image from the M frames of video images, the terminal device determines first information (the first information is a frame rate or a frame interval time of the dynamic image) before generating the dynamic image, and then generates the dynamic image from the M frames of video images according to the first information.
Frame rate is the frequency (rate) at which bitmap images called frames appear successively on the display; it may also be referred to as frame frequency and is measured in hertz (Hz).
Frame interval time, the interval between two adjacent frames (also called the time interval), is another way to express the image refresh rate. Specifically, it may be the time from the end of displaying one frame to the end of displaying the next, for any two adjacent frames of images.
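The two notions are reciprocal: a frame rate of R hertz corresponds to a frame interval of 1000/R milliseconds. A small illustrative sketch:

```python
def interval_ms(frame_rate_hz):
    """Frame interval time (ms) for a given frame rate (Hz)."""
    return 1000.0 / frame_rate_hz

def rate_hz(interval):
    """Frame rate (Hz) for a given frame interval time (ms)."""
    return 1000.0 / interval

print(interval_ms(25))  # -> 40.0  (25 Hz plays one frame every 40 ms)
print(rate_hz(40))      # -> 25.0
```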
Illustratively, in conjunction with fig. 4, as shown in fig. 6, before step 204, the video image display method provided by the embodiment of the present invention may further include the following step 207; this step 204 can be specifically realized by the following step 204 a.
Step 207, in response to the second input, the terminal device determines the first information.
The first information is used to indicate a frame rate or a frame interval time of the dynamic image.
Optionally, the first information may be preset by the terminal device, or may be set by the user according to a requirement of the user, and specifically may be determined according to an actual use requirement, which is not limited in the embodiment of the present invention.
For example, if the first information is preset by the terminal device, the terminal device may determine a preset frame rate (or frame interval time) stored in the terminal device as the first information. If the first information is set by the user, the terminal device may determine a frame rate (or frame interval time) input by the user as the first information.
And step 204a, the terminal equipment generates a dynamic image from the M frames of video images according to the first information.
The terminal device generates the M frames of video images into dynamic images according to the frame rate (or frame interval time) indicated by the first information.
In the embodiment of the invention, the terminal equipment can generate the dynamic image from the M frames of video images according to the proper frame rate or frame interval time, thereby generating the more satisfactory dynamic image for the user and improving the man-machine interaction performance.
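As a sketch of step 204a, the terminal device could translate the first information into a per-frame display duration before assembling the dynamic image. The function name and the (kind, value) encoding of the first information below are hypothetical, not from the patent:

```python
def frame_duration_ms(first_info):
    """Resolve the first information into a per-frame display duration in ms.

    `first_info` is a hypothetical (kind, value) pair: either
    ('frame_rate', fps) or ('frame_interval', milliseconds).
    """
    kind, value = first_info
    if kind == "frame_rate":
        return 1000.0 / value
    if kind == "frame_interval":
        return float(value)
    raise ValueError(f"unknown first-information kind: {kind}")

# With an imaging library such as Pillow (not shown here), the M selected
# frames could then be written as a GIF, e.g.:
#   frames[0].save("out.gif", save_all=True, append_images=frames[1:],
#                  duration=duration_ms, loop=0)
assert round(frame_duration_ms(("frame_rate", 24)), 1) == 41.7
assert frame_duration_ms(("frame_interval", 33)) == 33.0
```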
Illustratively, step 207 may be specifically implemented by steps 207a-207c described below.
Step 207a, in response to the second input, the terminal device displays the first indication information.
The first indication information is used for indicating a user to determine the first information.
Optionally, the first indication information may be an input control, an interface with multiple options, or other feasible contents, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
And step 207b, the terminal device receives a fourth input of the first indication information by the user.
The fourth input may be a click input, a slide input, and the like of the user on the first indication information, and may be determined according to actual use requirements, which is not limited in the embodiment of the present invention.
For example, for the specific description of the click input, the slide input, and the like, reference may be made to the description of the click input, the slide input, and the like in step 201, and details are not described herein again.
Step 207c, in response to the fourth input, the terminal device determines the first information.
In response to the fourth input, the terminal device determines a frame rate corresponding to the fourth input as the first information, or in response to the fourth input, the terminal device determines a frame interval time corresponding to the fourth input as the first information.
Illustratively, as shown in (a) of fig. 7, the first indication information is an input control for indicating the user to input the first information in the input control. The fourth input is an input of "24 frames" entered in the input control by the user, and in response to the fourth input, the terminal device determines "24 frames" as the first information.
As still another example, as shown in (b) of fig. 7, the first indication information is a frame rate selection interface, which is used to indicate a user to select one frame rate from a plurality of (4 are illustrated in the figure, but actually not limited to 4) frame rate options displayed in the frame rate selection interface. The fourth input is an input that the user selects the "24 frames" option in the frame rate selection interface, and in response to the fourth input, the terminal device determines "24 frames" as the first information.
As still another example, as shown in (c) of fig. 7, the first indication information is a frame interval time selection interface, which is used to indicate the user to select one frame interval time from a plurality of (4 are illustrated in the figure, but actually not limited to 4) frame interval time options displayed on the frame interval time selection interface. The fourth input is an input that the user selects the "33 msec" option in the inter-frame time selection interface, and in response to the fourth input, the terminal device determines "33 msec" as the first information.
In the embodiment of the invention, the user can set the frame rate or the frame interval time according to the self requirement, thereby generating a more satisfactory dynamic image of the user and improving the man-machine interaction performance.
Optionally, before the terminal device obtains the N frames of video images according to the preset mode and displays the N frames of video images, a first position of the N frames of video images in a first video may be determined, where the first video is a video played on the video playing interface.
Illustratively, in conjunction with fig. 6, as shown in fig. 8, before step 202, the method for displaying a video image according to the embodiment of the present invention may further include step 208, which is described below, and step 202 may specifically be implemented by step 202a, which is described below.
Step 208, in response to the first input, the terminal device determines the second information.
The second information is used for indicating a first position of the N frames of video images in the first video. When a video is played, the video corresponds to a time axis, and each frame of video image in the video corresponds to a time point on that time axis. Therefore, the second information is the playing time period corresponding to the N frames of video images (hereinafter referred to as the first time period), i.e., the time period during which the N frames of video images are played in the process of playing the first video.
Optionally, the second information may be preset by the terminal device. For example, the terminal device may preset at least two of the duration, the starting time point, the middle time point, and the ending time point of the first time period, or the user may set any one of the starting time point, the middle time point, and the ending time point of the first time period and the value of N.
Illustratively, the terminal device determines the 3 s before receiving the first input as the first time period; that is, the terminal device presets that the video images played within the 3 s before the first input is received are determined as the N frames of video images.
Optionally, the second information may also be determined by the terminal device according to a user input. For example, the user may set at least two of the duration, the starting time point, the middle time point, and the ending time point of the first time period by inputting, or the user may set any one of the starting time point, the middle time point, and the ending time point of the first time period and the value of N by inputting.
Illustratively, the user sets the starting time point of the first time period to be a time point 5s before the terminal device receives the first input, and the ending time point of the first time period to be a time point 2s before the terminal device receives the first input.
Optionally, the second information may also be set by the terminal device according to preset information together with user input. For example: the terminal device presets one of the duration, starting time point, intermediate time point, and ending time point of the first time period, and the user sets another one of them by inputting; or the terminal device presets any one of the starting time point, intermediate time point, and ending time point of the first time period, and the user sets the value of N by inputting; or the terminal device presets the value of N, and the user sets any one of the starting time point, intermediate time point, and ending time point of the first time period by inputting. This may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Illustratively, the terminal device sets the starting time of the first time period to be a time point 1s before the first input is received in advance, and the user sets the duration of the first time period to be 5s through input.
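The combinations above all reduce to the same arithmetic: any two of {duration, start, end} determine the third. A minimal sketch, with times expressed in seconds *before* the first input is received (the function name and this convention are assumptions, not from the patent):

```python
def resolve_first_time_period(duration=None, start=None, end=None):
    """Resolve the first time period from any two of its parameters.

    `start` and `end` are seconds before the first input is received
    (so start >= end); `duration` is in seconds. Whichever two values
    are known -- preset by the terminal device and/or entered by the
    user -- the third is derived.
    """
    if duration is None:
        duration = start - end
    elif start is None:
        start = end + duration
    elif end is None:
        end = start - duration
    return start, end, duration

# User example from the text: start 5 s and end 2 s before the first input.
assert resolve_first_time_period(start=5, end=2) == (5, 2, 3)
# The same period from the other two parameter pairs:
assert resolve_first_time_period(end=2, duration=3) == (5, 2, 3)
assert resolve_first_time_period(start=5, duration=3) == (5, 2, 3)
```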
Step 202a, the terminal device obtains N frames of video images from the first video according to the second information in a preset manner, and displays the N frames of video images.
After the terminal device determines the second information, the terminal device may statically acquire, from the first video resource according to the second information, the N frames of video images located at the first position.
In the embodiment of the invention, the terminal equipment can determine the second information according to the actual use requirement, and acquire and display the N frames of video images according to the second information.
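Step 202a can be sketched as computing which frame indices of the first video fall inside the first time period; a real implementation would then decode exactly those frames, e.g. with a video library. The function and its parameters below are hypothetical:

```python
def frames_in_first_position(input_time_s, video_fps,
                             start_before_s, end_before_s):
    """Indices of the video frames inside the first time period.

    `input_time_s` is the playback position (seconds into the first
    video) at which the first input is received; the first time period
    runs from `start_before_s` to `end_before_s` seconds before that
    moment.
    """
    t_start = input_time_s - start_before_s
    t_end = input_time_s - end_before_s
    first = max(0, int(t_start * video_fps))  # clamp to start of video
    last = int(t_end * video_fps)
    return list(range(first, last))

# First input at 10 s into a 30 fps video; first time period = the 3 s
# before the input. The N = 90 frames are indices 210 .. 299.
idx = frames_in_first_position(10.0, 30, 3.0, 0.0)
assert idx[0] == 210 and idx[-1] == 299 and len(idx) == 90
```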
Further optionally, the second information is determined according to user input, or the second information is set according to user input and preset information in the terminal device.
Illustratively, the step 208 can be specifically realized by the following steps 208a to 208 c.
And step 208a, responding to the first input, and displaying second indication information by the terminal equipment.
The second indication information is used for indicating the user to determine the second information.
The second indication information may be an input control, an interface with multiple options, or other feasible contents, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
And step 208b, the terminal device receives a fifth input of the second indication information by the user.
The fifth input may be a click input, a slide input, and the like of the user on the second indication information, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
For example, for the specific description of the click input, the slide input, and the like, reference may be made to the description of the click input, the slide input, and the like in step 201, and details are not described herein again.
Step 208c, in response to the fifth input, the terminal device determines the second information.
In response to a fifth input, the terminal device determines second information corresponding to the fifth input.
In the embodiment of the invention, the user can set the second information by inputting according to the use requirement of the user, so that N frames of video images which are more satisfactory to the user can be obtained.
Optionally, the video playing interface and the N frames of video images may be displayed on the same screen of the terminal device (in this case, the type of the terminal device is not limited); they may be displayed on the same screen of the terminal device in a split-screen manner (in this case, the type of the terminal device is likewise not limited); or they may be displayed on different screens of the terminal device (in this case, the terminal device is a folding-screen or multi-screen terminal device). This may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
For example, if the video playing interface and the N frames of video images are displayed on the same screen of the terminal device, the step 202 may be specifically implemented by the following step 202 b.
Step 202b, in response to the first input, the terminal device controls the video playing interface to stop playing the video, obtains N frames of video images according to the preset manner, and displays the N frames of video images in a first interface, where the first interface is displayed on the video playing interface in an overlapping manner.
The superimposed display may be that the first interface covers the whole area of the video playing interface, may also be that the first interface covers a part of the area of the video playing interface, and may also be that the first interface is displayed on the video playing interface in a floating manner, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited.
For another example, if the video playback interface and the N frames of video images are displayed on the same screen of the terminal device in a split-screen manner (i.e., the first area and the second area are different areas on the same screen of the terminal device), the video playback interface is displayed on the first screen before the first input is received, and the terminal device displays the video playback interface and the N frames of video images on the first screen in a split-screen manner in response to the received first input. Thus, the step 202 can be specifically realized by the following step 202 c.
Step 202c, in response to the first input, the terminal device acquires N frames of video images according to a preset mode, displays the N frames of video images on the first area, and displays a video playing interface on the second area.
Before the first input is received, the video playing interface is displayed on the first screen (at this moment, the screen is not split). The terminal device receives the first input and, in response to it, divides the first screen into two areas (i.e., two sub-screens), acquires the N frames of video images, displays the N frames of video images in the first area, and displays the video playing interface in the second area.
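A minimal sketch of the split-screen layout in step 202c; the rectangle representation and the 50/50 split ratio are assumptions, not specified in the patent:

```python
def split_screen(width, height, ratio=0.5):
    """Split one screen into the first and second areas described above.

    Returns two (x, y, w, h) rectangles stacked vertically: the first
    area (for the N frames of video images) on top, and the second
    area (for the video playing interface) below.
    """
    split = int(height * ratio)
    first_area = (0, 0, width, split)
    second_area = (0, split, width, height - split)
    return first_area, second_area

# A 1080 x 2340 screen split evenly into two 1080 x 1170 areas.
first, second = split_screen(1080, 2340)
assert first == (0, 0, 1080, 1170)
assert second == (0, 1170, 1080, 1170)
```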
As another example, if the video playback interface and the N frames of video images are displayed on different screens of the terminal device (i.e., the first area and the second area are areas on different screens of the terminal device), the video playback interface is displayed on the first screen (i.e., the second area) before the first input is received, and the video playback interface is still displayed on the first screen in response to the received first input, and the N frames of video images are displayed on the second screen (i.e., the first area). The step 202 can be specifically realized by the following step 202 d.
Step 202d, in response to the first input, the terminal device acquires N frames of video images according to a preset mode, and displays the N frames of video images on the first area.
And the video playing interface is displayed on the second area.
Before the first input is received, the video playing interface is displayed on the first screen. The terminal device receives the first input and, in response to it, acquires N frames of video images and displays them on the second screen, where the second screen and the first screen are different screens on the terminal device.
It should be noted that: for step 202c or 202d, the video playing interface may be currently (after receiving the first input) in a state of stopping playing the video (i.e., displaying the pause playing interface), or may still be in a video playing state. If the video playing interface is still in the video playing state at present, the user can trigger the terminal device to stop playing the video according to the input. The first screen may be a main screen of the terminal device or an auxiliary screen of the terminal device, which is not limited in the embodiment of the present invention. The second screen may be a main screen of the terminal device or an auxiliary screen of the terminal device, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, a plurality of display modes of the video playing interface and the first interface are provided, more possibilities are provided for users, and the man-machine interaction performance can be improved.
The drawings in the embodiments of the present invention are each described as an independent embodiment by way of example. In specific implementation, each drawing may also be implemented in combination with any other drawing(s) that can be combined with it, and the embodiment of the present invention is not limited. For example, step 207 and step 204a described above may also be combined with fig. 4.
As shown in fig. 9, an embodiment of the present invention provides a terminal device 120, where the terminal device 120 includes: a receiving module 121 and a processing module 122; the receiving module 121 is configured to receive a first input of a user to the video playing interface in a process of playing a video on the video playing interface; a processing module 122, configured to, in response to the first input received by the receiving module 121, obtain N frames of video images according to a preset manner, and display the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer.
Optionally, the receiving module 121 is further configured to receive a second input from the user after the processing module 122 displays the N frames of video images; the processing module 122 is further configured to, in response to the second input received by the receiving module 121, save M frames of video images in the N frames of video images, or generate a dynamic image from M frames of video images in the N frames of video images; wherein, in a case where M frames of video images among the N frames of video images are saved in response to the second input, M is a positive integer less than or equal to N; in a case where M frames of video images among the N frames of video images are generated into a moving image in response to the second input, M is a positive integer greater than 1 and less than or equal to N.
Optionally, the receiving module 121 is further configured to receive a third input of the N frames of video images from the user before receiving the second input of the user; the processing module 122 is further configured to determine M frames of video images from the N frames of video images in response to the third input received by the receiving module 121.
Optionally, the processing module 122 is further configured to determine first information before generating a dynamic image from M frames of video images among the N frames of video images, where the first information is used to indicate a frame rate or a frame interval time of the dynamic image; the processing module 122 is specifically configured to generate a dynamic image from the M frames of video images according to the first information.
Optionally, the processing module 122 is specifically configured to display first indication information, where the first indication information is used to indicate that the first information is determined; a receiving module 121, configured to receive a fourth input of the first indication information displayed by the processing module 122 from the user; the processing module 122 is further configured to determine the first information in response to the fourth input received by the receiving module 121.
Optionally, the processing module 122 is further configured to determine second information before the N frames of video images are acquired according to the preset mode, where the second information is used to indicate a first position of the N frames of video images in a first video, and the first video is a video played by the video playing interface; the processing module 122 is specifically configured to obtain N frames of video images from the first video according to the second information in a preset manner.
Optionally, the processing module 122 is specifically configured to display second indication information, where the second indication information is used to indicate to determine second information; a receiving module 121, configured to receive a fifth input of the second indication information displayed by the processing module 122 from the user; the processing module 122 is further configured to determine the second information in response to the fifth input received by the receiving module 121.
Optionally, the processing module 122 is specifically configured to respond to the first input received by the receiving module 121, control the video playing interface to stop playing the video, and obtain N frames of video images according to a preset manner; displaying N frames of video images in a first interface, wherein the first interface is displayed on a video playing interface in an overlapping mode; or, the processing module 122 is specifically configured to, in response to the first input received by the receiving module 121, obtain N frames of video images according to a preset manner, and display the N frames of video images on the first area; the video playing interface is displayed on the second area; the first area and the second area are different areas on the same screen of the terminal device, or the first area and the second area are different areas on different screens of the terminal device.
The terminal device provided in the embodiment of the present invention can implement each process shown in any one of fig. 2 to fig. 8 in the above method embodiment, and details are not described here again to avoid repetition.
The embodiment of the invention provides a terminal device, which can receive a first input of a user to a video playing interface in the process of playing a video on the video playing interface; responding to the first input, acquiring N frames of video images according to a preset mode, and displaying the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. Through the scheme, the user can trigger the terminal equipment to acquire and display the N frames of video images before the first input through the first input on the video playing interface, and the user can quickly acquire the images required by the user. Therefore, the efficiency of intercepting the video image required by the user from the video can be improved.
Fig. 10 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 10, the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 10 is not intended to be limiting, and that terminal devices may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input of a user to the video playing interface in a process of playing a video on the video playing interface; the processor 110 is configured to respond to a first input, acquire N frames of video images according to a preset manner, and display the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer.
According to the terminal device provided by the embodiment of the invention, the terminal device can receive the first input of a user to the video playing interface in the process of playing the video on the video playing interface; responding to the first input, acquiring N frames of video images according to a preset mode, and displaying the N frames of video images; the N frames of video images are video images played by the video playing interface before the first input is received, and N is a positive integer. Through the scheme, the user can trigger the terminal equipment to acquire and display the N frames of video images before the first input through the first input on the video playing interface, and the user can quickly acquire the images required by the user. Therefore, the efficiency of intercepting the video image required by the user from the video can be improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave type. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 10, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is the control center of the terminal device: it connects the various parts of the entire terminal device by using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which may include the processor 110 shown in fig. 10, the memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the video image display method shown in any one of fig. 2 to fig. 8 in the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the video image display method shown in any one of fig. 2 to 8 in the foregoing method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A video image display method, applied to a terminal device, the method comprising:
receiving a first input of a user to a video playing interface in the process of playing a video on the video playing interface;
in response to the first input, acquiring N frames of video images in a preset manner, and displaying the N frames of video images;
wherein the N frames of video images are video images played on the video playing interface before the first input is received, and N is a positive integer.
2. The method according to claim 1, wherein after the displaying the N frames of video images, the method further comprises:
receiving a second input of the user;
in response to the second input, saving M frames of video images of the N frames of video images, or generating a dynamic image from M frames of video images of the N frames of video images;
wherein, in a case where M frames of the N frames of video images are saved in response to the second input, M is a positive integer less than or equal to N; and in a case where a dynamic image is generated from M frames of the N frames of video images in response to the second input, M is a positive integer greater than 1 and less than or equal to N.
3. The method according to claim 2, wherein before the generating a dynamic image from M frames of video images of the N frames of video images, the method further comprises:
determining first information indicating a frame rate or a frame interval time of the dynamic image;
the generating a dynamic image from M frames of video images of the N frames of video images includes:
generating the dynamic image from the M frames of video images according to the first information.
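Claim 3's "first information" can carry either a frame rate or a frame interval time, which are interchangeable. A minimal sketch of the conversion (the function name and the millisecond unit are assumptions for illustration, not specified by the patent):

```python
def frame_duration_ms(frame_rate=None, interval_ms=None):
    """Derive the per-frame display time of the dynamic image from the
    'first information', given as either a frame rate (frames per second)
    or a frame interval time (milliseconds)."""
    if interval_ms is not None:
        return float(interval_ms)
    if frame_rate:
        # A frame rate of R fps corresponds to 1000 / R ms per frame.
        return 1000.0 / frame_rate
    raise ValueError("first information must supply a frame rate or a frame interval")
```

For example, a frame rate of 25 fps yields a 40 ms interval between frames of the generated dynamic image.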
4. The method according to claim 1, wherein before the acquiring N frames of video images in a preset manner, the method further comprises:
determining second information, wherein the second information is used for indicating a first position of the N frames of video images in a first video, and the first video is a video played by the video playing interface;
the acquiring of the N frames of video images according to the preset mode includes:
and acquiring the N frames of video images from the first video according to a preset mode according to the second information.
5. The method according to any one of claims 1 to 4, wherein the acquiring N frames of video images in a preset manner in response to the first input and displaying the N frames of video images comprises:
in response to the first input, controlling the video playing interface to stop playing the video, and acquiring the N frames of video images in the preset manner; and displaying the N frames of video images on a first interface, wherein the first interface is displayed overlaid on the video playing interface;
or,
in response to the first input, acquiring the N frames of video images in the preset manner, and displaying the N frames of video images in a first area, wherein the video playing interface is displayed in a second area; and the first area and the second area are different areas on a same screen of the terminal device, or the first area and the second area are areas on different screens of the terminal device.
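The capture step shared by claims 1 and 5 — returning the N frames that were played before the first input was received — can be sketched with a bounded ring buffer fed during playback. This is only an illustrative sketch; the patent does not specify a buffering strategy, and the class and method names below are invented:

```python
from collections import deque

class FrameBuffer:
    """Keeps up to `capacity` of the most recently played frames."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest frame automatically.
        self._frames = deque(maxlen=capacity)

    def push(self, frame):
        # Called once per frame as the video playing interface renders it.
        self._frames.append(frame)

    def last_n(self, n):
        # The (up to) n frames played before the input, oldest first.
        return list(self._frames)[-n:]
```

With a capacity of 8, pushing frames 0 through 9 keeps only frames 2 through 9, and `last_n(3)` returns the three most recently played frames.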
6. A terminal device, wherein the terminal device comprises: a receiving module and a processing module;
the receiving module is configured to receive a first input of a user on a video playing interface in the process of playing a video on the video playing interface;
the processing module is configured to, in response to the first input received by the receiving module, acquire N frames of video images in a preset manner and display the N frames of video images;
wherein the N frames of video images are video images played on the video playing interface before the first input is received, and N is a positive integer.
7. The terminal device according to claim 6, wherein the receiving module is further configured to receive a second input from the user after the processing module displays the N frames of video images;
the processing module is further configured to, in response to the second input received by the receiving module, save M frames of video images of the N frames of video images, or generate a dynamic image from M frames of video images of the N frames of video images;
wherein, in a case where M frames of the N frames of video images are saved in response to the second input, M is a positive integer less than or equal to N; and in a case where a dynamic image is generated from M frames of the N frames of video images in response to the second input, M is a positive integer greater than 1 and less than or equal to N.
8. The terminal device according to claim 7, wherein the processing module is further configured to determine first information before the dynamic image is generated from the M frames of the N frames of video images, the first information indicating a frame rate or a frame interval time of the dynamic image; and the processing module is specifically configured to generate the dynamic image from the M frames of video images according to the first information.
9. The terminal device according to claim 6, wherein the processing module is further configured to determine second information before the N frames of video images are acquired in the preset manner, the second information indicating a first position of the N frames of video images in a first video, and the first video being a video played on the video playing interface; and the processing module is specifically configured to acquire, according to the second information, the N frames of video images from the first video in the preset manner.
10. The terminal device according to any one of claims 6 to 9, wherein the processing module is specifically configured to: in response to the first input received by the receiving module, control the video playing interface to stop playing the video, acquire the N frames of video images in the preset manner, and display the N frames of video images on a first interface, wherein the first interface is displayed overlaid on the video playing interface; or, in response to the first input received by the receiving module, acquire the N frames of video images in the preset manner and display the N frames of video images in a first area, wherein the video playing interface is displayed in a second area, and the first area and the second area are different areas on a same screen of the terminal device, or areas on different screens of the terminal device.
11. A terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video image display method according to any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video image display method according to any one of claims 1 to 5.
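Claim 2 leaves open how the M frames are chosen from the N displayed frames (the user's second input may select them directly, for instance). One plausible automatic rule is even spacing across the captured range; the sketch below assumes that rule, which is not claimed by the patent:

```python
def select_m_frames(frames, m):
    """Pick m frames spread evenly across the n captured frames
    (m <= n per claim 2); returns all frames when m >= n."""
    n = len(frames)
    if m >= n:
        return list(frames)
    step = n / m
    # Take every step-th frame, rounding the index down.
    return [frames[int(i * step)] for i in range(m)]
```

Selecting 5 of 10 captured frames this way keeps every other frame, preserving the motion of the original clip at half its frame count.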
CN201910556512.9A 2019-06-25 2019-06-25 A kind of video image display method and terminal device Pending CN110460894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910556512.9A CN110460894A (en) 2019-06-25 2019-06-25 A kind of video image display method and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910556512.9A CN110460894A (en) 2019-06-25 2019-06-25 A kind of video image display method and terminal device

Publications (1)

Publication Number Publication Date
CN110460894A true CN110460894A (en) 2019-11-15

Family

ID=68480874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910556512.9A Pending CN110460894A (en) 2019-06-25 2019-06-25 A kind of video image display method and terminal device

Country Status (1)

Country Link
CN (1) CN110460894A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040214541A1 (en) * 2003-04-22 2004-10-28 Taek-Kyun Choi Apparatus and method for transmitting a television signal received in a mobile communication terminal
CN101510313A (en) * 2009-03-13 2009-08-19 腾讯科技(深圳)有限公司 Method, system and medium player for generating GIF
CN104010207A (en) * 2013-02-27 2014-08-27 联想(北京)有限公司 Data processing method, controlled equipment and control equipment
CN105469381A (en) * 2014-09-11 2016-04-06 腾讯科技(深圳)有限公司 Information processing method and terminal
CN105872675A (en) * 2015-12-22 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for intercepting video animation
CN109618224A (en) * 2018-12-18 2019-04-12 腾讯科技(深圳)有限公司 Video data handling procedure, device, computer readable storage medium and equipment


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669638A (en) * 2020-02-28 2020-09-15 海信视像科技股份有限公司 Video rotation playing method and display equipment
CN111669638B (en) * 2020-02-28 2022-07-15 海信视像科技股份有限公司 Video rotation playing method and display device
CN111641861A (en) * 2020-05-27 2020-09-08 维沃移动通信有限公司 Video playing method and electronic equipment
CN113747223A (en) * 2020-05-29 2021-12-03 口碑(上海)信息技术有限公司 Video comment method and device and electronic equipment
CN113747223B (en) * 2020-05-29 2023-11-21 口碑(上海)信息技术有限公司 Video comment method and device and electronic equipment
WO2023160271A1 (en) * 2022-02-28 2023-08-31 荣耀终端有限公司 Human-machine interaction method and apparatus, and electronic device
CN115379294A (en) * 2022-08-15 2022-11-22 北京达佳互联信息技术有限公司 Image capturing method and device, electronic equipment and storage medium
CN115379294B (en) * 2022-08-15 2023-10-03 北京达佳互联信息技术有限公司 Image capturing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111061574B (en) Object sharing method and electronic device
CN107977144B (en) Screen capture processing method and mobile terminal
CN109002243B (en) Image parameter adjusting method and terminal equipment
CN109992231B (en) Screen projection method and terminal
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN109525874B (en) Screen capturing method and terminal equipment
CN110460894A (en) A kind of video image display method and terminal device
CN111142991A (en) Application function page display method and electronic equipment
CN109032486B (en) Display control method and terminal equipment
CN109710349B (en) Screen capturing method and mobile terminal
CN109922265B (en) Video shooting method and terminal equipment
CN110099296B (en) Information display method and terminal equipment
CN111010523B (en) Video recording method and electronic equipment
CN109558046B (en) Information display method and terminal equipment
CN110868633A (en) Video processing method and electronic equipment
CN109408072B (en) Application program deleting method and terminal equipment
CN110865745A (en) Screen capturing method and terminal equipment
CN110225180B (en) Content input method and terminal equipment
CN110990172A (en) Application sharing method, first electronic device and computer-readable storage medium
CN108804628B (en) Picture display method and terminal
CN111399715B (en) Interface display method and electronic equipment
CN111049977B (en) Alarm clock reminding method and electronic equipment
CN110941378B (en) Video content display method and electronic equipment
CN110022445B (en) Content output method and terminal equipment
CN110209324B (en) Display method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191115