CN112565868B - Video playing method and device and electronic equipment


Info

Publication number: CN112565868B
Application number: CN202011408989.1A
Authority: CN (China)
Prior art keywords: image, video, target, playing, frame
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112565868A
Inventor: Qiu Jing (邱靖)
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Events:
    • Application filed by Vivo Mobile Communication Co Ltd
    • Priority to CN202011408989.1A
    • Publication of CN112565868A
    • Priority to PCT/CN2021/134313 (published as WO2022116962A1)
    • Application granted
    • Publication of CN112565868B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N 21/4312: ... involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316: ... for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402: ... involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440245: ... the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/440281: ... by altering the temporal resolution, e.g. by frame skipping
    • H04N 21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213: Monitoring of end-user related data

Abstract

The application discloses a video playing method, a video playing apparatus, and an electronic device, belonging to the technical field of video processing. The method includes: receiving a first input to a first object while a first video is played, where the first object is a first image in the first video or is associated with the first image in the first video; and, in response to the first input, inserting at least one frame of a third image between the first image and a second image, and playing the first video with the third image inserted. The content of the third image is related to the content of the first image, and the second image includes at least one of a frame image preceding the first image and a frame image following the first image in the first video. Through real-time frame insertion, the played first video presents more detail, realizing detail presentation during video playback while avoiding the storage cost of excessive video data.

Description

Video playing method and device and electronic equipment
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video playing method and device and electronic equipment.
Background
At present, most electronic devices can play or record video. However, for electronic devices such as mobile phones, to save storage space, a recorded video usually contains relatively few frames per unit time, so not many details can be restored when the video is played slowly. Equipping a mobile phone with a high-speed camera avoids this problem, but the cost is high. Therefore, how to present detail during video playback while avoiding excessive storage occupation by the video is a problem to be solved.
Disclosure of Invention
Embodiments of the application aim to provide a video playing method, a video playing apparatus, and an electronic device that can present details during video playback while avoiding excessive storage occupation by the video.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video playing method, including:
receiving a first input to a first object in a case where a first video is played; wherein the first object is a first image in the first video, or the first object is associated with the first image in the first video;
in response to the first input, inserting at least one frame of third image between the first image and the second image, and playing the first video after the third image is inserted;
wherein the content of the third image is related to the content of the first image, the second image comprising: at least one of a frame of image in the first video that precedes the first image and a frame of image that follows the first image.
In a second aspect, an embodiment of the present application provides a video playing apparatus, including:
the receiving module is used for receiving a first input of a first object under the condition of playing a first video; wherein the first object is a first image in the first video, or the first object is associated with a first image in the first video;
the response module is used for responding to the first input, inserting at least one frame of third image between the first image and the second image, and playing the first video after the third image is inserted;
wherein the content of the third image is related to the content of the first image, and the second image comprises: at least one of a frame of image in the first video that precedes the first image and a frame of image that follows the first image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the video playing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when executed by a processor, the program or instructions implement the steps of the video playing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video playing method according to the first aspect.
In the embodiment of the application, when a first video is played and a first input is received, at least one frame of a third image is inserted between a first image and a second image in the first video. Because the content of the inserted third image is related to the content of the first image, the content of the first video is enriched, so the user can watch more detail when the first video with the third image inserted is played. On the basis of avoiding excessive storage occupation by video data, detail presentation during video playback is thus realized.
Drawings
Fig. 1 is a flowchart of a video playing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interpolation process according to an embodiment of the present application;
FIG. 3 is a diagram of a slow motion picture according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a second target image of an embodiment of the present application;
FIG. 5 is a schematic diagram of a third target image of an embodiment of the present application;
FIG. 6 is a graph illustrating a variation of pressure values of a first input according to an embodiment of the present application;
FIG. 7 is a second flowchart of a video playing method according to an embodiment of the present application;
fig. 8 is a block diagram of a video playback device according to an embodiment of the present application;
FIG. 9 is a block diagram of an electronic device of an embodiment of the application;
fig. 10 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, "first," "second," and the like are generally used generically and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following describes the video playing method provided in the embodiment of the present application in detail through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a video playing method, which may specifically include the following steps:
step 11: in the case of playing a first video, a first input is received for a first object.
Wherein the first object is a first image in the first video, or the first object is associated with the first image in the first video.
Optionally, the first video may be part of a complete video. For example, the images of the complete video may be several frames containing object A, several frames containing object B, and so on; the images of the first video may then be the frames containing object A.
Optionally, the first video may also be a complete video; for example, it may be a short video, a GIF image, or the like, which is not limited in this embodiment of the application.
Optionally, the first object is a first image in the first video, that is, the first input may be a first input for a currently played first image in the process of playing the first video. The first input is not limited in form, for example, the first input may be a press input, a continuous click input, a sliding input with a predetermined trajectory, or the like.
Optionally, the first object may be a progress bar for video playing, where the first object is associated with a first image in the first video, that is, the first input may be a long-press operation for a corresponding area associated with the first image in the progress bar, where the long-press operation refers to an operation in which a press duration exceeds a preset threshold, and of course, other press operations may also be used, which is not limited in this embodiment of the present application.
Optionally, the first object may be a target object in the image currently displayed in the first video, with the first object associated with the first image. That is, the first input may be directed at the currently displayed image of the first video; a target object in the currently displayed image is extracted according to the first input (for example, the object at the operation position of the first input), and an image that precedes the currently displayed image and contains the target object is determined as the first image. Alternatively, when no image preceding the currently displayed image contains the target object, the currently displayed image is determined as the first image.
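As an illustrative sketch of this lookup (not part of the original disclosure), assuming the frames of the first video are held in a Python list, that `contains_target` is a hypothetical detector supplied by the caller, and that the target object appears in a contiguous run of frames, the first image can be resolved by walking backward from the currently displayed frame:

```python
def resolve_first_image(frames, current_idx, contains_target):
    """Walk backward from the current frame to the earliest frame that
    still contains the target object; fall back to the current frame."""
    first_idx = current_idx
    for idx in range(current_idx - 1, -1, -1):
        if contains_target(frames[idx]):
            first_idx = idx   # target still present: keep extending backward
        else:
            break             # target absent: stop at the start of the run
    return first_idx
```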
Step 12: in response to the first input, inserting at least one frame of a third image between the first image and the second image, and playing the first video with the third image inserted.
Wherein the content of the third image is related to the content of the first image, the second image comprising: at least one of a frame of image in the first video that precedes the first image and a frame of image that follows the first image.
Optionally, in response to the first input, at least one frame of a third image may be inserted between a first image and a second image subsequent to the first image; alternatively, in response to the first input, at least one frame of a third image may be inserted between a first image and a second image preceding the first image; alternatively, in response to the first input, at least one frame of a third image may be inserted between two frames of second images, wherein the first image is located between the two frames of second images.
Optionally, the contents of the first image and the second image are related, that is, the first image and the second image both contain the target object; the third image is related to the content of the first image, namely the first image and the third image both contain the target object; or the contents of the first image, the second image and the third image are all related, namely the first image, the second image and the third image all contain the target object.
Optionally, the step of playing the first video into which the third image is inserted may specifically include:
when the second image precedes the first image, playing the first video with the third image inserted, starting from the second image;
when the second image follows the first image, playing the first video with the third image inserted, starting from the first image.
For example: when the first video is a complete video, playing the first video with the third image inserted means playing the complete video with the third image inserted.
Another example: when the first video is part of a complete video, playing the first video after inserting the third image may include: playing the complete video after the third image is inserted; or playing only the corresponding first video segment after the third image is inserted. For example, when at least one frame of the third image is inserted between the first image and a second image that follows it, playback starts from the first image and ends at the second image; when at least one frame of the third image is inserted between the first image and a second image that precedes it, playback starts from the second image and ends at the first image; and when the first image lies between two frames of second images with at least one frame of the third image inserted between them, playback starts from the second image preceding the first image and ends at the second image following it.
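As a small sketch of this range selection, under the assumption that frame positions are tracked as list indices and that `before`/`after` give the positions of the second image(s) when present (the names are illustrative, not from the patent):

```python
def playback_range(first_idx, before=None, after=None):
    """Return the inclusive (start, end) indices of the segment to play
    after frame insertion, per the three cases described above."""
    if before is not None and after is not None:
        return before, after       # first image lies between two second images
    if before is not None:
        return before, first_idx   # second image precedes the first image
    return first_idx, after       # second image follows the first image
```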
Optionally, when the first video with the third image inserted is played, a popup window may be displayed over the playing window of the first video, and the first video with the third image inserted is played in the popup window; that is, the interpolated first video is previewed in a popup. In other words, playing the first video with the third image inserted can be understood here as performing frame insertion in real time during playback, which reduces the occupation of the electronic device's memory space.
For example: during playback of the first video, if the user wants to view the details of a certain segment or region, the user can long-press the corresponding video area; upon receiving the input, the electronic device automatically performs real-time frame insertion on that area, so that the area shows more detail and smooth, continuous slow motion during playback.
Optionally, after inserting at least one frame of the third image between the first image and the second image in response to the first input and playing the first video with the third image inserted, the method may further include: displaying prompt information when the first input ends, where the prompt information prompts the user whether to save the first video after frame insertion.
For example: during the first input (such as a long-press input), the first video undergoes real-time frame-insertion processing; when the first input ends (for example, the finger or stylus performing the first input leaves the electronic device), the user is prompted, through displayed prompt information, whether the interpolated first video needs to be saved. In this way, while reducing memory occupation, the user is still given the choice of saving the interpolated first video, further ensuring the practical effect.
Optionally, when the first input ends, the first video is restored to its initial playing state (that is, the playing state before frame insertion). This lets the user process the played video in real time at any point during playback, reduces the occupation of memory space, is simple to operate, and lowers the difficulty of professional video processing.
In the above scheme of the application, when the first video is played and a first input is received, at least one frame of a third image is inserted between the first image and the second image in the first video. Because the content of the inserted third image is related to the content of the first image, the content of the first video is enriched, so the user can watch more detail when the first video with the third image inserted is played; on the basis of avoiding excessive storage occupation by video data, detail presentation during video playback is realized.
Optionally, the step of inserting at least one frame of third image between the first image and the second image may specifically include:
copying the first target image to obtain at least one frame of third image; wherein the first target image is: a part or all of the images from the first image to the second image;
inserting the third image to a position adjacent to a first target image corresponding to the third image.
For example: during playback of the first video, the preceding and following frame images of the video can be read in real time, Artificial Intelligence (AI) learning is performed to obtain repeated images, and the learned repeated images are inserted before and after the original frames in a certain proportion, so that the played video data shows more detail and is clearer and smoother, and a slow-motion picture can be obtained intuitively and quickly.
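A minimal sketch of this duplication strategy, assuming frames are numpy arrays and approximating the AI-learned "repeated image" by a plain copy of the source frame:

```python
import numpy as np

def duplicate_frames(frames, repeat=1):
    """Insert `repeat` copies of each frame at the position adjacent to it."""
    out = []
    for frame in frames:
        out.append(frame)
        for _ in range(repeat):
            out.append(frame.copy())  # repeated image inserted adjacently
    return out
```

For instance, `duplicate_frames(clip, repeat=2)` turns a 10-frame clip into 30 frames, so motion appears three times slower when the result is played at the original frame rate.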
The following description is given by taking an example in which each frame of image in the first video includes a target object:
for example: and automatically inserting repeated images learned by AI into the adjacent positions of each frame of image in the first video while playing the first video. As shown in fig. 2 (a), a blank rectangular region represents one frame of image in the initial video, and after AI learning, a frame of repeated image is inserted at the adjacent position of each frame of initial image, as shown by the rectangular region filled with horizontal lines in fig. 2 (b); of course, depending on the number of frames in the initial video, the degree of detail of slow motion that needs to be obtained, etc., it is also possible to insert multiple frames of repeated images at positions adjacent to each frame of the initial image, as shown by the rectangular areas filled by the horizontal lines and the rectangular areas filled by the grids in fig. 2 (c).
Another example: while the first video is played, a repeated image learned by AI may be automatically inserted at positions adjacent to some of the frame images in the first video. For example, if the first video includes 10 frame images, one or more AI-learned repeated images may be inserted adjacent to the 1st, 3rd, 5th, 7th, and 9th frame images, or adjacent to the 3rd, 4th, 5th, 6th, and 7th frame images, and so on, which is not limited in this embodiment of the application.
Optionally, the step of inserting at least one frame of third image between the first image and the second image may specifically include:
generating a third image according to the contents of the second target image and the third target image; wherein the content of the third image is different from the content of the second target image and the third target image, and the second target image and the third target image are: any two adjacent frame images from the first image to the second image;
inserting the third image between the second target image and the third target image.
The adjacent second target image and third target image both include the same target object (here, "the same target object" means the same object; its motion, position, and shape may be the same or different), which means that the contents of the second target image and the third target image are related. For example: the target object in the second target image may be object A in FIG. 3, and the target object in the third target image may be object E in FIG. 3; their positions and orientations differ, but both are "skiing objects", so they can be determined to be the same target object. Alternatively, the motion, position, shape, and the like of the target objects contained in the second target image and the third target image may be identical.
The content of the third image is different from the content of the second target image and the third target image, which may mean that the third image contains the same target object as the second target image and the third target image, but the content of the target object is different (for example, at least one of the position, the motion, the shape, and the like of the target object may be different).
The principle of the frame interpolation process in this embodiment is: performing frame interpolation in real time based on motion estimation and motion compensation algorithms. In short, the real-time video frame interpolation technique estimates the motion trajectory of an object from the relationship between two adjacent frames and interpolates one or more intermediate images to raise the video frame rate, so that the playback picture is smoother and motion details are clearer.
For example: if the frame rate of the initial video is 24 fps, some high-speed scenes may appear discontinuous and blurred on a high-refresh-rate display, which greatly affects the viewing experience. Through the real-time frame interpolation technique, the motion trajectory is estimated from the relationship between adjacent frames and multiple intermediate images are generated, so the original 24 fps can be raised to 60 fps and the viewing experience is enhanced.
Another example: when the interpolation magnification is raised to 8x, 16x, or higher, a video of 240 fps or more is obtained through frame interpolation, and the playback picture can be slowed down, that is, slow motion is presented. The interpolation process is shown in FIG. 2: the rectangular areas filled with horizontal lines are intermediate images obtained after trajectory estimation, and inserting these compensated intermediate images makes the trajectory relatively coherent; the rectangular areas filled with grids are repeated intermediate images obtained by learning the preceding/following frame images after motion trajectory estimation, and further inserting them makes browsing smoother. Of course, multiple intermediate images may also be generated directly from motion trajectory estimation based on the relationship between two adjacent frame images, which is not limited in this embodiment of the application.
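A deliberately simplified sketch of raising the frame rate by inserting intermediate images between adjacent frames. Real motion estimation and compensation (MEMC) is far more involved; a linear blend stands in here only to show where the interpolated frames go, and `factor` is an assumed integer magnification:

```python
import numpy as np

def interpolate_clip(frames, factor=2):
    """Insert (factor - 1) intermediate images between each adjacent pair,
    multiplying the frame count roughly by `factor`."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for k in range(1, factor):
            t = k / factor
            # stand-in for motion compensation: weighted blend along t
            mid = ((1 - t) * a.astype(np.float32)
                   + t * b.astype(np.float32)).astype(a.dtype)
            out.append(mid)
    out.append(frames[-1])
    return out
```

With `factor=10`, a 24 fps clip yields roughly 240 frames per second of source footage, which plays back as slow motion at a normal display rate.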
Optionally, the step of generating the third image according to the contents of the second target image and the third target image may specifically include:
determining a motion track of the target object according to the target object contained in the second target image and the third target image;
generating a third image containing the target object according to the motion trail of the target object;
wherein a position of a target object included in the third image on the motion trajectory is different from positions of target objects included in the second target image and the third target image on the motion trajectory.
Alternatively, the target object may be determined according to an operation position of a first input, for example, in a case where the first input to a first image is received, in response to the first input, an object in the first image at the operation position corresponding to the first input is extracted as the target object, and a motion trajectory of the target object is determined according to target objects included in the second target image and the third target image.
FIG. 3 schematically shows a slow-motion picture 31; FIG. 4 shows a second target image 41 in the initial video containing object A, and FIG. 5 shows a third target image 51 containing object E. For example: the motion trajectory of the target object can be estimated from object A in the second target image and object E in the third target image, as shown by the dotted line in FIG. 3. After the trajectory is determined, object B can be determined through AI learning based on the actions of objects A and E, yielding one frame of the third image; of course, objects B, C, and D can also be determined through AI learning, yielding multiple frames of the third image. In this way, based on motion trajectory estimation and AI learning, the motion details of the slow-motion picture 31 played after the intermediate images are inserted are more vivid and specific, and the playback picture is smoother.
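One hedged sketch of the trajectory idea in FIG. 3: given the target object's position in the second target image (object A) and in the third target image (object E), intermediate objects B, C, and D are placed along an estimated path. A straight-line path is assumed purely for illustration; the embodiment leaves the trajectory model to AI learning:

```python
def intermediate_positions(pos_a, pos_e, n_intermediate):
    """Return n_intermediate (x, y) points strictly between pos_a and pos_e."""
    (xa, ya), (xe, ye) = pos_a, pos_e
    points = []
    for k in range(1, n_intermediate + 1):
        t = k / (n_intermediate + 1)
        points.append((xa + t * (xe - xa), ya + t * (ye - ya)))
    return points

# e.g. intermediate_positions((0, 0), (40, 10), 3)
# -> positions for objects B, C and D along the dotted line of FIG. 3
```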
Optionally, the step of inserting at least one frame of third image between the first image and the second image may specifically include:
determining a play frame rate corresponding to the input parameter of the first input;
and according to the playing frame rate, determining the number N of frames of a third image inserted between the first image and the second image, and inserting N frames of the third image between the first image and the second image, wherein N is a positive integer.
Optionally, when the first input is a long-press input, the input parameter may be the input duration of the long press, for example: the longer the press, the larger the number N of inserted third-image frames, and the slower the motion of the moving object appears. Alternatively, when the first input is a press input, the input parameter may be the pressure value of the press, for example: the larger the pressure value, the larger the number N of inserted third-image frames, and the slower the motion of the moving object appears.
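A minimal sketch of this mapping, assuming a linear relationship between the input parameter (press duration in seconds, or a normalized pressure value) and the inserted frame count N; the constants are illustrative, not from the patent:

```python
def frames_to_insert(input_value, value_per_frame=0.5, max_frames=16):
    """Longer presses / higher pressure -> larger N -> slower apparent
    motion. Returns a positive integer N, clamped to max_frames."""
    n = int(input_value / value_per_frame)
    return max(1, min(n, max_frames))

# frames_to_insert(2.0)   -> 4 frames for a 2-second press
# frames_to_insert(12.0)  -> 16 frames (clamped at the maximum)
```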
Optionally, the step of playing the first video inserted with the third image may specifically include:
determining a playing time length corresponding to the input parameter of the first input;
playing the target video part in the first video after the third image is inserted according to the playing time length;
wherein, the image corresponding to the target video part is: the first image, the second image, and the third image.
Optionally, when the first input is a long-press input, the input parameter may be the input duration of the long press, for example: a longer press gives a shorter playing duration and presents time-lapse (fast-motion) playback, while a shorter press gives a longer playing duration and presents slow-motion playback. Alternatively, when the first input is a press input, the input parameter may be the pressure value of the press, for example: a larger pressure value gives a shorter playing duration and presents time-lapse (fast-motion) playback, while a smaller pressure value gives a longer playing duration and presents slow-motion playback.
Optionally, during playback of the first video, when the electronic device detects a press input, it starts real-time frame-image learning and performs real-time frame insertion on some or all frame images in the first video, so that the user can browse the slow-motion presentation of the target object in the first video in real time. Further, the pressure value of the press input can be mapped to the frame-insertion magnification. For example: a press intermediate value may be preset; when the pressure value of the press input equals the intermediate value, the first video is played in its initial state, as shown by section A-B in FIG. 6; when the pressure value is below the intermediate value (section C-D in FIG. 6), the larger the pressure value, the slower the displayed motion of the moving object; and when the pressure value is above the intermediate value (section E-F in FIG. 6), the smaller the pressure value, the faster the motion of the moving object appears.
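A hedged sketch of the FIG. 6 mapping as literally described above, with purely illustrative constants and a normalized pressure in (0, 1]; the exact transfer curve is not specified in the embodiment:

```python
def playback_speed(pressure, midpoint=0.5, eps=0.02):
    """Map a normalized press pressure to a speed multiplier
    (< 1.0 means slow motion, > 1.0 means fast motion)."""
    if abs(pressure - midpoint) <= eps:
        return 1.0                      # section A-B: initial playing state
    if pressure < midpoint:
        # section C-D: the larger the pressure, the slower the motion
        return max(0.1, 1.0 - 0.9 * pressure / midpoint)
    # section E-F: the smaller the pressure, the faster the motion
    return 1.0 + 3.0 * (1.0 - pressure) / (1.0 - midpoint)
```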
In this embodiment, the first video can be previewed in real time at different frame rates based on the correspondence between the parameter of the first input and the video playing frame rate. The operation is convenient and fast, avoids repeated adjustments, and helps simplify user operation to a certain extent.
As shown in fig. 7, an embodiment of the present application further provides a flowchart of a video playing method, which specifically includes:
step 701: when the first video is played, the electronic equipment detects whether press input exists in real time.
For example: the electronic device detects the pressed area and the time on the screen in real time.
Step 702: a start frame is determined based on the first input.
For example: the target object in the current image is determined using the acquired coordinate values, and the image in which the target object first appears during playback of the first video is taken as the start frame. Alternatively, the currently displayed first image is taken as the start frame, and so on, which is not limited in this embodiment of the application.
Step 703: and comparing the pressure value input by pressing with a preset intermediate value.
Step 704: if the pressure value of the press input is greater than the preset intermediate value and keeps increasing, the frame-insertion magnification is gradually increased; if the pressure value is smaller than the preset intermediate value and keeps decreasing, the frame-insertion magnification is gradually reduced.
Step 705: if the pressure value of the press input equals the preset intermediate value, or the pressure value is zero, normal playback is resumed.
Step 706: frame images are inserted in real time according to the magnification determined in step 704, and/or the video playing duration is determined according to the pressure value of the press input.
Step 707: the first video with the inserted frame images is played. For example: when the playing duration has been determined in step 706, the interpolated first video is played according to that duration to present slow-motion or fast-motion playback.
Step 708: the first video with the inserted frame images is buffered. For example: the result of real-time frame insertion may also be buffered in a database.
Step 709: when the synthesis input of the user is received, the first video after frame insertion can be synthesized with other segments of the original video.
Step 710: if no synthesis input is received from the user, the cached first video after frame insertion is released.
With the above scheme, during playback of the first video, the user can trigger real-time frame insertion through a press input, so that the interpolated picture shows more detail and is clearer and smoother. The first video can be previewed in real time at different frame rates based on the correspondence between the pressure value of the press input and the video playing frame rate; the operation is convenient and avoids repeated adjustment, simplifying user operation to a certain extent. Caching avoids occupying excessive memory space, and synthesis and saving facilitate subsequent viewing and sharing.
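Tying the steps of FIG. 7 together, a simplified driver loop might look like the following; `read_pressure`, `interpolate`, and `render` are hypothetical hooks, and the magnification rule is one stateless reading of steps 703-705:

```python
import time

def play_with_live_interpolation(frames, read_pressure, interpolate, render,
                                 midpoint=0.5, base_fps=24.0, max_mag=16):
    cache = {}                           # step 708: buffer interpolated runs
    for idx in range(len(frames) - 1):
        p = read_pressure()              # step 701: detect press in real time
        if p <= 0 or abs(p - midpoint) < 0.02:
            mag = 1                      # step 705: resume normal playback
        else:
            # steps 703-704: pressure above the midpoint raises the
            # frame-insertion magnification
            scale = max(0.0, min(1.0, (p - midpoint) / (1.0 - midpoint)))
            mag = 1 + round(scale * (max_mag - 1))
        key = (idx, mag)
        if key not in cache:             # step 706: interpolate in real time
            # assumed to return frames[idx] plus (mag - 1) intermediates
            cache[key] = interpolate(frames[idx], frames[idx + 1], mag)
        for frame in cache[key]:         # step 707: play the inserted frames
            render(frame)
            # rendering mag frames per source interval at the base rate
            # makes motion appear mag times slower
            time.sleep(1.0 / base_fps)
    cache.clear()                        # step 710: release the buffer when
                                         # no synthesis input is received
```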
It should be noted that, in the video playing method provided in this embodiment of the application, the execution subject may be a video playing apparatus, or a control module in the video playing apparatus for executing the video playing method. In this embodiment, a video playing apparatus executing the video playing method is taken as an example to describe the apparatus provided herein.
As shown in fig. 8, an embodiment of the present application further provides a video playback apparatus 800, including:
a receiving module 810, configured to receive a first input to a first object in a case where a first video is played; wherein the first object is a first image in the first video, or the first object is associated with the first image in the first video;
a response module 820, configured to insert at least one frame of a third image between the first image and the second image in response to the first input, and play the first video after the third image is inserted;
wherein the content of the third image is related to the content of the first image, and the second image comprises: at least one of a frame image preceding the first image and a frame image following the first image in the first video.
Optionally, the response module 820 includes:
the copying submodule is used for copying the first target image to obtain at least one frame of third image; wherein the first target image is: a part or all of the images from the first image to the second image;
and the first frame inserting sub-module is used for inserting the third image into a position adjacent to the first target image corresponding to the third image.
Optionally, the response module 820 includes:
the generation submodule is used for generating a third image according to the contents of the second target image and the third target image; wherein the content of the third image is different from the content of the second target image and the third target image, and the second target image and the third target image are: any two adjacent images from the first image to the second image;
a second frame insertion sub-module for inserting the third image between the second target image and the third target image.
Optionally, the generating sub-module includes:
a determining unit, configured to determine a motion trajectory of the target object according to the target object included in the second target image and the third target image;
the generating unit is used for generating a third image containing the target object according to the motion track of the target object;
wherein a position of a target object included in the third image on the motion trajectory is different from positions of target objects included in the second target image and the third target image on the motion trajectory.
Optionally, the response module 820 includes:
the first determining submodule is used for determining a playing frame rate corresponding to the input parameter of the first input;
and a second determining submodule, configured to determine, according to the play frame rate, a frame number N of a third image inserted between the first image and the second image, and insert N frames of the third image between the first image and the second image, where N is a positive integer.
Optionally, the response module 820 includes:
a first playing sub-module, configured to play, starting from the second image, the first video with the third image inserted, when the second image precedes the first image;
and a second playing sub-module, configured to play, starting from the first image, the first video with the third image inserted, when the second image follows the first image.
Optionally, the response module 820 includes:
a third determining submodule, configured to determine a playing time length corresponding to the first input parameter;
the third playing submodule is used for playing the target video part in the first video after the third image is inserted into the first video according to the playing time length;
wherein the images corresponding to the target video portion are: the first image, the second image, and the third image.
The video playing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video playing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The video playing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
In this embodiment of the application, when the first video is played and the first input is received, at least one frame of the third image is inserted between the first image and the second image in the first video. Because the content of the inserted third image is related to the content of the first image, the content of the first video is enriched, so the user can view more detail when the first video with the third image inserted is played; on the basis of avoiding excessive storage occupation by video data, detail presentation during video playback is realized.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in the embodiment of the present application, and includes a processor 901, a memory 902, and a program or an instruction that is stored in the memory 902 and is executable on the processor 901, where when the program or the instruction is executed by the processor 901, the processes in the embodiment of the video playing method are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The radio frequency unit 1001 is configured to receive a first input to a first object when a first video is played; wherein the first object is a first image in the first video, or the first object is associated with the first image in the first video.
A processor 1010, configured to insert at least one frame of a third image between the first image and the second image in response to the first input, and play the first video inserted with the third image through the display unit 1006, or play the first video inserted with the third image through the audio output unit 1003 and the display unit 1006.
Wherein the content of the third image is related to the content of the first image, the second image comprising: at least one of a frame of image in the first video that precedes the first image and a frame of image that follows the first image.
Optionally, the processor 1010 is configured to perform copy processing on the first target image to obtain at least one frame of a third image; wherein the first target image is: a part or all of the image from the first image to the second image; inserting the third image to a position adjacent to a first target image corresponding to the third image.
Optionally, the processor 1010 is further configured to generate a third image according to the contents of the second target image and the third target image; wherein the content of the third image is different from the content of the second target image and the third target image, and the second target image and the third target image are: any two adjacent images from the first image to the second image; inserting the third image between the second target image and the third target image.
Optionally, the processor 1010 is further configured to determine a motion trajectory of the target object according to the target object included in the second target image and the third target image; generating a third image containing the target object according to the motion trail of the target object; wherein a position of a target object included in the third image on the motion trajectory is different from positions of target objects included in the second target image and the third target image on the motion trajectory.
Optionally, the processor 1010 is further configured to determine a play frame rate corresponding to the input parameter of the first input; and according to the playing frame rate, determining the number N of frames of a third image inserted between the first image and the second image, and inserting N frames of the third image between the first image and the second image, wherein N is a positive integer.
Optionally, the processor 1010 is further configured to, when the second image precedes the first image, play, starting from the second image, the first video with the third image inserted through the display unit 1006, or through the audio output unit 1003 and the display unit 1006;
optionally, the processor 1010 is further configured to, when the second image follows the first image, play, starting from the first image, the first video with the third image inserted through the display unit 1006, or through the audio output unit 1003 and the display unit 1006.
Optionally, the processor 1010 is further configured to determine a playing duration corresponding to the input parameter of the first input; and playing the target video part in the first video inserted with the third image according to the playing time length through the display unit 1006, or playing the target video part in the first video inserted with the third image according to the playing time length through the audio output unit 1003 and the display unit 1006.
Wherein the images corresponding to the target video portion are: the first image, the second image, and the third image.
In this embodiment of the application, when the first video is played and the first input is received, at least one frame of the third image is inserted between the first image and the second image in the first video. Because the content of the inserted third image is related to the content of the first image, the content of the first video is enriched, so the user can watch more detail when the first video with the third image inserted is played; on the basis of avoiding excessive storage occupation by video data, detail presentation during video playback is realized.
It should be understood that, in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video playing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the embodiments of the present application have been described above with reference to the accompanying drawings, the present application is not limited to the specific embodiments described, which are illustrative rather than restrictive; various changes and modifications may be made by those skilled in the art without departing from the scope of the appended claims.

Claims (15)

1. A video playing method, comprising:
receiving a first input to a first object in a case where a first video is played; wherein the first object is a first image in the first video, or the first object is a target object of the first image in the first video, and the first object is associated with the first image in the first video;
in response to the first input, inserting at least one frame of a third image between the first image and the second image, and playing the first video after the third image is inserted;
wherein the content of the third image is related to the content of the first image, and the second image comprises: at least one of a frame image preceding the first image and a frame image following the first image in the first video.
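By way of illustration, the flow of claim 1 can be sketched in Python as follows; Frame, play_with_insertion, and make_third_images are names assumed purely for this sketch and do not come from the claims, and the sketch fixes the second image as the frame after the first image although the claim equally allows the frame before it.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Frame:
        index: int
        pixels: bytes  # stand-in for decoded image data

    def play_with_insertion(frames: List[Frame], first_idx: int,
                            make_third_images: Callable[[Frame, Frame], List[Frame]]
                            ) -> List[Frame]:
        # Take the "second image" as the frame following the first image.
        second_idx = first_idx + 1
        third = make_third_images(frames[first_idx], frames[second_idx])
        # Splice the generated third images between the first and second
        # images and hand the lengthened sequence to the player.
        return frames[:second_idx] + third + frames[second_idx:]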
2. The video playing method according to claim 1, wherein said inserting at least one frame of a third image between the first image and the second image comprises:
copying the first target image to obtain at least one frame of a third image; wherein the first target image is: a part or all of the images from the first image to the second image;
inserting the third image at a position adjacent to the first target image corresponding to the third image.
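A minimal sketch of the claim-2 approach, assuming frames is an in-memory list of decoded frames and that start and end bound the span from the first image to the second image; duplication slows the apparent motion over that span without synthesizing new content.

    def insert_by_copying(frames, start, end, copies=1):
        # Duplicate each frame in frames[start:end + 1] and keep every
        # duplicate adjacent to its source frame, per claim 2.
        out = list(frames[:start])
        for frame in frames[start:end + 1]:
            out.extend([frame] * (copies + 1))  # the original plus its copies
        out.extend(frames[end + 1:])
        return out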
3. The video playing method according to claim 1, wherein said inserting at least one frame of a third image between the first image and the second image comprises:
generating a third image according to the contents of the second target image and the third target image; wherein the content of the third image is different from the content of the second target image and the third target image, and the second target image and the third target image are: any two adjacent images from the first image to the second image;
inserting the third image between the second target image and the third target image.
4. The video playing method according to claim 3, wherein the generating the third image according to the contents of the second target image and the third target image comprises:
determining a motion trajectory of the target object according to the target object contained in the second target image and the third target image;
generating a third image containing the target object according to the motion trajectory of the target object;
wherein the position, on the motion trajectory, of the target object included in the third image is different from the positions, on the motion trajectory, of the target object included in the second target image and in the third target image.
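A heavily simplified sketch of the claims 3-4 idea: it assumes the target object's positions in the two adjacent frames are already known and that the trajectory between them is linear, which real motion estimation does not guarantee.

    def position_on_trajectory(pos_a, pos_b, t=0.5):
        # Place the target object at an intermediate point of the (assumed
        # linear) trajectory; for 0 < t < 1 this point is distinct from
        # both endpoint positions, as claim 4 requires.
        (xa, ya), (xb, yb) = pos_a, pos_b
        return (xa + t * (xb - xa), ya + t * (yb - ya))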
5. The video playing method according to claim 1, wherein said inserting at least one frame of a third image between the first image and the second image comprises:
determining a playing frame rate corresponding to the input parameter of the first input;
and according to the playing frame rate, determining the number N of frames of a third image inserted between the first image and the second image, and inserting N frames of the third image between the first image and the second image, wherein N is a positive integer.
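One plausible reading of claim 5, sketched below: the input parameter (for example, a press duration) selects a playing frame rate, and N falls out of the ratio between that rate and the source frame rate. The rounding rule here is an assumption; the claim only requires N to be a positive integer.

    def frames_to_insert(source_fps: float, target_fps: float) -> int:
        # N third images per gap between adjacent source frames; e.g. a
        # 30 fps source stretched to an apparent 120 fps gives N = 3.
        n = round(target_fps / source_fps) - 1
        return max(int(n), 1)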
6. The video playing method according to claim 1, wherein said playing the first video after inserting the third image comprises:
in a case where the second image is located before the first image, playing, starting from the second image, the first video after the third image is inserted; and
in a case where the second image is located after the first image, playing, starting from the first image, the first video after the third image is inserted.
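Claim 6 reduces to choosing the earlier of the two anchor frames as the playback start, as in this one-line sketch (frame indices are assumed):

    def playback_start(first_idx: int, second_idx: int) -> int:
        # Start from whichever of the first and second images comes
        # earlier, so every inserted third image is actually shown.
        return min(first_idx, second_idx)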
7. The video playing method according to claim 1, wherein said playing the first video after inserting the third image comprises:
determining a playing time length corresponding to the input parameter of the first input;
playing, according to the playing time length, the target video part in the first video after the third image is inserted;
wherein the images corresponding to the target video part are: the first image, the second image, and the third image.
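A sketch of the claim-7 timing: the playing time length selected by the input parameter is spread over the target video part (the first, second, and third images); the even spacing per frame is an assumption of this sketch.

    def per_frame_interval(num_target_frames: int, play_seconds: float) -> float:
        # Display time per frame when the target video part must fill the
        # selected playing time length.
        return play_seconds / num_target_frames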
8. A video playing device, comprising:
a receiving module, configured to receive a first input to a first object in a case where a first video is played; wherein the first object is a first image in the first video, or the first object is a target object of the first image in the first video, and the first object is associated with the first image in the first video;
a response module, configured to, in response to the first input, insert at least one frame of a third image between the first image and the second image, and play the first video after the third image is inserted;
wherein the content of the third image is related to the content of the first image, and the second image comprises: at least one of a frame image preceding the first image and a frame image following the first image in the first video.
9. The video playing device according to claim 8, wherein the response module comprises:
a copying sub-module, configured to copy the first target image to obtain at least one frame of a third image; wherein the first target image is: a part or all of the images from the first image to the second image;
a first frame insertion sub-module, configured to insert the third image at a position adjacent to the first target image corresponding to the third image.
10. The video playing device according to claim 8, wherein the response module comprises:
a generation sub-module, configured to generate a third image according to the contents of a second target image and a third target image; wherein the content of the third image is different from the contents of the second target image and the third target image, and the second target image and the third target image are: any two adjacent images from the first image to the second image;
a second frame insertion sub-module, configured to insert the third image between the second target image and the third target image.
11. The video playing device according to claim 10, wherein the generation sub-module comprises:
a determining unit, configured to determine a motion trajectory of the target object according to the target object contained in the second target image and the third target image;
a generating unit, configured to generate a third image containing the target object according to the motion trajectory of the target object;
wherein the position, on the motion trajectory, of the target object included in the third image is different from the positions, on the motion trajectory, of the target object included in the second target image and in the third target image.
12. The video playing device according to claim 8, wherein the response module comprises:
a first determining sub-module, configured to determine a playing frame rate corresponding to the input parameter of the first input;
a second determining sub-module, configured to determine, according to the playing frame rate, a number N of frames of a third image to be inserted between the first image and the second image, and insert N frames of the third image between the first image and the second image, wherein N is a positive integer.
13. The video playing device according to claim 8, wherein the response module comprises:
a first playing sub-module, configured to, in a case where the second image is located before the first image, play, starting from the second image, the first video after the third image is inserted;
a second playing sub-module, configured to, in a case where the second image is located after the first image, play, starting from the first image, the first video after the third image is inserted.
14. The video playing device according to claim 8, wherein the response module comprises:
a third determining sub-module, configured to determine a playing time length corresponding to the input parameter of the first input;
a third playing sub-module, configured to play, according to the playing time length, the target video part in the first video after the third image is inserted;
wherein the images corresponding to the target video part are: the first image, the second image, and the third image.
15. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video playing method according to any one of claims 1 to 7.
CN202011408989.1A 2020-12-04 2020-12-04 Video playing method and device and electronic equipment Active CN112565868B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011408989.1A CN112565868B (en) 2020-12-04 2020-12-04 Video playing method and device and electronic equipment
PCT/CN2021/134313 WO2022116962A1 (en) 2020-12-04 2021-11-30 Video playback method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011408989.1A CN112565868B (en) 2020-12-04 2020-12-04 Video playing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112565868A CN112565868A (en) 2021-03-26
CN112565868B true CN112565868B (en) 2022-12-06

Family

ID=75048256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011408989.1A Active CN112565868B (en) 2020-12-04 2020-12-04 Video playing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN112565868B (en)
WO (1) WO2022116962A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565868B (en) * 2020-12-04 2022-12-06 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN113271494B (en) * 2021-04-16 2023-04-11 维沃移动通信有限公司 Video frame processing method and device and electronic equipment
CN113207038B (en) * 2021-04-21 2023-04-28 维沃移动通信(杭州)有限公司 Video processing method, video processing device and electronic equipment
CN115103054B (en) * 2021-09-22 2023-10-13 维沃移动通信(杭州)有限公司 Information processing method, device, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107396165A (en) * 2016-05-16 2017-11-24 杭州海康威视数字技术股份有限公司 A kind of video broadcasting method and device
CN110933315A (en) * 2019-12-10 2020-03-27 Oppo广东移动通信有限公司 Image data processing method and related equipment
CN110996170A (en) * 2019-12-10 2020-04-10 Oppo广东移动通信有限公司 Video file playing method and related equipment
CN111813490A (en) * 2020-08-14 2020-10-23 Oppo广东移动通信有限公司 Method and device for processing interpolation frame
CN111918099A (en) * 2020-09-16 2020-11-10 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10165245B2 (en) * 2012-07-06 2018-12-25 Kaltura, Inc. Pre-fetching video content
US9727185B2 (en) * 2015-03-06 2017-08-08 Apple Inc. Dynamic artifact compensation systems and methods
CN110913260B (en) * 2018-09-18 2023-07-14 阿里巴巴(中国)有限公司 Display control method, display control device and electronic equipment
CN111083417B (en) * 2019-12-10 2021-10-19 Oppo广东移动通信有限公司 Image processing method and related product
CN111064863B (en) * 2019-12-25 2022-04-15 Oppo广东移动通信有限公司 Image data processing method and related device
CN112565868B (en) * 2020-12-04 2022-12-06 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN113014937B (en) * 2021-02-24 2022-09-16 北京百度网讯科技有限公司 Video frame insertion method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112565868A (en) 2021-03-26
WO2022116962A1 (en) 2022-06-09

Similar Documents

Publication Publication Date Title
CN112565868B (en) Video playing method and device and electronic equipment
CN107181976A (en) A kind of barrage display methods and electronic equipment
JP6321301B2 (en) Video special effect processing method, apparatus, terminal device, program, and recording medium
CN112954199B (en) Video recording method and device
WO2022143525A1 (en) Video playing method and apparatus, and electronic device
CN112672061A (en) Video shooting method and device, electronic equipment and medium
CN114613306A (en) Display control chip, display panel and related equipment, method and device
CN114466232A (en) Video processing method, video processing device, electronic equipment and medium
CN112905134A (en) Method and device for refreshing display and electronic equipment
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN115543137A (en) Video playing method and device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN113810538B (en) Video editing method and video editing device
CN113852756B (en) Image acquisition method, device, equipment and storage medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN111757177B (en) Video clipping method and device
CN115484490A (en) Video processing method, device, equipment and storage medium
CN115002551A (en) Video playing method and device, electronic equipment and medium
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN113923514A (en) Display device and MEMC (motion estimation and motion estimation) repeated frame discarding method
CN113852774A (en) Screen recording method and device
CN113271494A (en) Video frame processing method and device and electronic equipment
CN112883306A (en) Page display method and device
CN112565909A (en) Video playing method and device, electronic equipment and readable storage medium
CN112418942A (en) Advertisement display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant