CN114173177B - Video processing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN114173177B
Application number: CN202111466837.1A
Authority: CN (China)
Prior art keywords: video, target, target video, frame sequence, playing
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN114173177A
Inventor: 张继丰
Current and original assignee: Beijing Baidu Netcom Science and Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Events: application filed by Beijing Baidu Netcom Science and Technology Co Ltd; priority to CN202111466837.1A; publication of CN114173177A; application granted; publication of CN114173177B

Classifications

All classifications fall under H04N (pictorial communication, e.g. television) → H04N21/00 (selective content distribution, e.g. interactive television or video on demand [VOD]) → H04N21/40 (client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]):

    • H04N21/4312 — generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/43072 — synchronising the rendering of multiple content streams or additional data on the same device
    • H04N21/44008 — processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4532 — management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/4667 — processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections

Abstract

The disclosure provides a video processing method, apparatus, device, and storage medium, relating to the technical field of data processing and in particular to the field of intelligent recommendation. The specific implementation scheme is as follows: a target video is played in a video playing area of the current display interface; when a preset condition is met, at least some of the video frames in a target video frame sequence are displayed, the target video frame sequence being selected from the target video. In this way, the user experience is both enriched and improved.

Description

Video processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technology, and in particular, to intelligent recommendation.
Background
With the continuous development of Internet technology, more and more users like to watch videos over the network. To meet users' personalized requirements and improve the user experience, network video therefore needs to be continuously optimized.
Disclosure of Invention
The present disclosure provides a video processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a video processing method including:
Playing a target video in a video playing area of a current display interface;
and under the condition that the preset condition is met, displaying at least part of video frames in a target video frame sequence, wherein the target video frame sequence is selected from the target video.
According to another aspect of the present disclosure, there is provided a video processing apparatus including:
the playing unit is used for playing the target video in the video playing area of the current display interface;
and the display unit is used for displaying at least part of video frames in a target video frame sequence under the condition that the preset condition is met, wherein the target video frame sequence is selected from the target video.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described method.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method described above.
In this way, the present disclosure can actively display some of the video frames of the video being played, so that a user can conveniently select a desired target image from the displayed frames, which both enriches and improves the user experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of an implementation of a video processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of video playback effects in a specific example of a video processing method according to an embodiment of the present disclosure;
fig. 3 (a) to 3 (c) are schematic diagrams illustrating a positional relationship between an image display area and a video playing area in a video processing method according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of an image presentation area presenting thumbnail in a particular example of a video processing method according to an embodiment of the present disclosure;
FIGS. 5 (a) and 5 (b) are schematic interface diagrams of a video processing method in a particular process flow according to an embodiment of the disclosure;
FIG. 6 is an interface diagram of a video processing method in another particular process flow according to an embodiment of the present disclosure;
fig. 7 is a schematic structural view of a video processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing a video processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The disclosed scheme provides a video processing method; specifically, as shown in fig. 1, the method includes:
step S101: and playing the target video in the video playing area of the current display interface. Here, the video playing area may be a part of the display area in the current display interface, or may be the whole display area, which is not limited in the present disclosure.
Step S102: and under the condition that the preset condition is met, displaying at least part of video frames in a target video frame sequence, wherein the target video frame sequence is selected from the target video. In this way, the user can select a desired target image from the displayed video frames.
In the present disclosure, the preset condition may be set based on an actual scene requirement, which is not limited in the present disclosure.
In this disclosure, the target video frame sequence includes at least two video frames. Accordingly, the video frame may be one frame of image, or two or more frames of image, which is not limited in the present disclosure.
It should be noted that, at least part of the video frames in the target video frame sequence may be displayed synchronously in the process of playing the target video, or may be displayed after the playing of the target video is stopped.
In this way, the method and the device can display at least part of video frames in the target video frame sequence selected from the target video under the condition that the preset condition is met, so that a user can select from the displayed video frames, in other words, the method and the device can actively provide video frames for display and for user selection.
In a specific example of the disclosed approach, the target video frame sequence is derived based on at least one of:
first kind: the target video frame sequence is obtained based on user behavior features corresponding to the target video; that is, the target video frame sequence is selected from the target video based on user behavior features. For example, the user behavior features may include historical behaviors performed on the target video, such as user comment data, user screenshots, and the like. In this case, the video frames in the selected target video frame sequence may be continuous or discontinuous frames.
Second kind: the target video frame sequence is obtained based on image feature information of the video frames in the target video; that is, the target video frame sequence is selected from the target video based on image feature information. For example, the image feature information may include sharpness, how visually striking a frame is compared with other frames, image completeness, and so on. This facilitates selecting highlight frames from the target video based on image feature information. In this case, the video frames in the selected target video frame sequence may be continuous or discontinuous frames.
Third kind: the target video frame sequence is obtained based on a highlight contained in the target video, for example, the target video frame sequence is a highlight, and at this time, the video frames in the target video frame sequence are continuous frames. It should be noted that the highlight may also be determined based on a user's historical behavior, for example, N consecutive video frames with the largest number of comments are taken as a highlight, N is a positive integer greater than or equal to 2, and the disclosure is not limited thereto. Alternatively, the target video frame sequence is a part of video frames selected from highlight clips contained in the target video, and at this time, the video frames in the target video frame sequence are discontinuous frames.
It should be noted that the target video frame sequence may be selected using any one, any two, or all three of the above modes, which is not limited in the present disclosure.
Therefore, the selection modes of the target video frame sequence are enriched, a foundation is laid for providing video frames meeting the demands of users to the greatest extent, and a foundation is laid for further improving the user experience.
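The three selection modes above can be combined in code. The sketch below is a hypothetical illustration, not the patent's implementation: per-frame comment counts stand in for user behavior features, a per-frame score stands in for image feature information, and a sliding window over comment counts locates a highlight (the "N consecutive video frames with the largest number of comments" mentioned earlier). All function and variable names are invented.

```python
# Hypothetical sketch of selecting a target video frame sequence.
# comment_counts[i] = number of user comments on frame i (behavior signal);
# feature_scores[i] = image-feature score of frame i (e.g. sharpness).

def select_target_frames(comment_counts, feature_scores, k=4):
    """Pick k frame indices ranked by comment count, breaking ties by feature score."""
    indices = range(len(comment_counts))
    ranked = sorted(indices,
                    key=lambda i: (comment_counts[i], feature_scores[i]),
                    reverse=True)
    return sorted(ranked[:k])  # keep the frames in playback order

def highlight_segment(comment_counts, n=3):
    """Return the start index of the n consecutive frames with the most comments."""
    best_start, best_sum = 0, -1
    for s in range(len(comment_counts) - n + 1):
        total = sum(comment_counts[s:s + n])
        if total > best_sum:
            best_start, best_sum = s, total
    return best_start

comments = [0, 5, 9, 9, 2, 1, 7, 0]
scores   = [0.2, 0.9, 0.8, 0.7, 0.3, 0.1, 0.6, 0.4]
print(select_target_frames(comments, scores))  # [1, 2, 3, 6]
print(highlight_segment(comments))             # 1 (frames 1-3 draw the most comments)
```

Note that `select_target_frames` may return discontinuous frames, while `highlight_segment` always yields a run of consecutive frames, matching the distinction drawn in the three modes above.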
In a specific example of the present disclosure, when the video frames contained in the target video frame sequence are consecutive frames, such as a highlight in the target video, the target video frame sequence is played in the video playing area at a playing speed lower than that of the other video frames in the target video, so that it is displayed slowly. In other words, when the target video frame sequence consists of consecutive video frames, the other frames in the target video are played at normal speed while the target video frame sequence is played slightly slower than normal.
Alternatively, when the video frames contained in the target video frame sequence are discontinuous frames selected from a highlight, the highlight corresponding to the target video frame sequence is played in the video playing area at a playing speed lower than that of the other video frames in the target video, so that the highlight, rather than only the selected frames, is displayed slowly.
In this way, a variable-speed playing mode is provided, so that the user can browse the target video frame sequence or the highlight at low speed, further enriching the user experience. Moreover, because the target video frame sequence or the highlight is played slowly while the other video frames are played at normal speed, with no user operation required, the user's need to watch highlights slowly is satisfied, further improving the user experience.
For example, as shown in fig. 2, the target video includes a target video frame sequence and other video frames (i.e., an ordinary video frame sequence). During playback, the ordinary video frames may be played at a normal speed, such as 24 frames per second (24 frames/s): in one second, frame 1, frame 2, …, frame m, … are played continuously up to frame 24. The target video frame sequence is played at 4 frames per second (4 frames/s): in one second, only frames 1 through 4 of the sequence are played. This produces the variable-speed playing effect: the target video frame sequence alone is played at 4 frames/s while all other video frames are played at 24 frames/s, meeting the user's need to watch the highlight slowly and further improving the user experience.
It should be noted that the foregoing is merely illustrative, and is not intended to limit the disclosure, and in an actual scenario, the specific manner of normal speed playing or slow speed playing may also be set based on actual requirements, or set based on user requirements.
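The variable-speed schedule in the example above (24 frames/s for ordinary frames, 4 frames/s for the target sequence) can be sketched as per-frame display durations. This is a minimal illustration, not the patent's implementation; the frame rates and indices are simply those used in the example.

```python
# Minimal sketch of the variable-speed schedule: ordinary frames at 24 frames/s,
# frames in the target sequence at 4 frames/s. Rates and indices are illustrative.

def frame_durations(total_frames, target_frames, normal_fps=24, slow_fps=4):
    """Return per-frame display durations in seconds for the whole video."""
    target = set(target_frames)
    return [1 / slow_fps if i in target else 1 / normal_fps
            for i in range(total_frames)]

# 48 frames total, of which frames 24-27 form the target sequence:
durations = frame_durations(48, target_frames=range(24, 28))
print(sum(durations))  # 44 normal frames + 4 slow frames ≈ 2.83 s of playback
```

Played at a uniform 24 frames/s the same 48 frames would take 2 s; the slowed segment stretches total playback without requiring any user operation.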
In a specific example of the present disclosure, at least a portion of the video frames in the target video frame sequence may be presented in the following manner:
mode one: specifically, displaying at least part of the video frames in the target video frame sequence includes: displaying at least part of the video frames in the target video frame sequence in an image display area of the current display interface. That is, the current display interface contains an image display area; the video playing area is used for playing the video while the image display area is used for displaying video frames, thereby accommodating and satisfying different user requirements.
In a specific example of the present disclosure, the positional relationship between the image display area and the video playing area is any one of the following:
first kind: the image display area is the same as the video playing area; as shown in fig. 3 (a), the image display area completely overlaps with, or is identical to, the video playing area. In other words, no separate image display area is provided, and the video playing area combines the video playing and image display functions. In this case, video playing and image display cannot occur simultaneously, and at least part of the video frames in the target video frame sequence can be displayed only after video playback has stopped.
It should be noted that, the display area of the video playing area in the display interface may be determined based on actual requirements, for example, all the display areas of the display interface may be used as the video playing area, or part of the display areas in the display interface may be used as the video playing area, and other part of the display areas may be used as the display areas of other information.
Second kind: the image display area covers the video playing area, and the display area of the image display area is larger than that of the video playing area; in this manner, the image display area may be understood as being displayed in the display interface in the form of a pop-up window.
For example, as shown in fig. 3 (b), the image display area is overlaid on the video playing area, and the display area of the image display area is larger than that of the video playing area (in this example, the video playing area is the same as that shown in fig. 3 (a)), so that the recommended video frames are prominently displayed. In this case, video playing and image display cannot occur simultaneously, and at least part of the video frames in the target video frame sequence can be displayed only after video playback has stopped.
In this example, the image display area is not completely covered on the video playing area, but is partially covered on the video playing area, and it should be noted that, in practical application, the positional relationship between the two may be set based on the actual requirement, which is not limited in this aspect of the disclosure, so long as the recommended video frame can be highlighted.
In addition, it should be noted that the areas occupied by the image display area and the video playing area in the display interface may be determined based on actual requirements. For example, the whole display interface may serve as the image display area while part of it serves as the video playing area, with the two overlapping; when video frame recommendation is needed, the image display area covers the video playing area, which is then in a stopped state. Alternatively, one part of the display interface serves as the image display area or the video playing area while the remaining part displays other information, which is not limited in the scheme of the present disclosure.
Third kind: and the image display area is positioned in other areas except the video playing area in the display interface. As shown in fig. 3 (c), at least two areas, namely a video playing area and an image displaying area, are displayed in the display interface, and are not overlapped; in this case, the video playing process and the image displaying process may be performed simultaneously, and it is not necessary to display at least part of the video frames in the target video frame sequence after the video playing is stopped.
In addition, it should be noted that the display areas of the image display area and the video playing area in the display interface may be determined based on actual requirements; in addition, other information display areas can exist in the display interface, and the scheme of the disclosure is not limited to the display area.
Therefore, the scheme of the disclosure provides different display effects of the display interface, lays a foundation for being compatible with the existing video display mode, minimizing, optimizing and adjusting the existing video display mode, and simultaneously lays a foundation for meeting different requirements of users. Moreover, the method is simple and intelligent.
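The three positional relationships above can be summarized in a small sketch. The enum names, and the rule that playback must stop whenever the two areas overlap (the first and second kinds), paraphrase the description above; they are not drawn from any actual implementation.

```python
# Sketch of the positional relationships between the image display area
# and the video playing area. Names and the pause rule are illustrative.
from enum import Enum

class Layout(Enum):
    SAME_AREA = 1   # image area reuses the playing area (fig. 3(a))
    POPUP_OVER = 2  # image area overlays and exceeds the playing area (fig. 3(b))
    SEPARATE = 3    # non-overlapping areas (fig. 3(c))

def can_play_while_showing_frames(layout):
    """Only non-overlapping areas allow simultaneous playback and frame display."""
    return layout is Layout.SEPARATE

print(can_play_while_showing_frames(Layout.SEPARATE))    # True
print(can_play_while_showing_frames(Layout.POPUP_OVER))  # False
```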
Mode two: specifically, displaying at least part of the video frames in the target video frame sequence includes: jumping from the current display interface to a next display interface, and displaying at least part of the video frames in the target video frame sequence in an image display area of the next display interface. In other words, mode one requires no interface jump, whereas in this mode, when the recommended video frames need to be displayed, the current display interface jumps to the next display interface, and at least part of the video frames in the target video frame sequence are displayed in the interface after the jump.
Therefore, the scheme of the disclosure provides different display effects of the display interface, lays a foundation for being compatible with the existing video display mode, minimizing, optimizing and adjusting the existing video display mode, and simultaneously lays a foundation for meeting different requirements of users. Moreover, the method is simple and intelligent.
Mode three: specifically, as shown in fig. 4, displaying at least part of the video frames in the target video frame sequence includes: displaying all video frames in the target video frame sequence in thumbnail form; alternatively, displaying part of the video frames in the target video frame sequence in thumbnail form.
It should be noted that modes one and two focus on the interface presentation form, whereas this mode focuses on the video frame presentation form. Mode three can therefore be combined with mode one or mode two. For example, all video frames in the target video frame sequence, or only part of them, may be displayed in thumbnail form in the image display area shown in fig. 3 (a), fig. 3 (b), or fig. 3 (c); likewise, after mode two jumps to the next display interface, all or part of the video frames in the target video frame sequence may be displayed there in thumbnail form.
Therefore, the specific display effect of the video frames in the image display area is provided, a foundation is laid for compatibility with the existing video display mode, minimization, optimization and adjustment of the existing video display mode, and meanwhile, a foundation is laid for meeting different requirements of users. Moreover, the method is simple and intelligent.
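A hypothetical sketch of mode three: frames from the target sequence are shown either all at once or as a subset, each reduced to a thumbnail. The 120-pixel thumbnail width and the evenly spaced subset strategy are assumptions for illustration, not details given in the disclosure.

```python
# Illustrative sketch of thumbnail display for the target frame sequence.
# The 120 px width cap and evenly spaced sampling are assumptions.

def thumbnail_size(width, height, max_width=120):
    """Scale (width, height) down, preserving aspect ratio, to at most max_width wide."""
    if width <= max_width:
        return width, height
    scale = max_width / width
    return max_width, round(height * scale)

def pick_thumbnails(frame_ids, show_all=False, limit=4):
    """Show all frames, or an evenly spaced subset of at most `limit` frames."""
    if show_all or len(frame_ids) <= limit:
        return list(frame_ids)
    step = len(frame_ids) / limit
    return [frame_ids[int(i * step)] for i in range(limit)]

print(thumbnail_size(1920, 1080))        # (120, 68)
print(pick_thumbnails(list(range(10))))  # [0, 2, 5, 7]
```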
In a specific example of the present disclosure, to further fit the actual scene requirement, to meet the specific requirement of the user in the actual scene, the method further includes:
When not all video frames in the target video frame sequence are displayed, in response to a sliding operation, the video frames in the target video frame sequence are scrolled within the image display area of the current display interface. This adjusts the display positions of the video frames in the image display area and brings into view at least some of the frames that were not previously shown, so that the user can select a desired target image from the displayed frames. That is, to accommodate users' existing operating habits, the scheme of the present disclosure is also compatible with a sliding operation, which updates the video frames displayed in the current display interface so that frames not yet shown are presented to the user for selection.
It will be appreciated that the current display interface is the display interface currently being browsed by the user. For example, in the second mode, that is, in the case that the display interface is jumped, the current display interface in this example is the next display interface after the jump.
It should be noted that, to remain compatible with other touch operations, the sliding operation in this example needs to be a sliding operation within a specific area, such as a sliding operation within the image display area.
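The scrolling behavior can be sketched as a clamped window over the thumbnail strip: a swipe shifts the window so frames not yet on screen become visible. The window size and swipe step below are illustrative assumptions.

```python
# Sketch of scrolling the thumbnail strip in the image display area.
# `visible` (window size) and the swipe delta are illustrative values.

def scroll_window(frames, offset, delta, visible=3):
    """Shift the visible window by delta thumbnails, clamped to the valid range."""
    max_offset = max(0, len(frames) - visible)
    offset = min(max(offset + delta, 0), max_offset)
    return offset, frames[offset:offset + visible]

frames = ['f1', 'f2', 'f3', 'f4', 'f5']
offset, shown = scroll_window(frames, 0, +2)       # user swipes left
print(shown)  # ['f3', 'f4', 'f5']
offset, shown = scroll_window(frames, offset, +5)  # clamped at the end
print(shown)  # ['f3', 'f4', 'f5']
```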
In a specific example of the present disclosure, determining that the preset condition is met if any one of the following conditions is met includes:
first kind: responding to a first touch operation for a video frame control in a current display interface; that is, in the process of playing the target video in the video playing area, at least part of video frames in the target video frame sequence are displayed in response to the first touch operation for the video frame control in the current display interface, so that a user can select a required target image from the displayed video frames.
For example, as shown in fig. 5 (a), during the process of playing the target video in the video playing area, in response to the first touch operation for the video frame control in the current display interface, the image display area is popped up, and at least part of the video frames in the target video frame sequence are displayed in the image display area, so that the user can select a required target image from the displayed video frames.
Here, during playback of the target video, when the target video frame sequence is reached and it consists of consecutive frames, the target video frame sequence may be played slowly while the other video frames are played at normal speed.
It can be appreciated that the location where the video frame control is set, and the timing presented, can be set based on actual requirements; for example, the display device is arranged in a video playing area or a non-video playing area in a display interface; for example, the video frame control is displayed while the target video is played, or the video frame control is displayed after the target video frame sequence is played, or the video frame control is displayed after the target video is played, which is not limited in the scheme of the disclosure.
In addition, it can be understood that in this manner, if the video playing area and the image display area are located in different, non-overlapping areas of the display interface, at least part of the video frames in the target video frame sequence can be displayed synchronously while the target video is playing. Alternatively, playback of the target video may first be stopped so that the user can focus, and at least part of the video frames of the target video frame sequence may then be displayed in the image display area.
Second mode: responding to a second touch operation on the target video, and stopping playback of the target video. That is, while the target video is playing in the video playing area, playback is stopped in response to a second touch operation on the target video, such as a click or double-click, after which at least some video frames in the target video frame sequence are displayed so that the user can select a desired target image. In other words, once the target video stops playing, at least some video frames in the target video frame sequence are displayed automatically.
Third mode: determining that playback of the target video is complete. That is, after the target video finishes playing, at least some video frames in the target video frame sequence are displayed automatically, so that the user can select a desired target image from the displayed frames.
For example, as shown in fig. 5 (b), while the target video is playing in the video playing area, a stop operation on the target video is detected, or playback of the target video completes; at least some video frames in the target video frame sequence are then displayed in the image display area, for example video frame 1 and video frame 2 displayed as thumbnails, so that the user can select a desired target image from the displayed frames.
In practical applications, the three modes above may be implemented as alternatives, or any two or all three may coexist; the scheme of the present disclosure is not limited in this regard.
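The three trigger modes above can be sketched as a single dispatch function. The event names and the boolean interface below are illustrative assumptions, not part of the disclosed scheme.

```python
def should_show_frames(event: str, playback_finished: bool = False) -> bool:
    """Decide whether the preset condition is met and video frames from the
    target sequence should be presented for selection."""
    if event == "frame_control_tap":   # first mode: touch on the video frame control
        return True
    if event == "video_tap_stop":      # second mode: tap/double-tap stops playback
        return True
    # third mode: playback of the target video has completed
    return playback_finished
```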
In this way, the scheme specifies when at least some video frames in the target video frame sequence are displayed, thereby providing an intelligent human-machine interaction interface and laying a foundation for further enriching and improving the user experience.
In a specific example of the disclosed scheme, a third touch operation on a displayed video frame may also be responded to. That is, in response to a selection operation on a displayed video frame, the video frame selected by the third touch operation is taken as the target image and displayed in an enlarged manner, so that the display area of the enlarged target image is larger than both the display area of the original video frame and the display area of the target image before enlargement. This makes subsequent operations more convenient for the user and further improves the user experience.
In a specific example of the disclosed solution, to further facilitate subsequent user operations on the target image, when the target image is selected, other operation controls for the target image are displayed in the current display interface, so that the user can perform other operations on the target image.
In a specific example of the disclosed scheme, after the other operation controls for the target image are displayed in the current display interface, an operation matching one of those controls is performed on the target image in response to a fourth touch operation on that control. Here, the other operation controls include at least one of a save control, a share control, and an edit control. In this way, subsequent operations on the target image are completed, laying a foundation for meeting the operation requirements of different users.
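The control dispatch described above can be sketched as follows. The handler names and return values are hypothetical, chosen only for illustration of mapping the save / share / edit controls to matching operations.

```python
def handle_control(control: str, image: str) -> str:
    """Perform the operation matched to the touched control on the target image."""
    handlers = {
        "save":  lambda img: f"saved:{img}",    # save control
        "share": lambda img: f"shared:{img}",   # share control
        "edit":  lambda img: f"editing:{img}",  # edit control
    }
    if control not in handlers:
        raise ValueError(f"unknown control: {control}")
    return handlers[control](image)
```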
It will be appreciated that the touch operations described above may specifically be click operations on a touch display screen, similar to existing operation manners; the present disclosure is not limited in this regard.
For example, as shown in fig. 6, while the target video is playing in the video playing area, in response to a click on the video frame control in the current display interface, playback of the target video may be stopped so that the user can focus, and at least some video frames in the target video frame sequence are displayed in the image display area; for example, video frame 1 and video frame 2 are displayed as thumbnails. Here, the image display area may also respond to a sliding operation so that other video frames in the target video frame sequence can be viewed, allowing the user to select a desired target image from the displayed frames. Further, when the user selects video frame 1, video frame 1 becomes the target image and is displayed in an enlarged manner. Further, a save control, a share control and an edit control for the target image can be displayed in the display interface, so that the user can perform subsequent operations on the target image based on actual requirements.
It is to be appreciated that the locations where the save control, the share control, and the edit control are set may be set based on actual needs, as the present disclosure is not limited in this regard.
In this way, the method and the device can display at least part of video frames in the target video frame sequence selected from the target video under the condition that the preset condition is met, so that a user can select from the displayed video frames, in other words, the method and the device can actively provide video frames for display and for user selection.
The present disclosure is described in further detail below with reference to specific examples. With the progress of internet technology, video playback technology has developed rapidly and is widely used. For example, a user may complete related interactions (comments, favorites, screenshots, etc.) while watching video in an application (APP). However, the following case may arise: a user watching a video wants to capture a highlight frame via a screenshot, and completes the capture through the screenshot function of the mobile phone system; the resulting image, however, often contains non-video content, such as part of the phone's interface, which noticeably degrades the user experience. Moreover, removing that non-video content requires image editing and similar operations, making the workflow cumbersome. To solve these problems, the examples of the present disclosure provide a method for actively displaying highlight frames, further improving the intelligence of video software and the user experience.
Specifically, examples of the present disclosure may include the following core modules:
the video image frame acquisition module, which is used to obtain a highlight segment of the video to be played by combining a deep learning technique with user behavior analysis and the like; the highlight segment can then be speed-adjusted, for example played slowly, and a plurality of video frames within the highlight segment are obtained for user selection. Here, the deep learning technique refers to identifying image features in video frames with a model trained by deep learning and then selecting the highlight segment. User behavior analysis can be understood as calibrating video frames in the video to be played based on comment data (such as screenshots posted by users, bullet-screen content, and the like), thereby identifying the video frames of interest to users.
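One plausible way to combine the two signals described above — model-predicted highlight scores and user-behavior statistics — is a weighted blend followed by top-k selection. The 0.5/0.5 blend, the normalisation, and the function shape are assumptions for illustration, not the patent's actual method.

```python
def select_highlight_frames(model_scores, behavior_counts, top_k=5):
    """Rank frames by blending a per-frame model score with a weight derived
    from user behaviour (e.g. density of comments or posted screenshots at
    that timestamp), then return the top-k frame indices in temporal order."""
    total = sum(behavior_counts) or 1  # avoid division by zero
    combined = [
        0.5 * score + 0.5 * (count / total)
        for score, count in zip(model_scores, behavior_counts)
    ]
    ranked = sorted(range(len(combined)), key=lambda i: combined[i], reverse=True)
    return sorted(ranked[:top_k])  # temporal order for display
```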
The video image frame processing module, which is used to play, at a reduced speed, the highlight segment determined by the video image frame acquisition module when playback reaches that segment; and, when the preset condition is met, to display the plurality of user-selectable video frames determined by the video image frame acquisition module, for selection by the user.
Therefore, compared with existing schemes, the disclosed scheme is simple to operate: the user does not need to capture highlight frames manually, which further improves the intelligence of the video playing software and simplifies user operation. In addition, because the user directly obtains the original video frame, there is no need to manually remove non-video content, which simplifies the operation steps and saves the user's time. The whole flow is simple and convenient, enriching and improving the user experience.
The present disclosure also provides a video processing apparatus, as shown in fig. 7, including:
a playing unit 701, configured to play a target video in a video playing area of a current display interface;
and the display unit 702 is configured to display at least a part of video frames in a target video frame sequence when a preset condition is met, where the target video frame sequence is selected from the target video.
In a specific example of the disclosed approach, the target video frame sequence is derived based on at least one of:
the target video frame sequence is obtained based on the user behavior characteristics corresponding to the target video;
the target video frame sequence is obtained based on the image characteristic information of video frames in the target video;
the sequence of target video frames is derived based on highlight clips contained in the target video.
In a specific example of the disclosed solution, wherein,
the playing unit is further configured to play the target video frame sequence at a playing speed that is less than a playing speed of other video frames in the target video in the video playing area, so as to slowly display the target video frame sequence, when the video frames included in the target video frame sequence are continuous frames; or playing the highlight corresponding to the target video frame sequence in the video playing area at a playing speed smaller than that of other video frames in the target video under the condition that the video frames contained in the target video frame sequence are discontinuous frames, so as to slowly display the highlight.
In a specific example of the disclosed solution, wherein,
the display unit is specifically configured to display at least part of video frames in the target video frame sequence in an image display area of the current display interface.
In a specific example of the present disclosure, the positional relationship between the image display area and the video playing area is any one of the following:
the image display area is the same as the video playing area;
the image display area covers the video playing area, and the display area of the image display area is larger than that of the video playing area;
and the image display area is positioned in other areas except the video playing area in the display interface.
In a specific example of the disclosed solution, wherein,
the display unit is specifically configured to skip from a current display interface to a next display interface, and display at least part of video frames in the target video frame sequence in an image display area of the next display interface.
In a specific example of the disclosed solution, wherein,
the display unit is specifically configured to display all video frames in the target video frame sequence in a thumbnail form; alternatively, portions of the video frames in the sequence of target video frames are presented in thumbnail form.
In a specific example of the present disclosure, further comprising:
and the first operation processing unit is used for responding to the sliding operation under the condition that all video frames in the target video frame sequence are not displayed, scrolling and displaying the video frames in the target video frame sequence in an image display area of a current display interface so as to adjust the display positions of the video frames in the target video frame sequence in the image display area, and displaying at least part of the video frames in the target video frame sequence which are not displayed in the image display area so as to enable a user to select a required target image from the displayed video frames.
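The scrolling behaviour handled by the first operation processing unit can be sketched as a sliding window over the thumbnail list that shifts in response to a swipe. The window size and the clamping policy are assumptions for illustration.

```python
def scroll_window(frames, offset, window=2):
    """Return the slice of thumbnails visible after scrolling to `offset`,
    clamped so the window never runs past either end of the frame list."""
    offset = max(0, min(offset, max(0, len(frames) - window)))
    return frames[offset:offset + window]

visible = scroll_window(["f1", "f2", "f3", "f4"], 1)  # user swiped once
```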
In a specific example of the present disclosure, wherein determining that the preset condition is met if any one of the following conditions is met includes:
responding to a first touch operation for a video frame control in a current display interface;
responding to a second touch operation for the target video, and stopping playing the target video;
and determining that the target video is completely played.
In a specific example of the present disclosure, further comprising:
and the second operation processing unit is used for responding to a third touch operation on the displayed video frame, taking the video frame selected by the third touch operation as a target image, and amplifying and displaying the target image so that the display area of the amplified target image is larger than that of the original video frame.
In a specific example of the disclosed solution, wherein,
and the display unit is also used for displaying other operation controls aiming at the target image in the current display interface under the condition that the target image is selected so as to enable a user to perform other operations on the target image.
In a specific example of the present disclosure, further comprising:
the third operation processing unit is used for responding to a fourth touch operation aiming at other operation controls and performing an operation matched with the other operation controls on the target image;
wherein the other operation control is at least one of the following:
the control is saved, shared and edited.
The specific functions of each unit in the above device may be described with reference to the above method, and will not be described herein.
In the technical scheme of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, for example, a video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the video processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (22)

1. A video processing method, comprising:
playing a target video in a video playing area of a current display interface;
displaying at least part of video frames in the target video frame sequence under the condition that the playing of the target video is stopped or the playing of the target video is determined to be finished, so that after the playing of the target video is stopped or the playing of the target video is finished, actively providing at least part of video frames in the target video frame sequence for displaying, and enabling a user to select a required target image from the displayed video frames; wherein the target video frame sequence is selected from the target video;
The actively provided target video frame sequence is displayed in one of the following modes:
playing the target video frame sequence in the video playing area at a playing speed smaller than that of other video frames in the target video under the condition that video frames contained in the target video frame sequence are continuous frames, so as to slowly display the target video frame sequence; or,
and playing the highlight corresponding to the target video frame sequence in the video playing area at a playing speed smaller than that of other video frames in the target video under the condition that the video frames contained in the target video frame sequence are discontinuous frames, so as to slowly display the highlight.
2. The method of claim 1, wherein the target video frame sequence is derived based on at least one of:
the target video frame sequence is obtained based on the user behavior characteristics corresponding to the target video;
the target video frame sequence is obtained based on the image characteristic information of video frames in the target video;
the sequence of target video frames is derived based on highlight clips contained in the target video.
3. The method of claim 1 or 2, wherein the presenting at least a portion of the video frames in the sequence of target video frames comprises:
at least a portion of the video frames in the sequence of target video frames are presented in an image presentation area of the current display interface.
4. A method according to claim 3, wherein the positional relationship of the image presentation area and the video playing area is any one of:
the image display area is the same as the video playing area;
the image display area covers the video playing area, and the display area of the image display area is larger than that of the video playing area;
and the image display area is positioned in other areas except the video playing area in the display interface.
5. The method of claim 1 or 2, wherein the presenting at least a portion of the video frames in the sequence of target video frames comprises:
and jumping from the current display interface to the next display interface, and displaying at least part of video frames in the target video frame sequence in an image display area of the next display interface.
6. The method of claim 1 or 2, wherein the presenting at least a portion of the video frames in the sequence of target video frames comprises:
Displaying all video frames in the target video frame sequence in thumbnail form; or,
a portion of the video frames in the sequence of target video frames are shown in thumbnail form.
7. The method of claim 6, further comprising:
and under the condition that all video frames in the target video frame sequence are not displayed, responding to a sliding operation, rolling and displaying the video frames in the target video frame sequence in an image display area of a current display interface so as to adjust the display positions of the video frames in the target video frame sequence in the image display area, and displaying at least part of the video frames in the target video frame sequence which are not displayed in the image display area so as to enable a user to select a required target image from the displayed video frames.
8. The method of claim 1 or 2, further comprising:
and responding to a third touch operation aiming at the displayed video frame, taking the video frame selected by the third touch operation as a target image, and magnifying and displaying the target image so that the display area of the magnified target image is larger than that of the original video frame.
9. The method of claim 1 or 2, further comprising:
And under the condition that the target image is selected, displaying other operation controls aiming at the target image in a current display interface so as to enable a user to perform other operations on the target image.
10. The method of claim 9, further comprising:
responding to fourth touch operation for other operation controls, and performing operation matched with the other operation controls on the target image;
wherein the other operation control is at least one of the following:
the control is saved, shared and edited.
11. A video processing apparatus comprising:
the playing unit is used for playing the target video in the video playing area of the current display interface;
the display unit is used for displaying at least part of video frames in the target video frame sequence under the condition that the playing of the target video is stopped or the target video is determined to be finished, so that after the playing of the target video is stopped or the playing of the target video is finished, at least part of video frames in the target video frame sequence are actively provided for display, and a user can select a required target image from the displayed video frames; wherein the target video frame sequence is selected from the target video;
The playing unit is further configured to actively provide a target video frame sequence, and display the target video frame sequence in one of the following manners: playing the target video frame sequence in the video playing area at a playing speed smaller than that of other video frames in the target video under the condition that video frames contained in the target video frame sequence are continuous frames, so as to slowly display the target video frame sequence; or,
and playing the highlight corresponding to the target video frame sequence in the video playing area at a playing speed smaller than that of other video frames in the target video under the condition that the video frames contained in the target video frame sequence are discontinuous frames, so as to slowly display the highlight.
12. The apparatus of claim 11, wherein the target video frame sequence is derived based on at least one of:
the target video frame sequence is obtained based on the user behavior characteristics corresponding to the target video;
the target video frame sequence is obtained based on the image characteristic information of video frames in the target video;
the sequence of target video frames is derived based on highlight clips contained in the target video.
13. The device according to claim 11 or 12, wherein,
the display unit is specifically configured to display at least part of video frames in the target video frame sequence in an image display area of the current display interface.
14. The apparatus of claim 13, wherein the positional relationship of the image presentation area and the video playback area is any one of:
the image display area is the same as the video playing area;
the image display area covers the video playing area, and the display area of the image display area is larger than that of the video playing area;
and the image display area is positioned in other areas except the video playing area in the display interface.
15. The device according to claim 11 or 12, wherein,
the display unit is specifically configured to skip from a current display interface to a next display interface, and display at least part of video frames in the target video frame sequence in an image display area of the next display interface.
16. The device according to claim 11 or 12, wherein,
the display unit is specifically configured to display all video frames in the target video frame sequence in a thumbnail form; alternatively, portions of the video frames in the sequence of target video frames are presented in thumbnail form.
17. The apparatus of claim 16, further comprising:
and the first operation processing unit is used for responding to the sliding operation under the condition that all video frames in the target video frame sequence are not displayed, scrolling and displaying the video frames in the target video frame sequence in an image display area of a current display interface so as to adjust the display positions of the video frames in the target video frame sequence in the image display area, and displaying at least part of the video frames in the target video frame sequence which are not displayed in the image display area so as to enable a user to select a required target image from the displayed video frames.
18. The apparatus according to claim 11 or 12, further comprising:
a second operation processing unit configured to, in response to a third touch operation on a displayed video frame, take the video frame selected by the third touch operation as a target image and display the target image in an enlarged manner, so that the display area of the enlarged target image is larger than that of the original video frame.
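For illustration only, the enlargement of claim 18 reduces to scaling the selected frame's display size by a factor greater than one, which guarantees the enlarged display area exceeds the original area; the helper `enlarge` is hypothetical:

```python
def enlarge(size, factor=2.0):
    """Sketch of claim 18: enlarge a selected video frame so that the
    enlarged display area exceeds the original frame's display area."""
    if factor <= 1.0:
        raise ValueError("factor must exceed 1.0 to enlarge the frame")
    w, h = size
    # Scaling both dimensions by factor > 1 multiplies the area by
    # factor**2 > 1, satisfying the claim's area condition.
    return (w * factor, h * factor)
```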
19. The apparatus according to claim 11 or 12, wherein
the display unit is further configured to display, in a case where the target image is selected, other operation controls for the target image in the current display interface, so as to enable a user to perform other operations on the target image.
20. The apparatus according to claim 19, further comprising:
a third operation processing unit configured to perform, in response to a fourth touch operation on the other operation controls, an operation matching the operated control on the target image;
wherein the other operation controls comprise at least one of the following:
a save control, a share control, and an edit control.
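As a hypothetical sketch (the claim does not prescribe an implementation), the dispatch of claim 20 — matching the touched operation control to an operation on the target image — might look like a simple handler table; all names and the string results are placeholders:

```python
def handle_control_touch(control, target_image, handlers=None):
    """Sketch of claim 20: perform the operation matching the touched
    operation control (save, share, or edit) on the target image."""
    # Default handlers are illustrative placeholders, not real
    # save/share/edit implementations.
    default_handlers = {
        "save": lambda img: f"saved:{img}",
        "share": lambda img: f"shared:{img}",
        "edit": lambda img: f"editing:{img}",
    }
    handlers = handlers or default_handlers
    if control not in handlers:
        raise ValueError(f"unknown operation control: {control}")
    return handlers[control](target_image)
```

Passing a custom `handlers` mapping is one way to let "at least one of" the controls be present without changing the dispatch logic.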
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-10.
CN202111466837.1A 2021-12-03 2021-12-03 Video processing method, device, equipment and storage medium Active CN114173177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111466837.1A CN114173177B (en) 2021-12-03 2021-12-03 Video processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114173177A CN114173177A (en) 2022-03-11
CN114173177B true CN114173177B (en) 2024-03-19

Family

ID=80482721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111466837.1A Active CN114173177B (en) 2021-12-03 2021-12-03 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114173177B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618788A (en) * 2014-12-29 2015-05-13 北京奇艺世纪科技有限公司 Method and device for displaying video information
KR20160044981A (en) * 2014-10-16 2016-04-26 삼성전자주식회사 Video processing apparatus and method of operations thereof
CN105812892A (en) * 2014-12-29 2016-07-27 深圳Tcl数字技术有限公司 Method, device and system for obtaining screenshot of dynamic display picture of television
CN110719527A (en) * 2019-09-30 2020-01-21 维沃移动通信有限公司 Video processing method, electronic equipment and mobile terminal
CN110855557A (en) * 2019-11-14 2020-02-28 腾讯科技(深圳)有限公司 Video sharing method and device and storage medium
WO2020172826A1 (en) * 2019-02-27 2020-09-03 华为技术有限公司 Video processing method and mobile device
CN111954087A (en) * 2020-08-20 2020-11-17 腾讯科技(深圳)有限公司 Method and device for intercepting images in video, storage medium and electronic equipment
CN112565910A (en) * 2020-12-15 2021-03-26 四川长虹电器股份有限公司 Video dynamic speed-regulating playing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8649668B2 (en) * 2011-06-03 2014-02-11 Adobe Systems Incorporated Client playback of streaming video adapted for smooth transitions and viewing in advance display modes
US9236088B2 (en) * 2013-04-18 2016-01-12 Rapt Media, Inc. Application communication
US10572735B2 (en) * 2015-03-31 2020-02-25 Beijing Shunyuan Kaihua Technology Limited Detect sports video highlights for mobile computing devices



Similar Documents

Publication Publication Date Title
US10725734B2 (en) Voice input apparatus
WO2020151547A1 (en) Interaction control method for display page, and device
KR20220130197A (en) Filming method, apparatus, electronic equipment and storage medium
US20160342319A1 (en) Method and device for previewing and displaying multimedia streaming data
WO2017014800A1 (en) Video editing on mobile platform
US20140380375A1 (en) Page turning method, page turning apparatus and terminal as well as computer readable medium
CN112261226A (en) Horizontal screen interaction method and device, electronic equipment and storage medium
US11482257B2 (en) Image display method and apparatus
CN112423084B (en) Display method and device of hotspot list, electronic equipment and storage medium
US10115431B2 (en) Image processing device and image processing method
US20190230311A1 (en) Video interface display method and apparatus
US20210274106A1 (en) Video processing method, apparatus, and device and storage medium
CN106921883B (en) Video playing processing method and device
CN112653920B (en) Video processing method, device, equipment and storage medium
US20160077726A1 (en) User interface based interaction method and related apparatus
EP4300980A1 (en) Video processing method and apparatus, and electronic device and storage medium
CN110633380B (en) Control method and device for picture processing interface, electronic equipment and readable medium
CN113727170A (en) Video interaction method, device, equipment and medium
JP2023551670A (en) Page switching display method, device, storage medium and electronic equipment
CN110781349A (en) Method, equipment, client device and electronic equipment for generating short video
CN114173177B (en) Video processing method, device, equipment and storage medium
CN111757177B (en) Video clipping method and device
CN114153346A (en) Picture processing method and device, storage medium and electronic equipment
EP3054388A1 (en) Apparatus and method for processing animation
CN112579932A (en) Page display method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant