CN117097954A - Video processing method, device, medium and equipment - Google Patents

Video processing method, device, medium and equipment

Info

Publication number
CN117097954A
CN117097954A
Authority
CN
China
Prior art keywords
video
frame images
preview area
target
scaling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311182530.8A
Other languages
Chinese (zh)
Inventor
马超骏
彭浩
赵充
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Biscuit Technology Co ltd
Original Assignee
Beijing Biscuit Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Biscuit Technology Co ltd filed Critical Beijing Biscuit Technology Co ltd
Priority to CN202311182530.8A priority Critical patent/CN117097954A/en
Publication of CN117097954A publication Critical patent/CN117097954A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Abstract

The application discloses a video processing method, apparatus, medium and device, wherein the method comprises: receiving a target video; generating a plurality of groups of frame images corresponding to the target video based on the target video; determining a key frame image based on a scaling ratio of a video preview area in a video editor and the plurality of groups of frame images; and displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area. By means of real-time transcoding and key frame interception, the application reduces performance problems, improves user experience, and guarantees the quality and accuracy of the preview.

Description

Video processing method, device, medium and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method, apparatus, medium, and device.
Background
When an existing online video clip editor generates video previews on the time axis, it generally parses the video and cuts frames in real time.
This approach has the following drawbacks. First, parsing video and capturing frames in real time consumes a significant amount of computing power, so generating preview images is slow on lower-performance computers. Second, because the video is parsed in real time, the editor needs to load video data continually, which increases the network transmission burden, especially for large video files.
Disclosure of Invention
The application provides a video processing method, apparatus, medium and device, which are used to solve the problems in the related art.
In a first aspect, the present application provides a video processing method, including:
receiving a target video;
generating a plurality of groups of frame images corresponding to the target video based on the target video;
determining a key frame image based on a scaling ratio of a video preview area in a video editor and the plurality of groups of frame images;
and displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area.
Optionally, the generating, based on the target video, a plurality of sets of frame images corresponding to the target video includes:
transcoding the target video to obtain a transcoded target video;
and generating a plurality of groups of frame images based on the transcoded target video.
Optionally, the generating a plurality of groups of frame images based on the transcoded target video includes:
extracting frames from the transcoded target video at one-second intervals to obtain all frame images corresponding to the target video;
grouping all the frame images to obtain a plurality of groups of frame images.
Optionally, the determining the key frame image based on the scaling of the video preview area in the video editor and the plurality of sets of frame images includes:
obtaining the scaling of a video preview area in the video editor;
and selecting, from the plurality of groups of frame images, an image that matches the scaling ratio of the video preview area in the video editor as the key frame image.
Optionally, the selecting, from the plurality of sets of frame images, an image that matches a scaling of a video preview area in the video editor as the key frame image includes:
acquiring a time axis of a video preview area and an image display width corresponding to a time point in the time axis based on a scaling of the video preview area in the video editor;
and acquiring images matched with the time points in the time axis from the plurality of groups of frame images as the key frame images.
Optionally, the displaying the key frame image in the video preview area to display the target video at different scaling ratios in the video preview area includes:
and displaying the key frame image at its time point on the time axis according to the image display width, so as to display the target video at different scaling ratios in the video preview area.
Optionally, the method further comprises:
and calculating the image display width according to the scaling and the total duration of the time axis.
In a second aspect, the present application provides a video processing apparatus comprising:
the receiving module is used for receiving the target video;
the generation module is used for generating a plurality of groups of frame images corresponding to the target video based on the target video;
the determining module is used for determining key frame images based on the scaling of the video preview area in the video editor and a plurality of groups of frame images;
and the display module is used for displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a video processing method provided in the first aspect when executing the program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a video processing method provided in the first aspect.
The application discloses a video processing method, apparatus, medium and device, wherein the method comprises: receiving a target video; generating a plurality of groups of frame images corresponding to the target video based on the target video; determining a key frame image based on a scaling ratio of a video preview area in a video editor and the plurality of groups of frame images; and displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area. By means of real-time transcoding and key frame interception, the application reduces performance problems, improves user experience, and guarantees the quality and accuracy of the preview.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a video processing method provided in the present application;
FIG. 2 is a schematic diagram of a set of frame images provided in the present application;
FIG. 3 is a schematic diagram of a first application of video processing provided in the present application;
FIG. 4 is a second application diagram of video processing provided in the present application;
fig. 5 is a schematic diagram of an electronic device corresponding to fig. 1 according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic flow chart of a video processing method provided in the present application, the method includes the following steps:
S101: receiving a target video;
S102: generating a plurality of groups of frame images corresponding to the target video based on the target video.
For the target video, a plurality of groups of frame images corresponding to it are generated as follows: the target video is first transcoded to obtain a transcoded target video, and a plurality of groups of frame images are then generated based on the transcoded target video.
Optionally, the generating a plurality of groups of frame images based on the transcoded target video includes: extracting frames from the transcoded target video at one-second intervals to obtain all frame images corresponding to the target video, and grouping the frame images to obtain a plurality of groups of frame images.
For example, after the user uploads the target video, the video is transcoded in real time into a format suitable for online editing. The format of the transcoded target video is not particularly limited and may be chosen as needed; for example, Node.js may be used to invoke FFmpeg, with the transcoded format being H.264, which is supported by browsers.
After the transcoded target video is obtained, frames can be extracted at one-second intervals via FFmpeg, and every 10 extracted frames are taken as one group to generate and store the picture shown in fig. 2.
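The transcoding and frame-extraction step above can be sketched as follows (TypeScript; the FFmpeg arguments, file-naming pattern, and helper names are illustrative assumptions rather than anything specified by the application):

```typescript
// Hypothetical sketch of the pre-processing step: build the FFmpeg argument
// lists for transcoding and one-frame-per-second extraction, then group the
// extracted frames into sets of 10 as in the embodiment's example.

// Transcode to browser-playable H.264 (codec choices are illustrative).
function transcodeArgs(input: string, output: string): string[] {
  return ["-i", input, "-c:v", "libx264", "-c:a", "aac", output];
}

// Extract one frame per second of video via the fps filter.
function extractArgs(input: string, outputPattern: string): string[] {
  return ["-i", input, "-vf", "fps=1", outputPattern];
}

// Group extracted frame files into sets of `groupSize` (10 in the example).
function groupFrames(frames: string[], groupSize = 10): string[][] {
  const groups: string[][] = [];
  for (let i = 0; i < frames.length; i += groupSize) {
    groups.push(frames.slice(i, i + groupSize));
  }
  return groups;
}
```

In a Node.js backend these argument lists would be passed to a spawned `ffmpeg` process; a 25-second video would yield 25 frames grouped as 10 + 10 + 5.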
S103: a key frame image is determined based on the scaling ratio of the video preview area in the video editor and the plurality of groups of frame images.
To determine the key frame image from the scaling ratio of the video preview area in the video editor and the plurality of groups of frame images, the scaling ratio of the video preview area must first be acquired; an image matching that scaling ratio is then selected from the plurality of groups of frame images as the key frame image.
Wherein selecting, from the plurality of sets of frame images, an image matching a scaling of a video preview area in the video editor as the key frame image includes: acquiring a time axis of a video preview area and an image display width corresponding to a time point in the time axis based on a scaling of the video preview area in the video editor; and acquiring images matched with the time points in the time axis from the plurality of groups of frame images as the key frame images.
The image display width is calculated according to the scaling ratio and the total duration of the time axis.
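A minimal sketch of this calculation follows; the linear relationship between zoom, preview width and tick interval is an assumption, since the application does not give an explicit formula:

```typescript
// Hypothetical width calculation: at scale = 1 the whole timeline fits the
// preview area; larger scales stretch the timeline proportionally.
function imageDisplayWidth(
  previewWidthPx: number,   // pixel width of the video preview area
  totalDurationSec: number, // total duration of the time axis, in seconds
  scale: number,            // scaling ratio of the preview area
  tickIntervalSec: number   // seconds between displayed key frames
): number {
  const pxPerSecond = (previewWidthPx * scale) / totalDurationSec;
  return pxPerSecond * tickIntervalSec;
}
```

With a 700 px preview area, a 70-second timeline, scale 1 and 10-second ticks, each key frame would be displayed 100 px wide; doubling the scale doubles the width.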
Specifically, because the scaling ratio of the video preview area in the video editor varies, the number of images that can be displayed in the preview area varies accordingly; as shown in figs. 3 and 4, fewer images are displayed in the video preview area of fig. 3 than can be displayed in that of fig. 4. It is therefore necessary to determine the displayable images, i.e., the key frame images, based on the scaling ratio of the video preview area.
As shown in fig. 3, the time axis of the video preview area displays time points from 0 seconds to 1 minute 10 seconds, spaced 10 seconds apart. Images matching each time point on the time axis are then acquired from the plurality of groups of frame images, where the image closest to the time point may be used. For example, for the 20-second time point on the time axis, the frame image whose timestamp is 20 seconds, or the one closest to 20 seconds among the groups of frame images, is used as the key frame and displayed at the 20-second position on the time axis.
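The closest-frame selection just described can be sketched as follows (assuming, as in the example above, that frames were cut at one-second intervals and are identified by their timestamps; the function names are hypothetical):

```typescript
// Pick the stored frame whose timestamp is closest to a timeline tick.
function nearestKeyFrame(frameTimes: number[], tickSec: number): number {
  let best = frameTimes[0];
  for (const t of frameTimes) {
    if (Math.abs(t - tickSec) < Math.abs(best - tickSec)) best = t;
  }
  return best;
}

// For every displayed tick on the time axis, select its key frame.
function keyFramesForTimeline(
  frameTimes: number[],
  ticks: number[]
): number[] {
  return ticks.map((tick) => nearestKeyFrame(frameTimes, tick));
}
```

For a 70-second video cut at one-second intervals, the 20-second tick maps to the frame at 20 s; a tick beyond the last frame (e.g. 70 s) falls back to the closest available frame at 69 s.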
Of course, since the image display width of the video preview area may differ from the original width of each frame image in the plurality of groups of frame images, after the key frame image corresponding to a certain time point is obtained, it needs to be adjusted to the image display width of the video preview area, so that it is displayed at that time point with the image display width.
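A minimal sketch of this width adjustment (preserving the aspect ratio is an assumption on our part; the application only requires matching the display width):

```typescript
// Hypothetical: scale a stored frame to the preview's image display width
// while keeping its aspect ratio.
function fitToDisplayWidth(
  frameW: number,   // original frame width in pixels
  frameH: number,   // original frame height in pixels
  displayW: number  // image display width of the preview area
): { width: number; height: number } {
  const ratio = displayW / frameW;
  return { width: displayW, height: Math.round(frameH * ratio) };
}
```

For example, fitting a 1920×1080 frame to a 100 px display width yields a 100×56 thumbnail.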
This embodiment generates preview images by means of real-time transcoding and key frame interception, which guarantees the quality and accuracy of the preview images. Because previews are produced from pre-cut key frames rather than by parsing the video and cutting frames on the fly, the corresponding computational cost is avoided and the performance problems of preview generation are greatly reduced. Users can quickly load and view preview images while editing, which improves the responsiveness of the editor.
S104: and displaying the key frame image in the video preview area so as to display target videos with different scaling scales in the video preview area.
To display the key frame image in the video preview area so that the target video is shown at different scaling ratios, the key frame image needs to be displayed at its time point on the time axis according to the image display width.
The application discloses a video processing method, which comprises: receiving a target video; generating a plurality of groups of frame images corresponding to the target video based on the target video; determining a key frame image based on a scaling ratio of a video preview area in a video editor and the plurality of groups of frame images; and displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area. By means of real-time transcoding and key frame interception, the application reduces performance problems, improves user experience, and guarantees the quality and accuracy of the preview.
The foregoing provides a video processing method according to one or more embodiments of the present application, and based on the same concept, the present application further provides a corresponding video processing apparatus, including:
the receiving module is used for receiving the target video;
the generation module is used for generating a plurality of groups of frame images corresponding to the target video based on the target video;
the determining module is used for determining key frame images based on the scaling of the video preview area in the video editor and a plurality of groups of frame images;
and the display module is used for displaying the key frame image in the video preview area, so as to display the target video at different scaling ratios in the video preview area.
Optionally, the generating module is further configured to transcode the target video to obtain a transcoded target video;
and generating a plurality of groups of frame images based on the transcoded target video.
Optionally, the generating module is further configured to extract frames from the transcoded target video at one-second intervals, and obtain all frame images corresponding to the target video;
grouping all the frame images to obtain a plurality of groups of frame images.
Optionally, the determining module is further configured to obtain a scaling of a video preview area in the video editor;
and selecting an image matched with the scaling of the video preview area in the video editor from the plurality of groups of frame images as the key frame image.
Optionally, the determining module is further configured to obtain a time axis of the video preview area and an image display width corresponding to a time point in the time axis based on a scaling of the video preview area in the video editor;
and acquiring images matched with the time points in the time axis from the plurality of groups of frame images as the key frame images.
Optionally, the display module is further configured to display the key frame image at its time point on the time axis according to the image display width, so as to display the target video at different scaling ratios in the video preview area.
Optionally, the device further comprises a calculation module, wherein the calculation module is used for calculating the image display width according to the scaling and the total duration of the time axis.
The present application also provides a computer readable medium storing a computer program operable to perform the method provided in fig. 1 above.
The application also provides a schematic block diagram of the electronic device shown in fig. 5, which corresponds to fig. 1. At the hardware level, as shown in fig. 5, the electronic device includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and then runs it to implement the video processing method described above with respect to fig. 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present application; that is, the execution subject of the following processing flows is not limited to logic units, but may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many present-day improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compiling must likewise be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer media including memory storage devices.
The embodiments of the present application are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, see the corresponding parts of the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit it. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the application shall be included in the scope of the claims of the present application.

Claims (10)

1. A video processing method, comprising:
Receiving a target video;
generating a plurality of groups of frame images corresponding to the target video based on the target video;
determining a key frame image based on a scaling of a video preview area in a video editor and the plurality of groups of frame images;
and displaying the key frame image in the video preview area, so as to display the target video at different scaling levels in the video preview area.
2. The method according to claim 1, wherein generating a plurality of groups of frame images corresponding to the target video based on the target video comprises:
transcoding the target video to obtain a transcoded target video;
and generating a plurality of groups of frame images based on the transcoded target video.
3. The video processing method according to claim 2, wherein generating a plurality of groups of frame images based on the transcoded target video comprises:
extracting frames from the transcoded target video at one-second intervals to obtain all frame images corresponding to the target video;
and grouping all the frame images to obtain the plurality of groups of frame images.
4. The video processing method according to claim 1, wherein determining key frame images based on the scaling of the video preview area in the video editor and the plurality of groups of frame images comprises:
obtaining the scaling of the video preview area in the video editor;
and selecting, from the plurality of groups of frame images, an image matching the scaling of the video preview area in the video editor as the key frame image.
5. The method according to claim 4, wherein selecting, from the plurality of groups of frame images, an image matching the scaling of the video preview area in the video editor as the key frame image comprises:
acquiring a time axis of the video preview area and an image display width corresponding to each time point in the time axis, based on the scaling of the video preview area in the video editor;
and acquiring, from the plurality of groups of frame images, images matching the time points in the time axis as the key frame images.
6. The method according to claim 5, wherein displaying the key frame image in the video preview area, so as to display the target video at different scaling levels in the video preview area, comprises:
displaying the key frame images at the time points in the time axis according to the image display width, so as to display the target video at different scaling levels in the video preview area.
7. The video processing method according to claim 6, further comprising:
and calculating the image display width according to the scaling and the total duration of the time axis.
8. A video processing apparatus, comprising:
the receiving module is used for receiving the target video;
the generation module is used for generating a plurality of groups of frame images corresponding to the target video based on the target video;
the determining module is used for determining key frame images based on the scaling of the video preview area in the video editor and a plurality of groups of frame images;
and the display module is used for displaying the key frame image in the video preview area, so as to display the target video at different scaling levels in the video preview area.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 7 when executing the program.
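Taken together, claims 1-7 describe pre-cutting the transcoded video into one frame per second, grouping those frames, and then choosing which of them to show as timeline thumbnails at the current zoom level of the preview area. The selection step can be sketched as follows; this is a minimal illustration under assumed names and layout (`select_key_frames`, `thumb_width_px`, and the slot-based width calculation are hypothetical, not the patent's actual implementation):

```python
def select_key_frames(frames, total_duration_s, scale, preview_width_px, thumb_width_px=80):
    """Pick one per-second frame for each thumbnail slot at the current zoom.

    frames:  per-second frame images, frames[i] taken at second i (claim 3).
    scale:   zoom factor of the preview area's time axis (1.0 = fully zoomed out).
    """
    # Claim 7 analogue: the timeline's pixel width grows with the zoom factor,
    # so the display width per second of video is (preview_width_px * scale) / total_duration_s.
    timeline_px = preview_width_px * scale
    n_slots = max(1, int(timeline_px // thumb_width_px))  # thumbnails that fit on the axis
    step_s = total_duration_s / n_slots                   # seconds covered by each thumbnail
    key_frames = []
    for i in range(n_slots):
        t = i * step_s                          # time point on the axis (claim 5)
        idx = min(round(t), len(frames) - 1)    # nearest pre-cut per-second frame
        key_frames.append(frames[idx])
    return key_frames
```

Zooming in (larger `scale`) widens the timeline, creating more slots and pulling in more of the per-second frames; zooming out samples them more sparsely. Either way, the same pre-generated frame groups serve every zoom level, so no re-decoding of the video is needed when the user changes the scaling.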
CN202311182530.8A 2023-09-13 2023-09-13 Video processing method, device, medium and equipment Pending CN117097954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311182530.8A CN117097954A (en) 2023-09-13 2023-09-13 Video processing method, device, medium and equipment


Publications (1)

Publication Number Publication Date
CN117097954A true CN117097954A (en) 2023-11-21

Family

ID=88773443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311182530.8A Pending CN117097954A (en) 2023-09-13 2023-09-13 Video processing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN117097954A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112653918A (en) * 2020-12-15 2021-04-13 咪咕文化科技有限公司 Preview video generation method and device, electronic equipment and storage medium
CN113923504A (en) * 2021-12-02 2022-01-11 阿里巴巴达摩院(杭州)科技有限公司 Video preview moving picture generation method and device
CN114125531A (en) * 2021-08-31 2022-03-01 游艺星际(北京)科技有限公司 Video preview method, device, terminal and storage medium
CN115834980A (en) * 2022-10-12 2023-03-21 广州维梦科技有限公司 Video thumbnail display method and system
WO2023093687A1 (en) * 2021-11-25 2023-06-01 北京字跳网络技术有限公司 Video processing method and device


Similar Documents

Publication Publication Date Title
CN112364277B (en) Webpage loading method and device
CN108848244B (en) Page display method and device
CN110263050B (en) Data processing method, device, equipment and storage medium
CN115828162B (en) Classification model training method and device, storage medium and electronic equipment
CN113079201B (en) Information processing system, method, device and equipment
CN112364074B (en) Time sequence data visualization method, equipment and medium
CN116048977B (en) Test method and device based on data reduction
CN117097954A (en) Video processing method, device, medium and equipment
CN116245051A (en) Simulation software rendering method and device, storage medium and electronic equipment
CN110704742B (en) Feature extraction method and device
CN112000329B (en) Data display method, device, equipment and medium
CN116821647B (en) Optimization method, device and equipment for data annotation based on sample deviation evaluation
CN110659372A (en) Picture input and access method, device and equipment
CN117455015B (en) Model optimization method and device, storage medium and electronic equipment
CN117177002A (en) Video playing method, device, medium and equipment
CN110262732B (en) Picture moving method and device
CN117348999B (en) Service execution system and service execution method
CN115827310B (en) Information verification method and device, storage medium and electronic equipment
CN116881724B (en) Sample labeling method, device and equipment
CN110209746B (en) Data processing method and device for data warehouse
CN117437327A (en) Method and device for generating design material, storage medium and electronic equipment
CN116893892A (en) Method, equipment and medium for dynamically generating message queue based on service model
CN117014560A (en) Video scheduling method, device, medium and equipment
CN117372578A (en) Animation generation method and device, storage medium and electronic equipment
CN116401652A (en) Recommendation method, device and equipment for verification mode and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination