CN111369434A - Method, device and equipment for generating cover of spliced video and storage medium

Info

Publication number: CN111369434A
Application number: CN202010091783.4A
Authority: CN (China)
Prior art keywords: video, target, materials, target image, cover
Legal status: Granted; Active
Inventors: 罗超, 高杰杨
Assignee (current and original): Guangzhou Kugou Computer Technology Co Ltd
Priority and filing date: 2020-02-13
Publication date (CN111369434A): 2020-07-03
Grant publication (CN111369434B): 2023-08-25
Other languages: Chinese (zh)

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06T: Image data processing or generation, in general
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence

Abstract

The application discloses a method, a device, equipment and a storage medium for generating a cover for a spliced video, and belongs to the technical field of image processing. The method comprises the following steps: acquiring a plurality of video materials selected by a user; determining a plurality of target video materials from among the plurality of video materials; selecting at least one target image frame from each target video material; and performing picture splicing on the selected target image frames to obtain a spliced video cover. A cover generated in this way expresses more of the spliced video's information, so a user can get a rough idea of the spliced video's content simply by viewing its cover.

Description

Method, device and equipment for generating cover of spliced video and storage medium
Technical Field
The application relates to the technical field of image processing, in particular to a method, a device, equipment and a storage medium for generating a cover of a spliced video.
Background
Nowadays, audio-video software offers increasingly rich functionality. For example, some audio-video applications can produce a spliced video: the user first selects a plurality of video materials and then clicks a splicing option to generate the spliced video. In addition, the user must upload a picture to serve as the spliced video's cover.
In the related art, the picture uploaded by the user is used as the spliced video cover, so the cover expresses very limited information about the spliced video. As a result, it is difficult for a viewing user to get even a rough idea of the spliced video's content by looking at its cover.
Disclosure of Invention
The embodiments of the application provide a method, a device, equipment and a storage medium for generating a spliced video cover, which can solve the above technical problem in the related art. The technical scheme is as follows:
in a first aspect, a method for generating a spliced video cover is provided, the method comprising:
acquiring a plurality of video materials selected by a user;
determining a plurality of target video materials from the plurality of video materials;
selecting at least one target image frame from each target video material;
and performing picture splicing processing on the selected target image frames to obtain a spliced video cover.
In one possible implementation, the determining a plurality of target video materials among the plurality of video materials includes:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the determining a plurality of target video materials among the plurality of video materials includes:
and determining the first N video materials with the highest playing quantity in the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the selecting at least one target image frame in each target video material includes:
and selecting a cover page of each target video material as a target image frame.
In a possible implementation, the performing picture splicing processing on the selected target image frames to obtain a spliced video cover includes:
performing picture splicing processing on the plurality of target image frames according to the selection order of their corresponding video materials to obtain the spliced video cover.
In a possible implementation, after the picture splicing processing is performed on the selected target image frames to obtain the spliced video cover, the method further includes:
when detecting that the video material corresponding to a first target image frame in the spliced video cover has been deleted, selecting a second target image frame from the video materials other than the plurality of target video materials; and
replacing the first target image frame in the spliced video cover with the second target image frame.
In a possible implementation manner, after the obtaining of the plurality of video materials selected by the user, the method further includes:
and carrying out video splicing processing on the plurality of video materials to obtain a spliced video.
In a second aspect, an apparatus for generating a spliced video cover is provided, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video materials selected by a user;
a determining module for determining a plurality of target video materials among the plurality of video materials;
the selecting module is used for selecting at least one target image frame from each target video material;
and the splicing module is used for performing picture splicing processing on the selected target image frames to obtain a spliced video cover.
In one possible implementation manner, the determining module is configured to:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation manner, the determining module is configured to:
and determining the first N video materials with the highest playing quantity in the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In a possible implementation manner, the selecting module is configured to:
and selecting a cover page of each target video material as a target image frame.
In one possible implementation manner, the splicing module is configured to:
and according to the selection sequence of the video materials corresponding to the target image frames, carrying out image splicing processing on the target image frames to obtain the spliced video cover.
In one possible implementation, the apparatus further includes a replacement module configured to:
when detecting that the video material corresponding to a first target image frame in the spliced video cover has been deleted, select a second target image frame from the video materials other than the plurality of target video materials; and
replace the first target image frame in the spliced video cover with the second target image frame.
In one possible implementation manner, the apparatus further includes a video stitching module configured to:
and carrying out video splicing processing on the plurality of video materials to obtain a spliced video.
In a third aspect, a computer device is provided, comprising a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for generating a spliced video cover described in the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the method for generating a spliced video cover described in the first aspect.
The beneficial effects brought by the technical scheme provided by the embodiments of the application at least include the following:
According to the method for generating a spliced video cover, a plurality of target video materials are determined from the plurality of video materials selected by the user, at least one target image frame is selected from each target video material, and the selected target image frames are finally spliced into a picture to obtain the spliced video cover. The generated cover therefore contains information from a plurality of the video materials and expresses a large amount of information about the spliced video, so that a viewing user can get a rough idea of the spliced video's content by viewing the cover.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for generating a spliced video cover provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus for generating a spliced video cover provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a server provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a spliced video provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a spliced video cover provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another spliced video cover provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiments of the application provide a method for generating a spliced video cover, which can be implemented by a terminal or a server. The terminal can be a mobile terminal such as a mobile phone, a tablet computer or a notebook computer, or a fixed terminal such as a desktop computer. The server can be a single server or a server cluster.
As shown in FIG. 1, the processing flow of the method for generating a spliced video cover may include the following steps:
in step 101, a plurality of video materials selected by a user are obtained.
In implementation, when a user wants to make a spliced video, the user can open an application program capable of making spliced videos on the terminal. After the application program is opened, the user can perform the corresponding operation so that the terminal displays a video material library, from which the user can select a plurality of video materials. In addition, a minimum number of video materials can be set; when the number of video materials selected by the user is smaller than this minimum, the user is prompted to continue selecting video materials.
In a possible implementation, the video material library is a local video material library. When the method provided by the embodiments of the application is executed by a server (which may be a background server of the application program), the terminal sends the video materials selected by the user from the local video material library to the server. When the method is executed by the terminal, the terminal directly acquires the plurality of video materials from the local video material library.
In another possible implementation, the video material library is an online video material library. When the method is executed by a server, the terminal sends the identifiers of the video materials selected by the user from the online video material library to the server, and the server acquires the corresponding video materials from the online library based on the received identifiers. When the method is executed by the terminal, the terminal acquires the video materials selected by the user directly from the online video material library.
It should be noted that after the plurality of video materials selected by the user are acquired, video splicing processing can be performed to obtain the spliced video. During splicing, the video materials can be spliced according to their selection order.
The flow of the video splicing process is shown in FIG. 5, where video material N denotes the video material selected Nth; the spliced video is assembled according to the selection order of the video materials.
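As an illustrative sketch only, not the patent's prescribed implementation, the selection-order splicing above can be expressed with the moviepy library (v1 API); the VideoMaterial record and its fields are hypothetical stand-ins for however the application models its material library, and they are reused by the later sketches:

```python
from dataclasses import dataclass

from moviepy.editor import VideoFileClip, concatenate_videoclips


@dataclass
class VideoMaterial:
    """Hypothetical model of one entry in the video material library."""
    path: str             # local file path (or one resolved from an online identifier)
    selection_order: int  # 1 = selected first by the user
    play_count: int = 0   # play count recorded by an online library, if any


def splice_videos(materials: list[VideoMaterial], out_path: str) -> None:
    """Concatenate the selected materials in the order the user picked them."""
    ordered = sorted(materials, key=lambda m: m.selection_order)
    clips = [VideoFileClip(m.path) for m in ordered]
    # method="compose" tolerates materials with differing resolutions.
    spliced = concatenate_videoclips(clips, method="compose")
    spliced.write_videofile(out_path)
    for clip in clips:
        clip.close()
```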
In step 102, a plurality of target video materials are determined among a plurality of video materials.
In implementation, after the plurality of video materials selected by the user are acquired, a plurality of target video materials can be determined among them, and the spliced video cover is generated from image frames of those target video materials.
The rules for determining the plurality of target video materials among the plurality of video materials may be as follows:
in one possible implementation, the first N video materials selected by the user are determined among the plurality of video materials as the plurality of target video materials.
Here N is a set positive integer, and N should be smaller than the minimum number of video materials described above. For example, N may be 4.
In implementation, when the user selects the video material, the video material is selected according to a certain selection sequence. When the target video material is determined, the video material with the top N sequence selected from the plurality of video materials may be determined as the target video material.
For example, if the user selects 10 video materials, the first, second, third and fourth selected video materials may be determined as the target video materials.
In another possible implementation, the first N video materials with the highest play counts among the plurality of video materials are determined as the plurality of target video materials.
In implementation, if the video materials are selected from an online video material library, the play count of each video material is also recorded in that library. The first N (for example, the first 4) video materials with the highest play counts among the plurality of video materials can be determined as the target video materials.
Determining the N most-played video materials as the target video materials means the resulting spliced video cover contains information from those materials, which can improve the appeal of the spliced video and increase its play count.
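For illustration only, the two selection rules above reduce to a sort and a slice over the hypothetical VideoMaterial records introduced earlier; a minimal sketch:

```python
def first_n_selected(materials: list[VideoMaterial], n: int = 4) -> list[VideoMaterial]:
    """Rule 1: the first N materials in the user's selection order."""
    return sorted(materials, key=lambda m: m.selection_order)[:n]


def top_n_by_play_count(materials: list[VideoMaterial], n: int = 4) -> list[VideoMaterial]:
    """Rule 2: the N materials with the highest play counts."""
    return sorted(materials, key=lambda m: m.play_count, reverse=True)[:n]
```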
In another possible implementation manner, the N video materials are uniformly selected from the plurality of video materials as the plurality of target video materials according to the selection order.
In implementation, the target video material may be selected uniformly in the plurality of video materials according to the selection order. For example, if the user selects 10 video materials and N is 4, it can be determined that the video materials having the selection order of 1, 4, 7, and 10 are the target video materials.
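A sketch of this uniform selection, again over the hypothetical VideoMaterial records: evenly spaced indices are chosen so that the first and last selected materials are always included, which reproduces the 1, 4, 7, 10 example for 10 materials and N = 4.

```python
def uniform_by_order(materials: list[VideoMaterial], n: int = 4) -> list[VideoMaterial]:
    """Rule 3: pick N materials evenly spaced across the selection order."""
    ordered = sorted(materials, key=lambda m: m.selection_order)
    m = len(ordered)
    if n >= m:
        return ordered
    if n == 1:
        return [ordered[0]]
    # Evenly spaced 0-based indices from 0 to m-1; for m=10 and n=4 this
    # yields 0, 3, 6, 9, i.e. selection orders 1, 4, 7, 10.
    indices = [round(i * (m - 1) / (n - 1)) for i in range(n)]
    return [ordered[i] for i in indices]
```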
In another possible implementation, N video materials are randomly selected from the plurality of video materials as the target video material.
In another possible implementation, the target video materials can also be designated by the user: when selecting the video materials, the user can additionally mark some of them, and the terminal or the server then determines the target video materials according to those marks.
In step 103, at least one target image frame is selected in each target video material.
One target image frame may be selected from each target video material. The target image frame can be any image frame of the target video material, for example its cover.
In implementation, the cover of each target video material can be taken as its target image frame. The cover of a video material is generally an image frame deliberately chosen to represent it, and it is what other users see when browsing the video material library. Selecting the covers of the spliced video's materials as the target image frames therefore expresses the information of those materials most clearly.
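Where a material's stored cover is unavailable and an arbitrary frame must be pulled from the file instead, OpenCV can read a frame at a given index. A minimal sketch; the frame_index parameter is an assumption for illustration, not something the patent prescribes:

```python
import cv2


def extract_frame(video_path: str, frame_index: int = 0):
    """Return one frame of the video as a BGR numpy array, or None on failure."""
    capture = cv2.VideoCapture(video_path)
    try:
        capture.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to the requested frame
        ok, frame = capture.read()
        return frame if ok else None
    finally:
        capture.release()
```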
In step 104, picture splicing processing is performed on the selected target image frames to obtain a spliced video cover.
In implementation, when performing picture splicing on the plurality of target image frames, the target image frames may be spliced along the horizontal direction (as shown in FIG. 6 and FIG. 7), along the vertical direction, or in multiple rows and columns (for example, two rows and two columns); this is not limited in the embodiments of the present application.
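A horizontal splice of the selected frames can be sketched with the Pillow library; scaling every frame to a common row height is an assumption made here so the strip lines up, not a requirement stated by the patent (a vertical or grid layout would paste at different offsets in the same way):

```python
from PIL import Image


def splice_horizontally(frame_paths: list[str], row_height: int = 360) -> Image.Image:
    """Paste the target image frames side by side into one cover image."""
    frames = []
    for path in frame_paths:
        img = Image.open(path).convert("RGB")
        # Scale each frame to the common row height, preserving aspect ratio.
        width = round(img.width * row_height / img.height)
        frames.append(img.resize((width, row_height)))
    cover = Image.new("RGB", (sum(f.width for f in frames), row_height))
    x = 0
    for f in frames:
        cover.paste(f, (x, 0))
        x += f.width
    return cover
```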
When the target image frames are spliced, they need to be arranged in a certain splicing order, especially when splicing along the horizontal or vertical direction. The splicing order can be determined as follows:
in a possible implementation manner, according to the selection sequence of the video materials corresponding to the plurality of target image frames, the plurality of target image frames are subjected to image splicing processing, and a spliced video cover is obtained.
The selection order of the video materials corresponding to the target image frames is simply the order in which the user selected those video materials.
In implementation, as shown in FIG. 6, video material 1 denotes the video material selected 1st by the user. The target video materials are the first 4 video materials selected by the user, the target image frame selected from each target video material is that material's cover, the covers are spliced along the horizontal direction, and the selection order of the corresponding materials increases from left to right.
In another possible implementation, the plurality of target image frames are spliced according to the play counts of their corresponding video materials to obtain the spliced video cover.
In implementation, the target image frames can be arranged in descending order of the play counts of their corresponding video materials.
For example, when the target image frames are spliced along the horizontal direction, the play count of the corresponding video material decreases from left to right.
In another possible implementation, the target image frames may be spliced in a random order.
In addition, after the spliced video cover is generated, if a video material in the video material library is deleted, the corresponding target image frame in the spliced video cover may no longer display normally. In a possible implementation, when it is detected that the video material corresponding to a first target image frame in the spliced video cover has been deleted, a second target image frame is selected from the video materials other than the plurality of target video materials, and the first target image frame in the spliced video cover is replaced with the second target image frame. That is, if a target video material is deleted, a new target video material is chosen from the remaining non-target materials, and a target image frame selected from it replaces the first target image frame in the spliced video cover.
In another possible implementation, when it is detected that the video material corresponding to the first target image frame in the spliced video cover has been deleted, a set picture is obtained and the first target image frame in the spliced video cover is replaced with it. That is, if a target video material is deleted, the first target image frame is replaced with a default picture.
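The two replacement strategies can be sketched together, reusing extract_frame and VideoMaterial from the earlier sketches; the default-picture path is a hypothetical placeholder:

```python
import cv2
from PIL import Image

DEFAULT_PICTURE = "default_cover_tile.png"  # hypothetical fallback picture


def patch_cover_frames(frames: list[Image.Image], deleted_index: int,
                       non_target_materials: list[VideoMaterial]) -> list[Image.Image]:
    """Replace the frame whose source material was deleted, then re-splice.

    Strategy 1: pull a frame from a remaining non-target material;
    strategy 2: fall back to a default picture when none is available.
    """
    substitute = None
    if non_target_materials:
        bgr = extract_frame(non_target_materials[0].path)
        if bgr is not None:
            # OpenCV returns BGR; convert the channel order for Pillow.
            substitute = Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    if substitute is None:
        substitute = Image.open(DEFAULT_PICTURE).convert("RGB")
    frames[deleted_index] = substitute
    return frames
```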
It should be noted that if a video material is deleted, the spliced video itself also needs corresponding processing. Specifically, the remaining video materials can be spliced again to obtain a new spliced video, or the deleted portion of the video can be replaced with a default video segment. The new spliced video is then processed with the cover generation method of the above embodiments to obtain a new spliced video cover.
According to the method for generating a spliced video cover provided above, a plurality of target video materials are determined from the plurality of video materials selected by the user, at least one target image frame is selected from each target video material, and the selected target image frames are finally spliced into a picture to obtain the spliced video cover. The generated cover therefore contains information from a plurality of the video materials and expresses a large amount of information about the spliced video, so that a viewing user can get a rough idea of the spliced video's content by viewing the cover.
Based on the same technical concept, the embodiments of the application further provide an apparatus for generating a spliced video cover. The apparatus may be the terminal or the server in the foregoing embodiments. As shown in FIG. 2, the apparatus includes:
an obtaining module 201, configured to obtain multiple video materials selected by a user;
a determining module 202, configured to determine a plurality of target video materials among a plurality of video materials;
a selecting module 203, configured to select at least one target image frame from each target video material;
and the splicing module 204 is configured to perform picture splicing processing on the selected multiple target image frames to obtain a spliced video cover.
In one possible implementation, the determining module 202 is configured to:
determining the first N video materials selected by a user from the plurality of video materials as a plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the determining module 202 is configured to:
and determining the first N video materials with the highest playing quantity in the plurality of video materials as a plurality of target video materials, wherein N is a set positive integer.
In a possible implementation manner, the selecting module 203 is configured to:
and selecting a cover page of each target video material as a target image frame.
In one possible implementation, the splicing module 204 is configured to:
and performing picture splicing processing on the plurality of target image frames according to the selection order of their corresponding video materials to obtain the spliced video cover.
In one possible implementation, the apparatus further includes a replacement module configured to:
when detecting that the video material corresponding to the first target image frame in the spliced video cover is deleted, selecting a second target image frame from the video materials except the plurality of target video materials;
and replacing the first target image frame in the spliced video cover with the second target image frame.
In one possible implementation, the apparatus further includes a video stitching module configured to:
and carrying out video splicing processing on the plurality of video materials to obtain a spliced video.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that when the apparatus for generating a spliced video cover provided by the above embodiment generates a cover, the division into the functional modules described above is only an example; in practical applications, the functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for generating a spliced video cover and the method for generating a spliced video cover provided by the above embodiments belong to the same concept; the specific implementation process is described in detail in the method embodiments and is not repeated here.
The embodiments of the application also provide a computer device, which can be a terminal. Fig. 3 is a block diagram of a terminal provided by an embodiment of the present application. The terminal 300 may be a portable mobile terminal such as a smart phone, a tablet computer, or a smart camera. The terminal 300 may also be called user equipment, a portable terminal, or other names.
Generally, the terminal 300 includes: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction, which is executed by the processor 301 to implement the method for generating a spliced video cover provided herein.
In some embodiments, the terminal 300 may further include: a peripheral interface 303 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, display screen 305, camera assembly 306, audio circuitry 307, positioning assembly 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The radio frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 304 comprises an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may further include NFC (Near Field Communication) related circuits, which is not limited in the application.
The display screen 305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The display screen 305 can also capture touch signals on or over its surface; such a touch signal may be input to the processor 301 as a control signal for processing. The display screen 305 can provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, arranged on the front panel of the terminal 300; in other embodiments, there may be at least two display screens 305, arranged on different surfaces of the terminal 300 or in a folded design; in still other embodiments, the display screen 305 may be a flexible display arranged on a curved or folded surface of the terminal 300. The display screen 305 may even be arranged as a non-rectangular irregular figure, i.e. an irregularly shaped screen. The display screen 305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is used for realizing video call or self-shooting, and a rear camera is used for realizing shooting of pictures or videos. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting function and a VR (Virtual Reality) shooting function. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 307 is used to provide an audio interface between the user and terminal 300. Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. The microphones may be provided in plural numbers, respectively, at different portions of the terminal 300 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 309 is used to supply power to the various components in the terminal 300. The power source 309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 300 also includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyro sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 300. For example, the acceleration sensor 311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 312 may detect a body direction and a rotation angle of the terminal 300, and the gyro sensor 312 may cooperate with the acceleration sensor 311 to acquire a 3D motion of the user on the terminal 300. The processor 301 may implement the following functions according to the data collected by the gyro sensor 312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 313 may be disposed on a side bezel of the terminal 300 and/or on a lower layer of the display screen 305. When the pressure sensor 313 is disposed at the side frame of the terminal 300, a user's grip signal of the terminal 300 can be detected, and left-right hand recognition or shortcut operation can be performed according to the grip signal. When the pressure sensor 313 is disposed at the lower layer of the display screen 305, the operability control on the UI interface can be controlled according to the pressure operation of the user on the display screen 305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 314 is used for collecting a fingerprint of a user to identify the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, processor 301 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 314 may be disposed on the front, back, or side of the terminal 300. When a physical button or a vendor Logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with the physical button or the vendor Logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the display screen 305 based on the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 305 is increased; when the ambient light intensity is low, the display brightness of the display screen 305 is reduced. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera head assembly 306 according to the ambient light intensity collected by the optical sensor 315.
The proximity sensor 316, also called a distance sensor, is generally arranged on the front face of the terminal 300 and used to measure the distance between the user and the front face of the terminal 300. In one embodiment, when the proximity sensor 316 detects that this distance gradually decreases, the processor 301 controls the display screen 305 to switch from the bright-screen state to the screen-off state; when the proximity sensor 316 detects that the distance gradually increases, the processor 301 controls the display screen 305 to switch from the screen-off state back to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 3 does not limit the terminal 300, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
The embodiments of the application also provide a computer device, which can be a server. Fig. 4 is a schematic structural diagram of a server 400 provided by an embodiment of the application. The server 400 may vary greatly in configuration or performance, and may include one or more processors (CPUs) 401 and one or more memories 402, where the memory 402 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 401 to implement the method for generating a spliced video cover.
In an exemplary embodiment, a computer-readable storage medium is further provided, in which at least one instruction is stored; the at least one instruction is loaded and executed by a processor to implement the method for generating a spliced video cover in the above embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for generating a spliced video cover, the method comprising:
acquiring a plurality of video materials selected by a user;
determining a plurality of target video materials from the plurality of video materials;
selecting at least one target image frame from each target video material;
and performing picture splicing processing on the selected target image frames to obtain a spliced video cover.
2. The method of claim 1, wherein determining a plurality of target video materials among the plurality of video materials comprises:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
3. The method of claim 1, wherein determining a plurality of target video materials among the plurality of video materials comprises:
and determining the first N video materials with the highest playing quantity in the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
4. The method of claim 1, wherein said selecting at least one target image frame in each target video material comprises:
and selecting a cover page of each target video material as a target image frame.
5. The method of claim 1, wherein the performing picture splicing processing on the selected target image frames to obtain a spliced video cover comprises:
and according to the selection sequence of the video materials corresponding to the target image frames, carrying out image splicing processing on the target image frames to obtain the spliced video cover.
6. The method according to any one of claims 1 to 5, wherein after the picture splicing processing is performed on the selected plurality of target image frames to obtain the spliced video cover, the method further comprises:
when detecting that the video material corresponding to a first target image frame in the spliced video cover has been deleted, selecting a second target image frame from the video materials other than the plurality of target video materials; and
replacing the first target image frame in the spliced video cover with the second target image frame.
7. The method of claim 1, wherein after obtaining the plurality of video materials selected by the user, further comprising:
and carrying out video splicing processing on the plurality of video materials to obtain a spliced video.
8. An apparatus for generating a spliced video cover, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video materials selected by a user;
a determining module for determining a plurality of target video materials among the plurality of video materials;
the selecting module is used for selecting at least one target image frame from each target video material;
and the splicing module is used for performing picture splicing processing on the selected target image frames to obtain a spliced video cover.
9. A computer device comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the method for generating a spliced video cover of any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method for generating a spliced video cover of any one of claims 1-7.
Priority Applications (1)

Application Number: CN202010091783.4A
Priority Date / Filing Date: 2020-02-13 / 2020-02-13
Title: Method, device, equipment and storage medium for generating spliced video covers
Status: Active; granted as CN111369434B

Publications (2)

CN111369434A (published 2020-07-03)
CN111369434B (granted, published 2023-08-25)

Family

ID: 71208043

Family Applications (1)

Application Number: CN202010091783.4A (Active, granted as CN111369434B)
Priority Date / Filing Date: 2020-02-13 / 2020-02-13
Title: Method, device, equipment and storage medium for generating spliced video covers

Country Status (1)

CN: CN111369434B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104244024A * | 2014-09-26 | 2014-12-24 | 北京金山安全软件有限公司 | Video cover generation method and device and terminal
CN108650524A * | 2018-05-23 | 2018-10-12 | 腾讯科技(深圳)有限公司 | Video cover generation method, device, computer equipment and storage medium
CN110213672A * | 2019-07-04 | 2019-09-06 | 腾讯科技(深圳)有限公司 | Video generation, playback method, system, device, storage medium and equipment

Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
WO2023217194A1 * | 2022-05-13 | 2023-11-16 | 阿里巴巴(中国)有限公司 | Dynamic video cover generation method

Also Published As

Publication number | Publication date
CN111369434B (en) | 2023-08-25

Similar Documents

Publication | Title
CN108401124B (en) Video recording method and device
CN108063981B (en) Method and device for setting attributes of live broadcast room
CN110278464B (en) Method and device for displaying list
CN109168073B (en) Method and device for displaying cover of live broadcast room
CN108965922B (en) Video cover generation method and device and storage medium
CN110992493A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110740340B (en) Video live broadcast method and device and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN110225390B (en) Video preview method, device, terminal and computer readable storage medium
CN110288689B (en) Method and device for rendering electronic map
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN112104648A (en) Data processing method, device, terminal, server and storage medium
CN111586444B (en) Video processing method and device, electronic equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN112565806A (en) Virtual gift presenting method, device, computer equipment and medium
CN109660876B (en) Method and device for displaying list
CN109783176B (en) Page switching method and device
CN110769120A (en) Method, device, equipment and storage medium for message reminding
CN113160031A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112616082A (en) Video preview method, device, terminal and storage medium
CN112118353A (en) Information display method, device, terminal and computer readable storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN111464829B (en) Method, device and equipment for switching media data and storage medium
CN111064657B (en) Method, device and system for grouping concerned accounts

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant