CN111369434B - Method, device, equipment and storage medium for generating spliced video covers - Google Patents

Method, device, equipment and storage medium for generating spliced video covers

Info

Publication number
CN111369434B
CN111369434B (application CN202010091783.4A)
Authority
CN
China
Prior art keywords
video
target image
target
video materials
materials
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010091783.4A
Other languages
Chinese (zh)
Other versions
CN111369434A (en)
Inventor
罗超
高杰杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202010091783.4A priority Critical patent/CN111369434B/en
Publication of CN111369434A publication Critical patent/CN111369434A/en
Application granted granted Critical
Publication of CN111369434B publication Critical patent/CN111369434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for generating a spliced video cover, and belongs to the technical field of image processing. The method comprises the following steps: acquiring a plurality of video materials selected by a user; determining a plurality of target video materials from the plurality of video materials; selecting at least one target image frame from each target video material; and performing picture splicing processing on the selected target image frames to obtain a spliced video cover. With the method provided by the application, the generated spliced video cover can express more information about the spliced video, so that a user can get a rough idea of the content of the spliced video by viewing its cover.

Description

Method, device, equipment and storage medium for generating spliced video covers
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for generating a spliced video cover.
Background
Audio and video software today offers increasingly rich functionality. For example, some audio/video applications provide a function for creating spliced videos: the user first selects a plurality of video materials and then clicks a splicing option to generate the spliced video. The user also needs to upload a picture to serve as the cover of the spliced video.
In the related art, where a single user-uploaded picture serves as the cover of the spliced video, the information about the spliced video that the cover can express is very limited, and a viewing user cannot get even a rough idea of the video's content by viewing the cover.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for generating a spliced video cover, which can solve the above technical problem in the related art. The technical solutions are as follows:
in a first aspect, a method for generating a stitched video cover is provided, the method comprising:
acquiring a plurality of video materials selected by a user;
determining a plurality of target video materials from the plurality of video materials;
selecting at least one target image frame from each target video material;
and performing picture splicing processing on the selected target image frames to obtain a spliced video cover.
In one possible implementation, the determining a plurality of target video materials from the plurality of video materials includes:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the determining a plurality of target video materials from the plurality of video materials includes:
and determining the N video materials with the highest play counts from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the selecting at least one target image frame in each target video material includes:
and selecting the cover of each target video material as a target image frame.
In one possible implementation manner, the performing a picture stitching process on the selected plurality of target image frames to obtain a stitched video cover includes:
and performing picture splicing processing on the target image frames according to the selection sequence of the video materials corresponding to the target image frames to obtain the spliced video cover.
In one possible implementation manner, after performing the image stitching process on the selected plurality of target image frames to obtain the stitched video cover, the method further includes:
when it is detected that the video material corresponding to a first target image frame in the spliced video cover has been deleted, selecting a second target image frame from the video materials other than the plurality of target video materials;
the first target image frame in the stitched video cover is replaced with the second target image frame.
In one possible implementation manner, after the obtaining the plurality of video materials selected by the user, the method further includes:
and performing video splicing processing on the plurality of video materials to obtain spliced video.
In a second aspect, there is provided an apparatus for stitched video cover generation, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video materials selected by a user;
a determining module, configured to determine a plurality of target video materials from the plurality of video materials;
the selecting module is used for selecting at least one target image frame from each target video material;
and the splicing module is used for carrying out picture splicing processing on the selected target image frames to obtain a spliced video cover.
In one possible implementation manner, the determining module is configured to:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation manner, the determining module is configured to:
and determining the N video materials with the highest play counts from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation manner, the selecting module is configured to:
and selecting the cover of each target video material as a target image frame.
In one possible implementation manner, the splicing module is configured to:
and performing picture splicing processing on the target image frames according to the selection sequence of the video materials corresponding to the target image frames to obtain the spliced video cover.
In a possible implementation manner, the apparatus further includes a replacing module, configured to:
when it is detected that the video material corresponding to a first target image frame in the spliced video cover has been deleted, selecting a second target image frame from the video materials other than the plurality of target video materials;
the first target image frame in the stitched video cover is replaced with the second target image frame.
In one possible implementation manner, the apparatus further includes a video stitching module configured to:
and performing video splicing processing on the plurality of video materials to obtain spliced video.
In a third aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the method of stitched video cover generation of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method of stitched video cover generation of the first aspect.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
according to the method for generating the spliced video cover, the plurality of target video materials are determined from the plurality of video materials selected by the user, at least one target image frame is selected from each target video material, and finally, the selected plurality of target image frames are subjected to picture splicing processing to obtain the spliced video cover. Therefore, the generated spliced video cover contains information of a plurality of video materials, and the expressed spliced video has more information quantity, so that a watching user can roughly know the content of the spliced video by watching the spliced video cover.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for generating a stitched video cover according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus for generating a stitched video cover according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a spliced video according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a stitched video cover according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another exemplary stitched video cover according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The embodiment of the application provides a method for generating a spliced video cover, which can be implemented by a terminal or a server. The terminal may be a mobile terminal such as a mobile phone, a tablet computer, or a notebook computer, or a fixed terminal such as a desktop computer. The server may be a single server or a server cluster.
As shown in fig. 1, the process flow of the method for generating a stitched video cover may include the following steps:
in step 101, a plurality of video materials selected by a user are acquired.
In implementation, when a user wants to make a spliced video, the user can open an application capable of making spliced videos on the terminal. After the application is opened, the user can perform the corresponding operation so that the terminal displays a video material library, in which the user can then select a plurality of video materials. In addition, a minimum number of video materials may be set; when the number of video materials selected by the user is smaller than this minimum, the user is prompted to continue selecting video materials.
In one possible implementation manner, the video material library is a local video material library, and when the method provided by the embodiment of the present application is executed by a server (the server may be a background server of an application program), the terminal may send the video material selected by the user in the local video material library to the server. When the method provided by the embodiment of the application is executed by the terminal, the terminal can directly acquire a plurality of video materials from the local video material library.
In another possible implementation manner, the video material library is an online video material library, and when the method provided by the embodiment of the application is executed by the server, the terminal can send the identifier of the video material selected by the user from the online video material library to the server, and the server can acquire the corresponding video material from the online video material library based on the received identifier. When the method provided by the embodiment of the application is executed by the terminal, the terminal can acquire the video material selected by the user from the online video material library.
It should be noted that, after the plurality of video materials selected by the user are acquired, video splicing processing may be performed on them to obtain the spliced video. During splicing, the video materials may be spliced according to their selection order.
As shown in fig. 5, the video materials may be stitched according to their selection order, where video material n denotes the n-th video material selected by the user.
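As a concrete illustration of this step, the following is a minimal sketch of splicing in selection order, assuming moviepy 1.x and a plain list of file paths ordered by user selection; the patent does not prescribe any particular library.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def splice_videos(material_paths, out_path="spliced.mp4"):
    # material_paths is assumed to be ordered by user selection:
    # material_paths[0] is video material 1, and so on.
    clips = [VideoFileClip(p) for p in material_paths]
    spliced = concatenate_videoclips(clips)   # preserves list order
    spliced.write_videofile(out_path)
    for clip in clips:
        clip.close()
```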
In step 102, a plurality of target video materials are determined from a plurality of video materials.
In an implementation, after a plurality of video materials selected by a user are acquired, a plurality of target video materials can be determined in the plurality of video materials, and a spliced video cover is generated by using image frames in the target video materials.
The rules for determining the plurality of target video materials among the plurality of video materials may be as follows:
in one possible implementation, the first N video materials selected by the user are determined among the plurality of video materials as the plurality of target video materials.
Wherein N is a set positive integer, and N should be less than the minimum number of video materials. For example, N may be 4.
In practice, the user selects the video materials in a certain order. When determining the target video materials, the first N video materials in this selection order are selected from the plurality of video materials and determined as the target video materials.
For example, if the user selects 10 video materials, the first, second, third, and fourth selected video materials may be determined as target video materials.
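A minimal sketch of this rule, assuming the materials are held in a Python list already ordered by selection:

```python
def first_n_targets(materials, n=4):
    # The list is assumed ordered by user selection, so the first N
    # entries are the first N materials the user selected.
    return materials[:n]
```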
In another possible implementation, the N video materials with the highest play counts are determined from the plurality of video materials as the target video materials.
In implementation, if the video materials are selected from the online video material library, the library also stores the play count of each video material. The top N (e.g., top 4) video materials with the highest play counts among the plurality of video materials may be determined as the target video materials.
Determining the N most-played video materials as the target video materials means that the resulting spliced video cover contains information from these materials, which improves the appeal of the spliced video and increases its play count.
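A sketch of this rule; the "plays" field is a hypothetical name standing in for the play count stored by the online material library:

```python
def top_played_targets(materials, n=4):
    # Sort by stored play count (hypothetical "plays" field), highest
    # first, and keep the top N as the target video materials.
    return sorted(materials, key=lambda m: m["plays"], reverse=True)[:n]
```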
In another possible implementation, N video materials are selected uniformly, by selection order, from the plurality of video materials as the target video materials.
In implementation, the target video materials can be picked at even intervals over the selection order. For example, when the user selects 10 video materials and N is 4, the video materials selected 1st, 4th, 7th, and 10th may be determined as the target video materials.
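A sketch of uniform selection over the selection order; with 10 materials and N = 4 it picks indices 0, 3, 6, and 9, i.e. the 1st, 4th, 7th, and 10th selected materials, matching the example above:

```python
def uniform_targets(materials, n=4):
    # Pick N evenly spaced positions over the selection order.
    if n >= len(materials):
        return list(materials)
    if n == 1:
        return [materials[0]]
    last = len(materials) - 1
    indices = [round(i * last / (n - 1)) for i in range(n)]
    return [materials[i] for i in indices]
```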
In another possible implementation, N video materials are randomly selected from the plurality of video materials as the target video material.
In another possible implementation, the target video material may also be determined by the user, that is, the user may mark the target video material when selecting the video material, and the terminal or the server may determine the target video material according to the mark in the video material.
At step 103, at least one target image frame is selected in each target video material.
A target image frame can be selected from each target video material; it can be any image frame of that material, for example the material's cover.
In implementation, the cover of each target video material may be taken as the target image frame. A video material's cover is generally an image frame specifically chosen to be suitable as a cover, and it is what other users see when browsing the material in the video material library. Selecting the covers of the video materials contained in the spliced video as the target image frames therefore expresses the information of those materials more clearly.
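The sketch below grabs a frame to stand in for a material's cover. A real material library would store a curated cover image; decoding the first frame with OpenCV is only an assumed fallback for illustration:

```python
import cv2                      # OpenCV, an illustrative choice
from PIL import Image

def cover_frame(video_path):
    # Decode the first frame as a stand-in for the material's cover;
    # a real library would return the stored cover image instead.
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"no decodable frame in {video_path}")
    return Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
```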
In step 104, the selected plurality of target image frames are subjected to image stitching processing, so as to obtain a stitched video cover.
In implementation, when performing the image stitching process on the plurality of target image frames, the plurality of target image frames may be stitched along a horizontal direction (as shown in fig. 6 and 7), the plurality of target image frames may be stitched along a vertical direction, or the plurality of target image frames may be stitched in a manner of multiple rows and multiple columns (for example, two rows and two columns), which is not limited in the embodiment of the present application.
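A minimal stitching sketch using Pillow; the tile size and the single-row/single-column layouts are illustrative choices rather than anything mandated by the method (a rows-by-columns grid would paste at (col * w, row * h) instead):

```python
from PIL import Image

def stitch_cover(frames, direction="horizontal", tile=(320, 180)):
    # Resize every target image frame to a common tile size, then
    # paste the tiles side by side (or top to bottom) onto one canvas.
    tiles = [f.resize(tile) for f in frames]
    w, h = tile
    if direction == "horizontal":
        cover = Image.new("RGB", (w * len(tiles), h))
        for i, t in enumerate(tiles):
            cover.paste(t, (i * w, 0))
    else:  # vertical
        cover = Image.new("RGB", (w, h * len(tiles)))
        for i, t in enumerate(tiles):
            cover.paste(t, (0, i * h))
    return cover
```

For example, stitch_cover([cover_frame(p) for p in target_paths]) would produce a horizontal cover like the one in fig. 6.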
When the plurality of target image frames are stitched, a certain stitching order must be followed, particularly for horizontal and vertical stitching. The stitching order can be determined as follows:
in one possible implementation manner, according to the selection sequence of the video materials corresponding to the target image frames, the image stitching process is performed on the target image frames, so as to obtain the stitched video cover.
Here, the selection order of the video materials corresponding to the target image frames is simply the order in which the user selected those video materials.
In practice, as shown in fig. 6, video material 1 represents the video material the user selected first. The target video materials are the first 4 video materials selected by the user, the target image frames are the covers of these materials, and the covers are stitched horizontally, with the selection order of the corresponding video materials increasing from left to right.
In another possible implementation, the target image frames are stitched in order of the play counts of their corresponding video materials to obtain the spliced video cover.
In implementation, the plurality of target image frames may be stitched in descending order of the play counts of the corresponding video materials.
For example, when the target image frames are stitched horizontally, the play count of the video material corresponding to each frame decreases from left to right.
In another possible implementation, the image stitching process may also be performed on the plurality of target image frames in a random order.
In addition, after the spliced video cover is generated, if a video material in the video material library is deleted, the corresponding target image frame in the spliced video cover may no longer display normally. In one possible implementation, when it is detected that the video material corresponding to a first target image frame in the spliced video cover has been deleted, a second target image frame is selected from the video materials other than the plurality of target video materials, and the first target image frame in the spliced video cover is replaced with the second target image frame. That is, if a target video material is deleted, a new target video material is selected from the non-target video materials, and a target image frame is selected from it to replace the first target image frame in the spliced video cover.
In another possible implementation, when it is detected that the video material corresponding to the first target image frame in the spliced video cover has been deleted, a set picture is acquired and the first target image frame in the cover is replaced with it. That is, if a target video material is deleted, the first target image frame in the spliced video cover is replaced with a default set picture.
It should be noted that, if a video material is deleted, the spliced video itself also needs corresponding processing: the remaining video materials may be spliced again to obtain a new spliced video, or the deleted portion may be replaced with a default video segment. The new spliced video is then processed by the cover-generation method of this embodiment to obtain a new spliced video cover, as sketched below.
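A sketch tying the two replacement strategies together, reusing the cover_frame and stitch_cover sketches above; materials are again file paths, and fallback_img (a preloaded PIL image) plays the role of the default set picture. All names here are illustrative:

```python
def refresh_cover(materials, targets, deleted, fallback_img):
    # Drop the deleted material from the targets, then re-pick one
    # material from the non-targets, or fall back to the set picture.
    frames = [cover_frame(m) for m in targets if m != deleted]
    spare = [m for m in materials if m not in targets and m != deleted]
    if spare:
        frames.append(cover_frame(spare[0]))   # re-picked target material
    else:
        frames.append(fallback_img)            # default set picture
    return stitch_cover(frames)
```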
According to the method for generating the spliced video cover provided by this embodiment, a plurality of target video materials are determined from the plurality of video materials selected by the user, at least one target image frame is selected from each target video material, and finally the selected target image frames are stitched to obtain the spliced video cover. The generated cover thus contains information from a plurality of video materials and expresses a larger amount of information about the spliced video, so that a viewing user can get a rough idea of the content of the spliced video by viewing its cover.
Based on the same technical concept, the embodiment of the present application further provides a device for generating a spliced video cover, where the device may be a terminal or a server in the above embodiment, as shown in fig. 2, and the device includes:
an obtaining module 201, configured to obtain a plurality of video materials selected by a user;
a determining module 202, configured to determine a plurality of target video materials from a plurality of video materials;
a selecting module 203, configured to select at least one target image frame from each target video material;
and the splicing module 204 is used for carrying out picture splicing processing on the selected target image frames to obtain a spliced video cover.
In one possible implementation, the determining module 202 is configured to:
and determining the first N video materials selected by the user from the plurality of video materials as a plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the determining module 202 is configured to:
and determining the N video materials with the highest play counts from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
In one possible implementation, the selecting module 203 is configured to:
and selecting the cover of each target video material as a target image frame.
In one possible implementation, the stitching module 204 is configured to:
and performing picture splicing processing on the target image frames according to the selection sequence of the video materials corresponding to the target image frames to obtain a spliced video cover.
In one possible implementation, the apparatus further includes a replacement module configured to:
when it is detected that the video material corresponding to a first target image frame in the spliced video cover has been deleted, selecting a second target image frame from the video materials other than the plurality of target video materials;
the first target image frame in the stitched video cover is replaced with the second target image frame.
In one possible implementation, the apparatus further includes a video stitching module configured to:
and performing video splicing processing on the plurality of video materials to obtain spliced video.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiment has been described in detail in the method embodiment, and will not be detailed again here.
It should be noted that the division into the above functional modules in the apparatus for generating a spliced video cover provided by the foregoing embodiment is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment and the method embodiment for generating a spliced video cover belong to the same concept; the detailed implementation process is shown in the method embodiment and is not repeated here.
The embodiment of the application also provides a computer device, which may be a terminal. Fig. 3 is a block diagram of a terminal according to an embodiment of the present application. The terminal 300 may be a portable mobile terminal such as a smartphone, a tablet computer, or a smart camera. The terminal 300 may also be called user equipment, a portable terminal, or other names.
In general, the terminal 300 includes: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 301 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the method of stitched video cover generation provided in the present application.
In some embodiments, the terminal 300 may further optionally include: a peripheral interface 303, and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, display 305, camera assembly 306, audio circuitry 307, positioning assembly 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 304 communicates with communication networks and other communication devices via electromagnetic signals: it converts electrical signals into electromagnetic signals for transmission and converts received electromagnetic signals back into electrical signals. Optionally, the radio frequency circuit 304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may also include NFC (Near Field Communication) related circuitry, which is not limited in the present application.
The display screen 305 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, it also has the ability to collect touch signals on or above its surface; such a touch signal may be input to the processor 301 as a control signal for processing, and the display screen 305 may then provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, forming the front panel of the terminal 300; in other embodiments, there may be at least two display screens 305, disposed on different surfaces of the terminal 300 or in a folded design; in still other embodiments, the display screen 305 may be a flexible display disposed on a curved or folded surface of the terminal 300. The display screen 305 may even be set to an irregular, non-rectangular shape, i.e., an irregularly-shaped screen. The display screen 305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 306 is used to capture images or video. Optionally, the camera assembly 306 includes a front camera and a rear camera. In general, the front camera is used for video calls or selfies, and the rear camera is used for shooting pictures or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, and a wide-angle camera, so that the main camera and the depth camera can be fused for a background-blurring function, and the main camera and the wide-angle camera can be fused for panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 306 may also include a flash, which may be a single-color-temperature or dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
Audio circuitry 307 is used to provide an audio interface between the user and terminal 300. The audio circuit 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing, or inputting the electric signals to the radio frequency circuit 304 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 300. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuit 304 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the terminal 300, to enable navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 309 is used to power the various components in the terminal 300. The power supply 309 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 309 includes a rechargeable battery, the battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the terminal 300 further includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyroscope sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 300. For example, the acceleration sensor 311 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 301 may control the display screen 305 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 312 may detect the body direction and the rotation angle of the terminal 300, and the gyro sensor 312 may collect the 3D motion of the user to the terminal 300 in cooperation with the acceleration sensor 311. The processor 301 may implement the following functions according to the data collected by the gyro sensor 312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 313 may be disposed at a side frame of the terminal 300 and/or at a lower layer of the display 305. When the pressure sensor 313 is provided at the side frame of the terminal 300, a grip signal of the terminal 300 by a user may be detected, and left-right hand recognition or shortcut operation may be performed according to the grip signal. When the pressure sensor 313 is disposed at the lower layer of the display screen 305, control of the operability control on the UI interface can be achieved according to the pressure operation of the user on the display screen 305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 314 is used to collect a user's fingerprint to identify the user based on it. When the user's identity is recognized as trusted, the processor 301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 314 may be provided on the front, back, or side of the terminal 300. When a physical key or a manufacturer logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with it.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, processor 301 may control the display brightness of display screen 305 based on the intensity of ambient light collected by optical sensor 315. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 305 is turned down. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera assembly 306 according to the ambient light intensity collected by the optical sensor 315.
A proximity sensor 316, also referred to as a distance sensor, is typically disposed on the front face of the terminal 300. The proximity sensor 316 is used to collect the distance between the user and the front of the terminal 300. In one embodiment, when the proximity sensor 316 detects a gradual decrease in the distance between the user and the front of the terminal 300, the processor 301 controls the display 305 to switch from the bright screen state to the off screen state; when the proximity sensor 316 detects that the distance between the user and the front surface of the terminal 300 gradually increases, the processor 301 controls the display screen 305 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 3 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The embodiment of the application also provides a computer device, which may be a server. Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. The server 400 may vary considerably in configuration and performance, and may include one or more processors (central processing units, CPU) 401 and one or more memories 402, where the memory 402 stores at least one instruction that is loaded and executed by the processor 401 to implement the method for generating a spliced video cover.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method of stitched video cover generation in the above embodiments. For example, the computer readable storage medium may be ROM (Read-Only Memory), random access Memory (Random Access Memory, RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (8)

1. A method of stitched video cover generation, the method comprising:
acquiring a plurality of video materials selected by a user;
determining a plurality of target video materials from the plurality of video materials;
selecting a cover of each target video material as a target image frame;
performing picture splicing processing on the selected target image frames to obtain a spliced video cover;
selecting a second target image frame from video materials except the plurality of target video materials under the condition that the video materials corresponding to the first target image frame in the spliced video cover are deleted, wherein the second target image frame is a set picture or is a target image frame selected from the re-selected target video materials;
the first target image frame in the stitched video cover is replaced with the second target image frame.
2. The method of claim 1, wherein said determining a plurality of target video materials from said plurality of video materials comprises:
and determining the first N video materials selected by the user from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
3. The method of claim 1, wherein said determining a plurality of target video materials from said plurality of video materials comprises:
and determining the first N video materials with the highest play counts from the plurality of video materials as the plurality of target video materials, wherein N is a set positive integer.
4. The method of claim 1, wherein the performing a picture stitching process on the selected plurality of target image frames to obtain a stitched video cover comprises:
and performing picture splicing processing on the target image frames according to the selection sequence of the video materials corresponding to the target image frames to obtain the spliced video cover.
5. The method of claim 1, wherein after the obtaining the plurality of video materials selected by the user, further comprising:
and performing video splicing processing on the plurality of video materials to obtain spliced video.
6. An apparatus for generating a stitched video cover, the apparatus comprising:
the acquisition module is used for acquiring a plurality of video materials selected by a user;
a determining module, configured to determine a plurality of target video materials from the plurality of video materials;
the selecting module is used for selecting the cover of each target video material as a target image frame;
the splicing module is used for carrying out picture splicing processing on the selected target image frames to obtain a spliced video cover;
the replacing module is used for selecting a second target image frame from the video materials except the plurality of target video materials under the condition that the video materials corresponding to the first target image frame in the spliced video cover are detected to be deleted, wherein the second target image frame is a set picture, or the second target image frame is a target image frame selected from the reselected target video materials; the first target image frame in the stitched video cover is replaced with the second target image frame.
7. A computer device comprising a processor and a memory having stored therein at least one instruction that is loaded and executed by the processor to implement the method of stitched video cover generation of any of claims 1-5.
8. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method of stitched video cover generation of any of claims 1-5.
CN202010091783.4A 2020-02-13 2020-02-13 Method, device, equipment and storage medium for generating spliced video covers Active CN111369434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010091783.4A CN111369434B (en) 2020-02-13 2020-02-13 Method, device, equipment and storage medium for generating spliced video covers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010091783.4A CN111369434B (en) 2020-02-13 2020-02-13 Method, device, equipment and storage medium for generating spliced video covers

Publications (2)

Publication Number Publication Date
CN111369434A (en) 2020-07-03
CN111369434B (en) 2023-08-25

Family

ID=71208043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010091783.4A Active CN111369434B (en) 2020-02-13 2020-02-13 Method, device, equipment and storage medium for generating spliced video covers

Country Status (1)

Country Link
CN (1) CN111369434B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114996553A (en) * 2022-05-13 2022-09-02 阿里巴巴(中国)有限公司 Dynamic video cover generation method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244024B (en) * 2014-09-26 2018-05-08 北京金山安全软件有限公司 Video cover generation method and device and terminal
CN108650524B (en) * 2018-05-23 2022-08-16 腾讯科技(深圳)有限公司 Video cover generation method and device, computer equipment and storage medium
CN110213672B (en) * 2019-07-04 2021-06-18 腾讯科技(深圳)有限公司 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment

Also Published As

Publication number Publication date
CN111369434A (en) 2020-07-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant