CN110650375A - Video processing method, device, equipment and storage medium - Google Patents

Video processing method, device, equipment and storage medium

Info

Publication number
CN110650375A
CN110650375A
Authority
CN
China
Prior art keywords
video
time point
video clip
clip
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910995388.6A
Other languages
Chinese (zh)
Inventor
邵和明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910995388.6A priority Critical patent/CN110650375A/en
Publication of CN110650375A publication Critical patent/CN110650375A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47214End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for content reservation or setting reminders; for requesting event notification, e.g. of sport results or stock market
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Databases & Information Systems (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Technology Law (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the application provide a video processing method, apparatus, device, and storage medium. The method includes: receiving a video collection instruction; in response to the collection instruction, determining a start time point and an end time point corresponding to the video clip to be collected, where the duration of the video clip to be collected is less than the total duration of the video; acquiring the video clip according to the start time point and the end time point; and saving the video clip. The method makes it easy to later find the video clips a user cares about, greatly improving the efficiency of video collection and video search and improving the user experience.

Description

Video processing method, device, equipment and storage medium
Technical Field
The embodiments of the application relate to the field of internet technology, and in particular, but not exclusively, to a video processing method, apparatus, device, and storage medium.
Background
When a user watches a video using video playback software or in a web page, the user sometimes wants to watch the video again later, or to find it again quickly. To this end, the user can add the video to a collection for later retrieval.
At present, a collection button is usually placed at the bottom of a video; by clicking it, the user adds the entire video to a collection list.
However, if the user wants to find a specific segment of a collected video, the user must first locate the video and then drag the progress bar to find the desired segment. This is time-consuming, inconvenient to operate, and results in a poor user experience.
Disclosure of Invention
The embodiments of the application provide a video processing method, apparatus, device, and storage medium, which can collect the video clips in a video that a user cares about, facilitate subsequent searches for those clips, and greatly improve the user experience of video collection and video retrieval.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides a video processing method, including:
receiving a video collection instruction;
in response to the collection instruction, determining a start time point and an end time point corresponding to the video clip to be collected; wherein the duration of the video clip to be collected is less than the total duration of the video;
acquiring the video clip according to the starting time point and the ending time point; and
saving the video clip.
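The claimed steps can be sketched in code. This is purely an illustrative sketch; the patent does not prescribe an implementation, and all class and function names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VideoClip:
    """A clip to be collected, identified by its parent video and time points."""
    video_id: str
    start: float  # start time point, in seconds from the beginning of the video
    end: float    # end time point, in seconds

def collect_clip(video_id: str, total_duration: float,
                 start: float, end: float) -> VideoClip:
    """Handle a collection instruction: validate the time points and build the clip."""
    if not (0 <= start < end <= total_duration):
        raise ValueError("start time point must precede end time point within the video")
    if end - start >= total_duration:
        raise ValueError("clip duration must be less than the total video duration")
    # In the claimed method, this clip would then be saved (step: saving the video clip).
    return VideoClip(video_id, start, end)
```

The two validation checks mirror the claim's constraints: the start point precedes the end point, and the clip is strictly shorter than the whole video.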
An embodiment of the present application provides a video processing method, including:
receiving a search request, wherein the search request comprises search information;
acquiring a label of a video clip matched with the search information in a preset storage unit;
acquiring and playing the video clip according to the label of the video clip;
wherein the video duration between the start time point and the end time point of the video segment is less than the total duration of the video from which the segment is taken.
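The claimed search flow, matching search information against clip tags in a storage unit, can be sketched as follows. The names and the matching rule (a case-insensitive substring test) are hypothetical; the patent does not specify a matching algorithm:

```python
def find_clips_by_tag(storage: dict, query: str) -> list:
    """Return the clips whose tag matches the search information.

    `storage` stands in for the preset storage unit: it maps a tag
    (e.g. a user-defined clip name) to the clips saved under that tag.
    """
    query = query.lower()
    return [clip
            for tag, clips in storage.items()
            if query in tag.lower()      # "matches" = substring, as an assumption
            for clip in clips]
```

In the claimed method, the matched clip would then be acquired and played according to its tag.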
An embodiment of the present application provides a video processing apparatus, including:
the first receiving module is used for receiving a video collection instruction;
the response module is used for responding to the collection instruction and determining a starting time point and an ending time point corresponding to the video clips to be collected; wherein the duration of the video clips to be collected is less than the total duration of the video;
a first obtaining module, configured to obtain the video segment according to the start time point and the end time point;
and the storage module is used for storing the video clips.
An embodiment of the present application provides a video processing apparatus, including:
a second receiving module, configured to receive a search request, where the search request includes search information;
the second acquisition module is used for acquiring the label of the video clip matched with the search information in a preset storage unit;
the third acquisition module is used for acquiring and playing the video clip according to the label of the video clip;
wherein the video duration between the start time point and the end time point of the video segment is less than the total duration of the video from which the segment is taken.
An embodiment of the present application provides a video processing apparatus, including:
a memory for storing executable instructions; and
a processor for implementing the above method when executing the executable instructions stored in the memory.
An embodiment of the application provides a storage medium storing executable instructions that, when executed, cause a processor to implement the above method.
The embodiments of the application have the following beneficial effects: a video clip is acquired according to the determined start time point and end time point, where the video duration between the two points is less than the total duration of the video. Thus, when a video clip is collected, only the content between the start time point and the end time point is collected, i.e., only the content the user cares about or considers important. This makes it easy to find that content later, greatly improves the efficiency of video collection and video search, and improves the user experience.
Drawings
FIG. 1A is an interface diagram for video collection in the related art;
FIG. 1B is a diagram of an interface for finding a video clip in a video according to the related art;
fig. 2A is a schematic diagram of an alternative architecture of a video processing system according to an embodiment of the present application;
fig. 2B is a schematic structural diagram of the video processing system applied to a blockchain system according to an embodiment of the present application;
fig. 2C is a schematic diagram of an alternative block structure according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
fig. 4 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 5 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 6 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 7 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 8 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 9 is an alternative flow chart of a video processing method provided by the embodiment of the present application;
fig. 10A is an alternative interface diagram of a video processing method according to an embodiment of the present application;
fig. 10B is an alternative interface diagram of a video processing method according to an embodiment of the present application;
FIG. 10C is an interface diagram of a favorites list provided by an embodiment of the present application;
FIG. 10D is an interface diagram of an access play interface provided by an embodiment of the present application;
fig. 11 is an alternative flowchart of a video processing method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
For the convenience of understanding the video processing method according to the embodiment of the present application, a video collection method and problems existing in the related art will be described first.
In the related art, a collection button is usually placed at the bottom of a video. Referring to fig. 1A, an interface diagram for collecting a video in the related art, a collection button 11 is placed at the bottom of the video displayed on the current video interface 10; by clicking the collection button 11, the user adds the entire video to a to-be-watched list 12. However, when the user wants to find a specific segment of the video, the user must first locate the video. As shown in fig. 1B, an interface diagram for finding a video segment in the related art, the user then has to drag the progress bar 13 to find the desired segment.
The video collection method in the related art has at least the following disadvantages: 1) if the user wants to save only a specific segment of a video, the related art cannot do so; 2) if the user wants to collect a particular clip but can only collect the whole video, then to find the clip again the user must first locate the video and then drag the progress bar bit by bit, which is time-consuming and inconvenient; 3) the user must remember the original name of the video and cannot give it a custom name that is easier to remember, which makes subsequent searches inconvenient.
To address at least one of the problems in the related art, the embodiments of the application provide a video processing method: a video clip is acquired according to a determined start time point and end time point, where the duration between the two points is less than the total duration of the video, and the resulting clip is collected. This makes it easy for the user to later find the content of interest or important content, greatly improves the efficiency of video collection and video search, and improves the user experience.
An exemplary application of the video processing device provided in the embodiments of the application is described below. The device may be implemented as any intelligent terminal capable of playing videos, such as a notebook computer, tablet computer, desktop computer, mobile device (e.g., mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), virtual reality device, augmented reality device, or smart home appliance. In the following, an exemplary application in which the device is implemented as a terminal will be explained.
Referring to fig. 2A, fig. 2A is a schematic diagram of an alternative architecture of the video processing system 20 according to the embodiment of the present application. To enable playing of a video on the terminal, the terminal 100 (the terminal 100-1 and the terminal 100-2 are exemplarily shown) is connected to the server 300 through the network 200, and the network 200 may be a wide area network or a local area network, or a combination of both.
The terminal 100 displays a video on a current page 110 (the current page 110-1 and the current page 110-2 are exemplarily shown). In this embodiment, the terminal may obtain a collection instruction input by the user on the current page, determine a start time point and an end time point in response to the collection instruction, determine a video clip to be collected, and send attribute information corresponding to the video clip to be collected to the server 300 through the network 200, so that the server 300 collects the video clip, returns a collection result to the terminal 100, and displays the collection result on the current page 110 of the terminal 100.
The video processing system 20 of the embodiments of the application may also be a distributed system 101 in the form of a blockchain system. Referring to fig. 2B, a schematic structural diagram of the video processing system 20 applied to a blockchain system, the distributed system 101 may be formed by a plurality of nodes 102 (computing devices of any form in the access network, such as servers and user terminals) and clients 103. A Peer-to-Peer (P2P) network is formed between the nodes; the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, an intermediate layer, an operating system layer, and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 2B, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) the application is used for being deployed in a block chain, realizing specific services according to actual service requirements, recording data related to the realization functions to form recording data, carrying a digital signature in the recording data to represent a source of task data, and sending the recording data to other nodes in the block chain system, so that the other nodes add the recording data to a temporary block when the source and integrity of the recording data are verified successfully.
For example, the services implemented by the application include:
2.1) Wallet, which provides electronic money transaction functions, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as acknowledgment that the transaction is valid). Of course, the wallet also supports querying the electronic money remaining at an electronic money address.
2.2) Shared ledger, which provides storage, query, and modification of account data. Record data of operations on the account data is sent to other nodes in the blockchain system; after the other nodes verify its validity, the record data is stored in a temporary block as acknowledgment that the account data is valid, and a confirmation may be sent to the node that initiated the operation.
2.3) Smart contracts: computerized agreements that can enforce the terms of a contract, implemented as code deployed on the shared ledger and executed when certain conditions are met. They are used to complete automated transactions according to actual business requirements, for example querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for trading; they may also process received information.
3) Blockchain, which comprises a series of blocks (Blocks) linked to one another in the chronological order of their generation. Once added to the blockchain, a new block cannot be removed; the blocks record the data submitted by nodes in the blockchain system.
4) Consensus, a process in a blockchain network used to reach agreement on the transactions in a block among the nodes involved; the agreed block is appended to the end of the blockchain. Mechanisms for achieving consensus include Proof of Work (PoW), Proof of Stake (PoS), Delegated Proof of Stake (DPoS), Proof of Elapsed Time (PoET), and so on.
Referring to fig. 2C, an alternative schematic diagram of a block structure (Block Structure) provided in this embodiment, each block includes the hash value of the transaction records stored in the block (the hash value of the block) and the hash value of the previous block; the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp of its generation. A blockchain is essentially a decentralized database, a string of data blocks associated using cryptography; each data block contains related information used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
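The hash linkage between blocks described above can be illustrated with a toy example. This is a didactic sketch only; real blockchain implementations differ in serialization, consensus, and what is hashed:

```python
import hashlib
import json
import time

def make_block(records: list, prev_hash: str) -> dict:
    """Create a block holding record data, linked to the previous block by its hash."""
    block = {"records": records, "prev_hash": prev_hash,
             "timestamp": time.time()}
    # The block's own hash covers its records and the previous block's hash,
    # so tampering with any earlier block breaks every later link.
    block["hash"] = hashlib.sha256(
        json.dumps({"records": records, "prev_hash": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return block

# A two-block toy chain: each block stores the hash of the one before it.
genesis = make_block(["clip saved: v1, 10s-30s"], prev_hash="0" * 64)
second = make_block(["clip saved: v2, 5s-8s"], prev_hash=genesis["hash"])
```

Verifying the chain amounts to recomputing each block's hash and checking it matches the `prev_hash` stored in the following block.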
Referring to fig. 3, fig. 3 is a schematic structural diagram of a terminal 100 according to an embodiment of the present application, where the terminal 100 shown in fig. 3 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in terminal 100 are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 3.
The processor 310 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310. The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 350 described in embodiments herein is intended to comprise any suitable type of memory. In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for communicating to other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
an input processing module 353 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates a video processing apparatus 354 stored in the memory 350, where the video processing apparatus 354 may be a video processing apparatus in the terminal 100-1, and may be software in the form of programs and plug-ins, and the like, and includes the following software modules: the first receiving module 3541, the responding module 3542, the first obtaining module 3543, and the saving module 3544, which are logical and thus may be arbitrarily combined or further separated depending on the functionality implemented. The functions of the respective modules will be explained below.
In other embodiments, the video processing device 354 may also be a video processing device in the terminal 100-2, and it may also be software in the form of programs and plug-ins, and includes the following software modules: a second receiving module, a second obtaining module and a third obtaining module (not shown in the figure).
In other embodiments, the apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the video processing method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The video processing method provided by the embodiment of the present application will be described below in conjunction with an exemplary application and implementation of the terminal 100 provided by the embodiment of the present application. Referring to fig. 4, fig. 4 is an alternative flowchart of a video processing method provided in the embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step S401, the terminal receives a video collection instruction.
Here, the terminal 100 may receive the collection instruction through a video client on the terminal; for example, the video client may be a client of a video playing APP. The terminal may be a node in a blockchain network. The collection instruction instructs the terminal to collect content in the video, and may be an instruction sent by the user through the client.
The video may be stored in a blockchain network; it is a complete video pre-stored on the client and may be of any type.
In this embodiment of the application, the collection instruction may be received during playing of the video, or may be received during non-playing of the video, where the collection instruction includes an identifier of the video, and the identifier of the video includes, but is not limited to, at least one of: video name, video type, video author, video source, etc.
In some embodiments, the collection instruction includes, but is not limited to, at least one of: a voice instruction, an operation instruction, a gesture instruction, and a mouth-shape instruction. For example, when the collection instruction is a voice instruction, the terminal may receive the user's voice instruction "collect video". When it is an operation instruction, the terminal may receive a selection operation by the user on the terminal and determine that this operation is the collection instruction. When it is a gesture instruction, the image acquisition device of the terminal may acquire the user's gesture information; when the gesture information corresponds to a preset collection gesture, it is determined to be the collection instruction. Similarly, when it is a mouth-shape instruction, the image acquisition device of the terminal may acquire the user's mouth-shape information; when the mouth-shape information corresponds to a preset collection mouth shape, it is determined to be the collection instruction.
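The four instruction modalities above could all be mapped to the same collection instruction by a simple dispatcher. A minimal illustrative sketch, with every event field and label a hypothetical assumption (the patent does not define a dispatch mechanism):

```python
from typing import Optional

def classify_instruction(event: dict) -> Optional[str]:
    """Map a raw user event to a collection instruction, or None if it is not one."""
    # One predicate per modality named in the text; all keys are hypothetical.
    checks = {
        "voice":     lambda e: "collect" in e.get("transcript", ""),
        "operation": lambda e: e.get("target") == "collect_button",
        "gesture":   lambda e: e.get("gesture") == "preset_collect_gesture",
        "mouth":     lambda e: e.get("mouth_shape") == "preset_collect_shape",
    }
    check = checks.get(event.get("type"))
    return "collection" if check and check(event) else None
```

Whatever the modality, the terminal ends up with the same collection instruction and proceeds to step S402.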
Step S402, responding to the collection instruction, and determining a starting time point and an ending time point corresponding to the video clips to be collected.
Here, when the terminal receives the collection instruction, in response to the collection instruction, a start time point and an end time point corresponding to the video segments to be collected are determined, where the start time point and the end time point may both be located at any position in the entire progress of the video, and the start time point is located before the end time point.
In this embodiment of the present application, the start time point is a start collection time point of the video, the end time point is an end collection time point of the video, and when the collection of the video is performed, the video content between the start time point and the end time point is collected. Wherein a video duration between the start time point and the end time point is less than a total duration of the video.
Step S403, acquiring the video segment according to the start time point and the end time point.
Here, the video content between the start time point and the end time point is determined as a video clip to be collected, wherein the video clip is a part of the video content of the video and is a video clip composed of a plurality of frames of images in the video.
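As a minimal sketch (all names are illustrative, not from this application), step S403 can be modeled as selecting the frames of the video whose timestamps fall between the start time point and the end time point:

```python
def extract_clip(frame_timestamps, start, end):
    """Return indices of frames whose timestamps lie in [start, end].

    The start time point must precede the end time point, mirroring the
    constraint described in the embodiment.
    """
    if not 0 <= start < end:
        raise ValueError("start time point must precede end time point")
    return [i for i, t in enumerate(frame_timestamps) if start <= t <= end]

# A toy 1-frame-per-second video of 6 seconds:
timestamps = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
clip_frames = extract_clip(timestamps, 1.0, 3.0)
```

In a real client the clip would of course reference the source video rather than copy frames; this sketch only illustrates the time-range selection.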
It should be noted that the video clip may be key content in the video, or content in the video that the user focuses on. For example, the video may be an educational video containing the explanation of a plurality of knowledge points, and the user may intercept the part of the video corresponding to a knowledge point that the user does not understand or considers important, to obtain the video segment related to that knowledge point.
And S404, saving the video clip.
Here, after the video segment is obtained by interception, the video segment may be saved for facilitating subsequent fast search of the video segment, and in this embodiment of the present application, the video segment may be stored in a blockchain network.
According to the video processing method provided by the embodiment of the application, the video clip is formed according to the starting time point and the ending time point, and the duration of the video clip between these two points is less than the total duration of the video, so that only the video content between the starting time point and the ending time point is collected. In this way, only the content the user cares about, or considers important, is collected, which facilitates subsequent searching for that content, greatly improves the user's efficiency in video collection and video search, and improves the user experience.
Referring to fig. 5, fig. 5 is an alternative flow chart of a video processing method provided in the embodiment of the present application, where the method includes the following steps:
step S501, the terminal receives a video collection instruction input by a user.
Here, the collection instruction is used to instruct starting the collection of the video. The collection instructions include, but are not limited to, at least one of: a voice start instruction, an operation start instruction, a gesture start instruction, and a mouth shape start instruction.
Step S502, in the playing process of the video, the terminal determines the playing time point when the collection instruction is received as the starting time point.
Here, according to the collection instruction, the playing time point at which the collection instruction is received is determined as the start time point.
The embodiment of the application can correspond to the following scenes: here, taking the collection instruction as an operation start instruction as an example, the user plays a certain video a through the video APP, and an action trigger button of "collecting video clips" may be included on a playing interface of the video APP, and when the user clicks the trigger button, a time point played by the video a at the current time is determined as a start time point of video clip collection.
In step S503, after the start time point, an end time point corresponding to the start time point is determined.
Here, the end time point is located after the start time point, and an end time point is determined among the play time points of the video after the start time point is determined.
In this embodiment of the present application, the determining the ending time point in step S503 may be implemented in any one of the following two manners:
the first method is as follows: in step S5030, a collection termination instruction is received.
Step S5031, after the start time point, determining the play time point when the collection ending instruction is received as the end time point.
Here, after the collection instruction has been received and the start time point is determined, the playing time point at which the collection ending instruction is received is determined as the end time point corresponding to the start time point. The collection ending instruction includes, but is not limited to, at least one of: a voice ending instruction, an operation ending instruction, a gesture ending instruction, and a mouth shape ending instruction.
Here, the description takes the case where the collection ending instruction is an operation ending instruction as an example. After the operation start instruction is received and the start time point is determined, an "end" button may be displayed on the play interface of the video APP; at this time, the video is still in a playing state and continues to play. When playback reaches the time point at which the user wants to end the collection, the user may click the "end" button; this click is the operation ending instruction, and when the terminal receives it, the playing time point of the video at the current moment is determined as the end time point of the video clip collection.
The second method comprises the following steps: step S5032, determining the playing time point with a preset duration from the starting time point as the ending time point.
Here, the duration of the video segments may be fixed, that is, the duration of each video segment is a predetermined duration, and then, after the start time point is determined, the play time point, which is a predetermined duration from the start time point, is determined as the end time point.
In some embodiments, after the collection instruction has been received and the start time point is determined, the video may be in a pause playing state, pause a current interface to a video interface corresponding to the start time point, and display an end time point which is a preset time length away from the start time point; or, in other embodiments, after the collection instruction has been received and the start time point is determined, the video is still in a playing state and continues to be played, but an end time point which is a preset time length away from the start time point is displayed while the video is played.
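The second way above (a fixed preset duration) can be sketched as follows; clamping to the end of the video reflects the interface example later in this application, where point B defaults to the end of the video when fewer than the preset seconds remain (names are illustrative):

```python
def fixed_duration_end_point(start, preset_duration, total_duration):
    """End time point = start time point + preset duration,
    clamped to the end of the video."""
    if not 0 <= start < total_duration:
        raise ValueError("start time point must lie within the video")
    return min(start + preset_duration, total_duration)

# Preset duration of 15 seconds, as in the interface example:
end_a = fixed_duration_end_point(10, 15, 60)  # well inside the video
end_b = fixed_duration_end_point(50, 15, 60)  # clamped to the video end
```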
And step S504, the terminal acquires the video clip according to the starting time point and the ending time point.
Here, the video content between the start time point and the end time point is determined as the video segment to be collected. In this embodiment of the application, after the start time point and the end time point are determined, a selection button may be displayed on the current interface of the video APP, where the selection button is used to ask whether the user collects the video clip between the currently determined start and end time points. For example, two buttons, "confirm collection" and "cancel collection", may be displayed on the current interface of the video APP. When the user clicks the "confirm collection" button, it indicates that the user confirms collecting the video clip between the currently determined start time point and end time point; the video clip to be collected is then determined according to these time points and the subsequent collection process is completed. When the user clicks the "cancel collection" button, it indicates that the user does not want to collect the video segment between the currently determined start time point and end time point; the collection process can then be ended, or the user can be asked whether to re-determine a new start time point and end time point to form and collect a new video segment.
And step S505, the terminal sends the attribute information of the video clip to a server.
Here, the attribute information of the video clip includes, but is not limited to, at least one of: the start time point, the end time point, the name of the video segment, the keywords of the video segment, and an identification of the video, wherein the identification of the video includes but is not limited to at least one of: the video source may be a Uniform Resource Locator (URL) of the video, and the video may be acquired through a URL address of the video.
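The attribute information sent in step S505 might be represented as a simple record like the one below; the field names and URL are illustrative assumptions, not defined by this application:

```python
# A hypothetical attribute-information record for one collected clip.
clip_attributes = {
    "start_time": 120.0,  # start collection time point, in seconds
    "end_time": 135.0,    # end collection time point, in seconds
    "clip_name": "knowledge point 3",
    "keywords": ["integration", "substitution"],
    "video": {
        "name": "Calculus Lecture 5",
        "source": "https://example.com/videos/calc5.mp4",  # illustrative URL
    },
}
```

The clip itself need not be re-encoded: the start/end time points plus the video's URL are enough for the server to reproduce the segment on demand.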
Step S506, the server stores the attribute information of the video clip in a preset storage unit.
Here, after the server receives the attribute information of the video clip, the attribute information is stored in a preset storage unit. The preset storage unit is preset for storing attribute information of video clips and videos, and contains at least one of: the attribute information of video clips collected by the user, and the videos collected and stored by the user.
In some embodiments, the preset storage unit may include a favorites list dedicated to video clips and a favorites list dedicated to complete videos. After the server receives the attribute information, it first determines whether the attribute information describes a video clip or a video; if it describes a video clip, the server stores it in the favorites list dedicated to video clips. In this way, video clips and videos are collected separately, which makes it convenient to quickly find a video clip or a video in a subsequent search.
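A toy sketch of this routing step; the discriminator (presence of start/end time points) is an assumption for illustration, since the application does not specify how the server distinguishes the two kinds of attribute information:

```python
def store_attribute_info(attrs, clip_favorites, video_favorites):
    """Route attribute information to the favorites list for clips or the
    one for whole videos. Here a clip is assumed to be recognizable by the
    presence of start/end time points (an illustrative discriminator)."""
    is_clip = "start_time" in attrs and "end_time" in attrs
    (clip_favorites if is_clip else video_favorites).append(attrs)
    return is_clip
```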
According to the video processing method provided by the embodiment of the application, the terminal receives the collection instruction and the collection finishing instruction of the user, so that the initial collection position and the collection finishing position of the video content concerned by the user are determined according to the instruction input by the user, the video clip required to be collected by the user can be accurately determined, and the video clip concerned by the user can be accurately collected.
Based on fig. 4, referring to fig. 6, fig. 6 is an optional flowchart of the video processing method according to the embodiment of the present application, and after the start time point and the end time point are determined in step S402, the method may further include the following steps:
step S600, receiving an update instruction input by a user.
Here, the update instruction is to instruct to update at least one of the start time point and the end time point.
Step S601, in response to the update instruction, the terminal acquires the video segment according to at least one of the updated start time point and the updated end time point.
Here, the user may request to update at least one of the start time point and the end time point through the update instruction, and therefore, the embodiment of the present application may correspond to the following three scenarios:
scene one: the update instruction is used for indicating to update the starting time point. Then, in step S601, in response to the update instruction, an updated start time point is determined, and the video segment is acquired according to the updated start time point and the end time point.
For example, after the start time point and the end time point are determined, a progress bar of the video, a first position of the start time point in the progress bar of the video, and a second position of the end time point in the progress bar of the video may be displayed on a current interface of the video APP, and then when a user wants to update the start time point, the progress bar may be dragged to change the first position of the start time point in the progress bar, so as to obtain an updated start time point. When the starting time point is located at the starting point of the progress bar of the video, the user can only drag the progress bar backwards, and when the starting time point is located at any position in the middle of the progress bar of the video, the user can drag the progress bar forwards and can also drag the progress bar backwards, wherein the forward direction is a direction closer to the starting playing position of the video, and the backward direction is a direction closer to the ending playing position of the video.
Scene two: the update instruction is used for indicating to update the end time point. Then, in step S601, in response to the update instruction, an updated end time point is determined, and the video segment is acquired according to the start time point and the updated end time point.
For example, after the start time point and the end time point are determined, a progress bar of the video, a first position of the start time point in the progress bar of the video, and a second position of the end time point in the progress bar of the video may be displayed on a current interface of the video APP, and then when the user wants to update the end time point, the progress bar may be dragged to change the second position of the end time point in the progress bar, so as to obtain the updated end time point. When the ending time point is located at the end point of the progress bar of the video, the user can only drag the progress bar forwards, and when the ending time point is located at any position in the middle of the progress bar of the video, the user can drag the progress bar forwards or backwards.
Scene three: the update instruction is used for indicating to update the starting time point and the ending time point. Then, in step S601, in response to the update instruction, an updated start time point and an updated end time point are determined, and the video segment is acquired according to the updated start time point and the updated end time point.
For example, after the start time point and the end time point are determined, the progress bar of the video, the first position of the start time point in the progress bar, and the second position of the end time point in the progress bar may be displayed on the current interface of the video APP. When the user wants to update both time points, the progress bar may be dragged to change the second position after the first position is changed, or to change the first position after the second position is changed, or to change the first position and the second position simultaneously. When the starting time point is located at the starting point of the progress bar of the video, the user can only drag the progress bar backwards, and when the starting time point is located at any position in the middle of the progress bar, the user can drag it forwards or backwards; when the ending time point is located at the end point of the progress bar, the user can only drag the progress bar forwards, and when the ending time point is located at any position in the middle of the progress bar, the user can drag it forwards or backwards.
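The three update scenarios above can be sketched as a single helper that clamps the dragged points to the progress bar and keeps the start before the end (a sketch under those assumptions, not the application's implementation):

```python
def update_time_points(start, end, total_duration, new_start=None, new_end=None):
    """Apply an update instruction to one or both time points, keeping them
    inside the video's progress bar and keeping start before end."""
    s = start if new_start is None else max(0.0, new_start)
    e = end if new_end is None else min(total_duration, new_end)
    if not s < e:
        raise ValueError("updated start time point must precede end time point")
    return s, e
```

Scenario one passes only `new_start`, scenario two only `new_end`, and scenario three passes both.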
And S404, saving the video clip.
According to the video processing method provided by the embodiment of the application, after the start time point and the end time point are determined, the user can adjust and update the start time point and the end time point as required, so that an updated video clip is obtained, further requirements of the user are met, and user experience is improved.
Based on fig. 5, the attribute information may further include a tag of the video clip and a URL address of the video. Referring to fig. 7, fig. 7 is an optional flowchart of a video processing method according to an embodiment of the present application, and after step S504, the method may further include the following steps:
step S701, the terminal acquires the label of the video clip and the URL address of the video.
Here, the tags of the video clips include at least one of: the name of the video clip, the display picture, and the keywords. The name of the video clip may be a custom name input by the user, a default name automatically generated by the client, a name generated according to the name of the video, or directly the name of the video itself. The display picture may be any one image frame in the video clip, a picture synthesized from multiple image frames, a picture customized by the user, or a default picture automatically generated by the client. The keywords may be generated according to the playing content of the video clip, or according to the playing content of the video.
In the embodiment of the present application, the tags of the video segments may be obtained in any one of the following manners:
the first method is as follows: and acquiring a label of the video clip input by a user.
Here, the tags of the video clips may be names, presentation pictures, and keywords directly input by the user.
The second method comprises the following steps: and determining the label of the video clip according to the keyword corresponding to the video clip.
Here, it is first required to determine a keyword corresponding to the video segment, and use the determined keyword as a tag of the video segment, and an embodiment of the present application further provides a method for determining a keyword corresponding to a video segment, including the following steps:
step S7011, the image frames of the video clips are identified to obtain video information.
In the embodiment of the application, firstly, a plurality of image frames corresponding to the video segments are determined, and then, image recognition is performed on each image frame to obtain image information and text information corresponding to the image frames; and finally, summarizing and analyzing the image information and the text information corresponding to each image frame to obtain the image information and the text information corresponding to the video clip.
Step S7012, voice recognition is carried out on the video clips to obtain voice information.
Here, the voice of the video segment may be subjected to voice recognition, so as to obtain voice information corresponding to the voice. For example, Artificial Intelligence (AI) techniques can be used to recognize speech in the video segment.
And step S7013, determining keywords corresponding to the video clips according to the video information and the voice information.
Here, AI technology may also be adopted to analyze the video information and the voice information and determine the keywords corresponding to the video clips.
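A toy stand-in for the analysis in steps S7011 to S7013: real systems would run image recognition and speech recognition models, but assuming both already yield text, the keyword step might look like this frequency heuristic (purely illustrative):

```python
from collections import Counter

def clip_keywords(frame_texts, speech_text, top_n=2):
    """Merge text recognized from the image frames with transcribed speech,
    then pick the most frequent words as the clip's keywords (toy heuristic)."""
    words = []
    for text in frame_texts + [speech_text]:
        words.extend(text.lower().split())
    return [w for w, _ in Counter(words).most_common(top_n)]
```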
The third method comprises the following steps: setting a predefined tag for the video clip.
Here, when the user does not input a tag for the video clip and no tag is recognized using AI techniques, the video clip may be given a predefined tag by default; for example, a default name of "video clip x" may be set for the video clip.
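The three ways above form a fallback chain, which can be sketched as (names are illustrative):

```python
def clip_tag(user_tag=None, ai_keywords=None, clip_index=1):
    """Pick a tag per the three ways above: the user's input first, then an
    AI-derived keyword, then a predefined default name."""
    if user_tag:
        return user_tag
    if ai_keywords:
        return ai_keywords[0]
    return f"video clip {clip_index}"
```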
In some embodiments, step S505 and step S506 may be implemented by:
step S702, the terminal sends the label of the video clip and the URL address of the video to the server.
Step S703, the server stores the tag of the video clip and the URL address of the video in a preset storage unit.
Here, when the server receives the tag of the video clip and the URL address of the video, the tag of the video clip and the URL address of the video are associated to obtain an association relationship, and the tag of the video clip, the URL address of the video, and the association relationship are stored in the preset storage unit together.
According to the video processing method provided by the embodiment of the application, the labels of the video segments and the URL addresses of the videos are stored in the preset storage unit, so that when the video segments need to be searched, the video segments can be determined by searching the labels of the video segments, the URL addresses of the videos forming the video segments are obtained according to the searched labels of the video segments, and the video segments are accurately obtained through the URL addresses.
Based on fig. 4, the number of the video segments to be collected may be one or more, and when there are a plurality of video segments to be collected, referring to fig. 8, step S404 may be implemented by:
step S801, when the video clips to be collected are multiple and the multiple video clips have the same tag, merging the multiple video clips to form a spliced video.
Here, a plurality of video segments having the same label may mean that any one kind of label is shared; for example, when at least one of the following is the same across the segments: the name of the video clip, the display image of the video clip, or the keywords of the video clip, it may be determined that the plurality of video clips have the same tag.
In the embodiment of the present application, if a plurality of video clips have the same label, it indicates that they belong to the same type of video clip or relate to the same topic, so they can be merged into a spliced video. When merging, the video segments may be concatenated in sequence: the ending time point of the first video segment is connected with the starting time point of the second, the ending time point of the second with the starting time point of the third, and so on until all the video segments are merged.
In some embodiments, when merging the video segments, a prompt box for determining a video splicing order may be sent to the user, in which the user is requested to input a splicing order for each video segment, and after the user inputs the splicing order, all the video segments are sequentially merged according to the splicing order of each video segment input by the user.
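The sequential concatenation and the optional user-supplied splicing order can both be sketched with one helper (segments are represented abstractly as lists of frames; names are illustrative):

```python
def splice_segments(segments, order=None):
    """Concatenate same-label segments end to start. `order` is an optional
    user-supplied splicing order, given as indices into `segments`."""
    sequence = segments if order is None else [segments[i] for i in order]
    spliced = []
    for seg in sequence:
        spliced.extend(seg)
    return spliced
```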
Step S802, determining the same label as the label of the spliced video.
Here, the shared label may be at least one of: the same name of the video clips, the same display image of the video clips, or the same keyword of the video clips; that shared name, display image, or keyword is then determined as the label of the spliced video.
And step S803, storing the label of the spliced video into a preset storage unit.
Here, after the video segments are merged to form the spliced video, the spliced video is collected, and its tag is stored in a preset storage unit, where the preset storage unit may be a favorites list dedicated to spliced videos, or a favorites list dedicated to video clips.
The video processing method provided by the embodiment of the application can splice a plurality of video clips with the same label to obtain a spliced video, and the embodiment of the application corresponds to the following scenes: when a user collects a plurality of video clips, and the video clips are education videos related to the same knowledge point, the video clips can be spliced into a spliced video, so that when the user searches and views the video related to the knowledge point in the later period, the user can view the spliced video formed by the video clips without interruption. Or, when a plurality of knowledge points are involved in the currently played video, and the same knowledge point is explained in different time periods of the video, a plurality of video segments related to the same knowledge point in the video can be intercepted, and the plurality of video segments are spliced into a spliced video, so that the user can watch the same knowledge point in a connected manner.
Fig. 9 is an optional flowchart of the video processing method according to the embodiment of the present application, and as shown in fig. 9, a method for searching for a video clip is provided, where the searching for a video clip may be performed after the user collects the video clip by himself or may be performed directly on the video clip collected by another person. The method comprises the following steps:
step S901, the terminal receives a search request, where the search request includes search information.
Here, the search information may include, but is not limited to, at least one of: search pictures, search keywords, search names, and the like.
Step S902, in a preset storage unit, acquiring a tag of the video clip matching the search information.
Here, the search information is matched in a preset storage unit, and when a tag corresponding to the search information is matched, a video clip corresponding to the tag is used as a search result of the search request.
And step S903, acquiring the video clip according to the label of the video clip and playing the video clip.
Here, since the tags of the video clips and the URL addresses of the videos forming them are stored in the preset storage unit, when the tag of a video clip is found, the URL address is obtained according to the correspondence between the tag and the URL address, and the video clip is acquired through the URL and played.
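Steps S901 to S903 amount to a lookup from tag to URL; a minimal sketch, with illustrative tags and URLs (a real system would use the preset storage unit rather than an in-memory dict):

```python
favorites = {
    # tag -> URL of the video forming the clip (URLs are illustrative)
    "knowledge point 3": "https://example.com/videos/calc5.mp4",
    "opening scene": "https://example.com/videos/movie.mp4",
}

def find_clip_url(favorites, search_info):
    """Match the search information against stored tags; return the URL of
    the first matching clip, or None if nothing matches."""
    for tag, url in favorites.items():
        if search_info.lower() in tag.lower():
            return url
    return None
```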
In the embodiment of the present application, a video duration between a start time point of the video segment and an end time point of the video segment is less than a total duration of videos forming the video segment.
According to the video processing method provided by the embodiment of the application, the labels and the URL addresses of the video clips are stored in the preset storage unit, so that the video clips can be directly searched in the preset storage unit and played, a complete video does not need to be searched, the step that a user searches the video clips in the complete video can be reduced, the searching efficiency of the video clips is improved, and the user experience is improved.
In some embodiments, when the video segment is matched according to the search information, the tag for displaying on the current interface may be further redetermined according to the searched video segment, wherein the determination method includes:
step S910, identifying the image frames of the video clip to obtain video information.
Here, the video information includes image information and text information, and in the embodiment of the present application, the manner of determining the video information is the same as the manner of determining the video information in step S7011, except that the determination of the video information in step S7011 is to determine a tag of a video clip to be collected, and the determination of the video information is to determine a tag of a searched video clip.
And step S911, performing voice recognition on the video clip to obtain voice information.
Here, AI techniques may be employed to recognize speech in the video segment.
Step S912, determining keywords corresponding to the video segment according to the video information and the voice information.
Here, AI technology may also be adopted to analyze the video information and the voice information and determine the keywords corresponding to the video clips.
Step S913, displaying the label of the video segment determined according to the keyword on the current interface.
In the embodiment of the application, after the tag of the video segment is determined, the tag is displayed on the current interface, so that a user can determine whether the video segment is a video segment that the user wants to watch.
In some embodiments, a plurality of video clips may be matched through the search information, that is, there may be a plurality of video clips corresponding to the same search information, and at this time, the searched video clips may be displayed, where the display method includes the following two methods:
the display mode is as follows: step S920, when the tag matched with the search information corresponds to a plurality of video segments, respectively displaying a plurality of tags of each of the plurality of video segments on a current interface.
And a second display mode: and step S921, merging the video segments to form a spliced video, and displaying the matched label on the current interface.
Here, when merging a plurality of video segments, the following four ways can be adopted:
First, according to the order in which the video clips were found, the video clip found first is merged before the video clip found later.
Secondly, the matching degree between each video segment in the plurality of video segments and the search information is determined, and the video segments with high matching degree are merged before the video segments with low matching degree.
Thirdly, the plurality of video clips are concatenated in a random order.
Fourthly, when the video clips are combined, a prompt box for determining the splicing order of the video clips is sent to the user; the user is requested to input the splicing order of each video clip in the prompt box, and after the user inputs the order, all the video clips are merged in sequence according to the splicing order input by the user.
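The second merging order above, by matching degree against the search information, can be sketched as a sort over (segment, score) pairs; the scoring itself is assumed to come from the search matching step:

```python
def order_by_match_degree(scored_segments):
    """Segments with a higher matching degree come first; Python's sort is
    stable, so equally scored segments keep their search order."""
    return [seg for seg, score in
            sorted(scored_segments, key=lambda pair: pair[1], reverse=True)]
```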
In other embodiments, the video clip may also be downloaded, and the step of downloading the video clip includes:
step S930, receiving a download instruction, where the download instruction includes an identifier of the video.
Here, the identification of the video includes a tag of the video clip.
In step S931, in a preset storage unit, the URL address of the video is obtained according to the identifier of the video.
And step S932, acquiring and downloading the video clip according to the URL address of the video.
According to the video processing method provided by the embodiment of the application, when the video is downloaded, the whole video does not need to be downloaded, only the video clip concerned by the user or the video clip considered as important by the user is downloaded, the downloading efficiency can be improved, the flow and the cost are greatly saved, and when the downloaded video clip is watched by the subsequent user, the clip corresponding to the important content can be directly watched, the user does not need to search the video clip to be watched in the whole video, so that the user experience is further improved.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiments of the present application provide a video processing method by which a user can save a chosen segment of an online video directly to a favorites list. After collecting a single video clip, the user can click the clip to play it directly, without searching through the video bit by bit. The user can also name the video clip at collection time, which makes subsequent retrieval easier.
Fig. 10A is an alternative interface diagram of the video processing method according to an embodiment of the present application. As shown in fig. 10A, an action trigger button 1011 for "collecting video clips" is placed at a suitable position on the video playing interface 1010. When the user clicks the action trigger button 1011, a progress bar 1012 and a default selection (starting point A to ending point B) are displayed on the video, as shown in fig. 10B. By default, point A (corresponding to the starting time point) is the current playing progress of the video; if playback has not started, point A is located at 0 hours 0 minutes 0 seconds. Point B (corresponding to the ending time point) defaults to a position a preset duration (for example, 15 seconds) after point A; if point A is less than 15 seconds from the end of the video, point B is located at the end of the video.
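The default placement of points A and B described above reduces to a small clamping computation. A minimal sketch, with illustrative function and parameter names and times in seconds:

```python
def default_selection(current_pos, video_len, preset=15.0):
    """Default A/B points: A is the current playing progress (0 if playback
    has not started); B is A plus the preset duration, clamped so it never
    runs past the end of the video."""
    a = current_pos or 0.0            # not yet playing -> 00:00:00
    b = min(a + preset, video_len)    # A + 15 s, or the video's end if closer
    return a, b
```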
Referring to fig. 10B, the user can change the positions of points A and B by clicking and dragging; the video clip to be collected is the portion between points A and B, and it is highlighted. Meanwhile, an input box 1013 is displayed below the video, in which the user may enter a name for the video clip; if no name is entered, the default name "video clip x" is used.
With continued reference to fig. 10B, a confirm button 1014 for "confirm favorites" and a cancel button 1015 for "cancel" may also be displayed below the video. When the user clicks the confirm button 1014, the video clip is added to the favorites list; when the user clicks the cancel button 1015, the video clip selection interface is exited.
In the embodiment of the present application, the collected video clips are displayed in the favorites list 1016 as pictures and names of the video clips, as shown in fig. 10C; the names of the original videos are displayed as well.
When a user searches for a video clip, both the name of the video clip and the name of the video can be matched. After the user clicks a searched video clip, the playing interface is entered directly, as shown in fig. 10D: the selected video clip is highlighted, and playback is positioned at its starting point and begins there. Of course, the user may also play the parts of the video outside the selected clip by dragging the progress bar.
Fig. 11 is an alternative flowchart of a video processing method according to an embodiment of the present application, and as shown in fig. 11, the method includes the following steps:
in step S1101, the user plays the video through the client on the terminal.
In step S1102, when the user clicks the "collect video clip" button, the video is paused, and a progress bar and a default selection progress (starting point a to ending point B) are displayed on the video.
Here, point A defaults to the current playing progress of the video, and is located at 0 hours 0 minutes 0 seconds if playback has not started; point B defaults to a position a preset duration (for example, 15 seconds) after point A, and is located at the end of the video if point A is less than 15 seconds from the end. The video clip to be collected lies between points A and B, and the embodiment of the present application may highlight the selected clip, for example by brightening it or displaying it with a different color bar.
In step S1103, the user may change the positions of points A and B by repeatedly clicking and dragging.
In step S1104, when the user clicks the "cancel" button, all selection-related elements are hidden, and execution returns to step S1101.
In step S1105, when the user clicks the "confirm favorites" button, it is determined whether the duration between point A and point B is less than a specific duration (for example, 5 seconds).
If yes, step S1106 is executed; if no, step S1107 is executed.
In step S1106, the user is prompted that the selected segment must be at least the specific duration (e.g., 5 seconds) long.
In step S1107, it is determined whether the user has entered a name for the video clip.
If yes, step S1108 is executed; if no, step S1110 is executed.
Step S1108, sending the address of the video, the start time point (i.e., point a) of the video segment, the end time point (i.e., point B) of the video segment, the name of the video segment, and other attribute information to the server.
Step S1109, the server stores the attribute information in a preset storage unit.
Here, the preset storage unit may be a favorite list for favorite video clips, or any other storage unit capable of storing video clips.
In step S1110, a default name of the video clip is given. For example, the default name may be "video clip x".
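Steps S1105 to S1110 above amount to a validation-and-defaulting step on the client before the attributes are sent to the server in step S1108. The following is a sketch under the example values in the text (5-second minimum, default name "video clip x"); the function and field names are hypothetical:

```python
MIN_CLIP_SECONDS = 5.0   # threshold checked in step S1105 (example value)

def confirm_favorite(a, b, name, clip_index):
    """Steps S1105-S1110: reject clips shorter than the minimum duration,
    fall back to the default name "video clip x" when none was entered, and
    build (part of) the attribute record that step S1108 sends to the server."""
    if b - a < MIN_CLIP_SECONDS:
        # S1106: prompt that the selected segment is too short
        raise ValueError(f"selected segment must be at least {MIN_CLIP_SECONDS} s long")
    if not name:
        name = f"video clip {clip_index}"   # S1110: default name
    return {"start": a, "end": b, "name": name}   # S1108 payload (partial)
```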
In some embodiments, the method may further comprise a step of retrieving the video segment, including:
In step S1111, after retrieval starts, when it is detected that the user clicks a video clip in the favorites list, the client requests the attribute information of the video clip from the server.
In step S1112, the server issues attribute information to the terminal.
Step S1113, locate the starting point of the video segment and start playing the video segment.
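The retrieval steps S1111 to S1113 can be sketched as follows; the `Player` class and the `request_attributes` callback are stand-ins for the real client player and the client-server request, not part of the patent:

```python
class Player:
    """Minimal stand-in for the client's video player."""
    def __init__(self):
        self.position = 0.0
        self.playing = False
    def seek(self, t):
        self.position = t
    def play(self):
        self.playing = True

def play_favorite(tag, request_attributes, player):
    """Steps S1111-S1113: request the clip's attribute information from the
    server, seek to the clip's starting time point, and begin playback."""
    attrs = request_attributes(tag)   # S1111 client request / S1112 server reply
    player.seek(attrs["start"])       # S1113: locate the starting point
    player.play()
    return attrs
```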
With the video processing method provided by the embodiments of the present application, the user can conveniently capture the video clips he or she wants to collect from an online video, and subsequent retrieval is easy, improving the experience of collecting videos. Especially for videos that are watched repeatedly, such as tutorial and educational products, the ability to collect important segments can greatly improve the user experience and the product's word of mouth.
In the method of the embodiments of the present application, the length of a video clip can be user-defined, and the minimum duration need not be limited; the user is not restricted to capturing one clip per video, and multiple video clips can be captured and collected at the same time; during retrieval, both the titles of the original videos and the titles of the video clips can be searched, making clips easy for the user to find; and the title of a video clip can be a piece of text, or the clip can be labeled with keywords.
Continuing with the exemplary structure of the video processing apparatus 354 implemented as software modules provided in the embodiments of the present application: in some embodiments, as shown in fig. 3, the software modules of the video processing apparatus 354 stored in the memory 350 may constitute a video processing apparatus in the terminal 100-1, including:
a first receiving module 3541, configured to receive a video collection instruction;
a response module 3542 for determining a start time point and an end time point corresponding to the video segments to be collected; wherein the duration of the video clips to be collected is less than the total duration of the video;
a first obtaining module 3543, configured to obtain the video segment according to the start time point and the end time point;
a saving module 3544, configured to save the video clip.
In some embodiments, the response module is further to: in the playing process of the video, determining the playing time point when the collection instruction is received as the starting time point; after the start time point, an end time point corresponding to the start time point is determined.
In some embodiments, the apparatus further comprises: the third receiving module is used for receiving a collection finishing instruction; correspondingly, the response module is further configured to: and after the starting time point, determining the playing time point when the collection finishing instruction is received as the finishing time point.
In some embodiments, the response module is further to: and determining the playing time point which is a preset time length away from the starting time point as the ending time point.
In some embodiments, the apparatus further comprises: a fourth receiving module, configured to receive an update instruction, where the update instruction is used to instruct to update at least one of the start time point and the end time point; and the second response module is used for responding to the updating instruction and acquiring the video clip according to at least one of the updated starting time point and the updated ending time point.
In some embodiments, the save module is further to: acquiring a label of the video clip and a URL (uniform resource locator) address of the video; and sending the label of the video clip and the URL address of the video to a server so that the server stores the label of the video clip and the URL address of the video in a preset storage unit.
In some embodiments, the save module is further to: acquire an input label of the video clip; or determine the label of the video clip according to the keyword corresponding to the video clip; or set a predefined label for the video clip.
In some embodiments, the apparatus further comprises: the image identification module is used for identifying the image frames of the video clips to obtain video information; the voice recognition module is used for carrying out voice recognition on the video clip to obtain voice information; and the determining module is used for determining the keywords corresponding to the video clips according to the video information and the voice information.
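One simple way the determining module could combine the video information and the voice information into keywords is frequency counting over the recognized words. This is an illustrative sketch, not the patent's method; the actual image and speech recognizers are assumed to exist upstream:

```python
from collections import Counter

def clip_keywords(video_info, voice_info, top_n=3):
    """Merge the words recognized from the clip's image frames (video
    information) with those from speech recognition (voice information)
    and keep the most frequent ones as the clip's keywords."""
    counts = Counter(video_info) + Counter(voice_info)
    return [word for word, _ in counts.most_common(top_n)]
```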
In some embodiments, the save module is further to: when the number of the video clips to be collected is multiple and the multiple video clips have the same label, combining the multiple video clips to form a spliced video; determining the same label as the label of the spliced video; and storing the label of the spliced video into a preset storage unit.
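The grouping-and-splicing step for clips that share a label can be sketched as follows, with clips modeled as (label, content) pairs and string concatenation standing in for actual video joining; the names are illustrative:

```python
def merge_same_label(clips):
    """When several clips to be collected carry the same label, splice them
    into one video keyed by that shared label."""
    merged = {}
    for label, content in clips:
        merged.setdefault(label, []).append(content)
    # "+".join stands in for concatenating the video segments themselves
    return {label: "+".join(parts) for label, parts in merged.items()}
```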
In some embodiments, the tags of the video segments include at least one of: the name of the video clip, the display picture and the keywords.
In other embodiments, the video processing device 354 may also be a video processing device in the terminal 100-2, and it may also be software in the form of programs and plug-ins, and includes the following software modules: a second receiving module, configured to receive a search request, where the search request includes search information; the second acquisition module is used for acquiring the label of the video clip matched with the search information in a preset storage unit; the third acquisition module is used for acquiring and playing the video clip according to the label of the video clip; wherein a video duration between a start time point of the video segment and an end time point of the video segment is less than a total duration of videos forming the video segment.
In some embodiments, the apparatus further comprises: a fifth receiving module, configured to receive a download instruction, where the download instruction includes an identifier of the video; a fourth obtaining module, configured to obtain, in the preset storage unit, a URL address of the video according to the identifier of the video; and the fifth acquisition module is used for acquiring and downloading the video clip according to the URL address of the video.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the storage medium may be a Ferroelectric Random Access Memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A video processing method, comprising:
receiving a video collection instruction;
responding to the collection instruction, and determining a starting time point and an ending time point corresponding to the video clips to be collected; wherein the duration of the video clips to be collected is less than the total duration of the video;
acquiring the video clip according to the starting time point and the ending time point; and
and saving the video clip.
2. The method of claim 1, wherein determining a start time point and an end time point corresponding to video segments to be collected in response to the collection instruction comprises:
in the playing process of the video, determining the playing time point when the collection instruction is received as the starting time point;
after the start time point, an end time point corresponding to the start time point is determined.
3. The method of claim 2, further comprising: receiving a collection finishing instruction;
correspondingly, the determining an end time point corresponding to the start time point after the start time point includes:
and after the starting time point, determining the playing time point when the collection finishing instruction is received as the finishing time point.
4. The method of claim 2, wherein determining an end time point corresponding to the start time point after the start time point comprises:
and determining the playing time point which is a preset time length away from the starting time point as the ending time point.
5. The method of claim 1, further comprising:
receiving an update instruction, wherein the update instruction is used for indicating to update at least one of the starting time point and the ending time point;
and responding to the updating instruction, and acquiring the video clip according to at least one of the updated starting time point and the updated ending time point.
6. The method of claim 1, wherein saving the video clip comprises:
acquiring a label of the video clip and a URL (uniform resource locator) address of the video;
and sending the label of the video clip and the URL address of the video to a server so that the server stores the label of the video clip and the URL address of the video in a preset storage unit.
7. The method of claim 6, wherein obtaining the label of the video clip comprises at least one of the following steps:
acquiring an input label of the video clip; alternatively,
determining a label of the video clip according to the keyword corresponding to the video clip; alternatively,
setting a predefined label for the video clip.
8. The method of claim 7, further comprising:
identifying the image frames of the video clips to obtain video information;
carrying out voice recognition on the video clip to obtain voice information;
and determining the keywords corresponding to the video clips according to the video information and the voice information.
9. The method of claim 1, wherein saving the video clip comprises:
when the number of the video clips to be collected is multiple and the multiple video clips have the same label, combining the multiple video clips to form a spliced video;
determining the same label as the label of the spliced video;
and storing the label of the spliced video into a preset storage unit.
10. The method of claim 9, wherein the tags of the video segments comprise at least one of: the name of the video clip, the display picture and the keywords.
11. A video processing method, comprising:
receiving a search request, wherein the search request comprises search information;
acquiring a label of a video clip matched with the search information in a preset storage unit;
acquiring and playing the video clip according to the label of the video clip;
wherein a video duration between a start time point of the video segment and an end time point of the video segment is less than a total duration of videos forming the video segment.
12. The method of claim 11, further comprising:
receiving a downloading instruction, wherein the downloading instruction comprises an identifier of the video;
in the preset storage unit, acquiring a URL (uniform resource locator) address of the video according to the identification of the video;
and acquiring and downloading the video clip according to the URL address of the video.
13. A video processing apparatus, comprising:
the first receiving module is used for receiving a video collection instruction;
the response module is used for responding to the collection instruction and determining a starting time point and an ending time point corresponding to the video clips to be collected; wherein the duration of the video clips to be collected is less than the total duration of the video;
a first obtaining module, configured to obtain the video segment according to the start time point and the end time point;
and the storage module is used for storing the video clips.
14. A video processing apparatus, comprising:
a second receiving module, configured to receive a search request, where the search request includes search information;
the second acquisition module is used for acquiring the label of the video clip matched with the search information in a preset storage unit;
the third acquisition module is used for acquiring and playing the video clip according to the label of the video clip;
wherein a video duration between a start time point of the video segment and an end time point of the video segment is less than a total duration of videos forming the video segment.
15. A video processing apparatus, comprising:
a memory for storing executable instructions; a processor for implementing the method of any one of claims 1 to 10, or 11 or 12, when executing executable instructions stored in the memory.
CN201910995388.6A 2019-10-18 2019-10-18 Video processing method, device, equipment and storage medium Pending CN110650375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910995388.6A CN110650375A (en) 2019-10-18 2019-10-18 Video processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110650375A true CN110650375A (en) 2020-01-03

Family

ID=68994305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910995388.6A Pending CN110650375A (en) 2019-10-18 2019-10-18 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110650375A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182328A (en) * 2020-02-12 2020-05-19 北京达佳互联信息技术有限公司 Video editing method, device, server, terminal and storage medium
CN111930750A (en) * 2020-08-28 2020-11-13 支付宝(杭州)信息技术有限公司 Method and device for carrying out evidence storage on evidence obtaining process video clip
CN112135187A (en) * 2020-07-30 2020-12-25 广州华多网络科技有限公司 Multimedia data generation method, interception method, device, equipment and storage medium
CN112738554A (en) * 2020-12-22 2021-04-30 北京百度网讯科技有限公司 Video processing method and device and electronic equipment
CN113051233A (en) * 2021-03-30 2021-06-29 联想(北京)有限公司 Processing method and device
CN113220935A (en) * 2021-05-28 2021-08-06 杭州海康威视系统技术有限公司 Video data storage and query method and device
WO2022001437A1 (en) * 2020-06-28 2022-01-06 中兴通讯股份有限公司 Multimedia file playback control method and apparatus, terminal device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160275989A1 (en) * 2015-03-16 2016-09-22 OZ ehf Multimedia management system for generating a video clip from a video file
CN108156528A (en) * 2017-12-18 2018-06-12 北京奇艺世纪科技有限公司 A kind of method for processing video frequency and device
CN108966016A (en) * 2018-08-29 2018-12-07 北京奇艺世纪科技有限公司 A kind of method, apparatus and terminal device of video clip replay
CN109194979A (en) * 2018-10-30 2019-01-11 努比亚技术有限公司 The processing method and processing device of audio-video, mobile terminal, readable storage medium storing program for executing
CN109905780A (en) * 2019-03-30 2019-06-18 山东云缦智能科技有限公司 A kind of video clip sharing method and Intelligent set top box
CN110290397A (en) * 2019-07-18 2019-09-27 北京奇艺世纪科技有限公司 A kind of method for processing video frequency, device and electronic equipment



Legal Events

PB01: Publication
REG: Reference to a national code (country of ref document: HK; ref legal event code: DE; ref document number: 40019400)
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200103)