CN111263220A - Video processing method and device, electronic equipment and computer readable storage medium - Google Patents

Video processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111263220A
Authority
CN
China
Prior art keywords
uploading
video data
synthesized
data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010042959.7A
Other languages
Chinese (zh)
Other versions
CN111263220B (en)
Inventor
陶海庆
严冰
宫昀
张聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010042959.7A priority Critical patent/CN111263220B/en
Publication of CN111263220A publication Critical patent/CN111263220A/en
Application granted granted Critical
Publication of CN111263220B publication Critical patent/CN111263220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a video processing method and apparatus, an electronic device, and a computer-readable storage medium, and relates to the field of the Internet. The method includes the following steps: receiving an upload instruction for a target video; synthesizing the target video based on the upload instruction, acquiring the synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server; and repeating the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server until all synthesized video data of the target video has been acquired and uploaded to the preset server. Because synthesis and uploading of the target video proceed concurrently, the total time spent on synthesis and uploading is reduced and the user experience is improved.

Description

Video processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of networks, more and more users like to make videos themselves and upload them to the network to share with others. Making a video generally requires video synthesis: one or more videos are edited, a new video is generated from them, and the new video is then uploaded to the network for sharing.
The time a user spends sharing a homemade video (excluding editing) is therefore the synthesis time of the video plus the upload time of the new video. This takes a long time, especially when the video is large, the terminal's performance is poor, or the network is poor, resulting in a poor user experience.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The disclosure provides a video processing method, a video processing apparatus, an electronic device, and a computer-readable storage medium, which can solve the problem that uploading a self-made video takes a long time. The technical scheme is as follows:
in a first aspect, a method for processing a video is provided, where the method includes:
receiving an upload instruction for a target video;
synthesizing the target video based on the upload instruction, acquiring the synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
and repeating the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server until all synthesized video data of the target video has been acquired and uploaded to the preset server.
In a second aspect, there is provided an apparatus for processing a video, the apparatus comprising:
a receiving module, configured to receive an upload instruction for a target video;
a processing module, configured to synthesize the target video based on the upload instruction, acquire the synthesized video data in real time, and synchronously upload the synthesized video data to a preset server;
the processing module being invoked repeatedly until all synthesized video data of the target video has been acquired and uploaded to the preset server.
In a third aspect, an electronic device is provided, which includes:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is configured to, by invoking the operation instruction, cause the processor to perform an operation corresponding to the video processing method according to the first aspect of the disclosure.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the video processing method shown in the first aspect of the present disclosure.
The technical scheme provided by the disclosure has the following beneficial effects:
in the embodiment of the disclosure, a terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads the synthesized video data to a preset server, repeating the acquisition and upload steps until all synthesized video data of the target video has been acquired and uploaded to the preset server. In this way, once the user initiates the upload instruction on the terminal, the terminal performs synthesis and uploading simultaneously: while synthesizing the target video, it uploads the already-synthesized data to the server in real time. This overlaps the synthesis and upload of the target video, reduces the total time spent on both, and improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video processing method according to another embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating interaction between an application client and an editing SDK and uploading SDK according to the present disclosure;
fig. 4 is a schematic structural diagram of a video processing apparatus according to yet another embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device for processing a video according to yet another embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are used only to distinguish different devices, modules, or units; they neither limit these to being different devices, modules, or units, nor limit the order of or interdependence between the functions they perform.
It should also be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than restrictive; those skilled in the art will understand them to mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged among devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information. The embodiments of the present disclosure are intended to solve the above technical problems of the prior art.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
In one embodiment, a method for processing video is provided, as shown in fig. 1, the method includes:
step S101, receiving an uploading instruction aiming at a target video;
the embodiment of the disclosure can be applied to a terminal, an application client having functions of synthesizing videos and uploading the synthesized videos can be installed in the terminal, and a user can edit target videos, such as cutting, adding special effects and the like, in the application client, and then synthesize the edited videos to obtain the edited videos. The target video may be a video to be synthesized, the video to be synthesized may be a video already stored in the terminal, may also be a video shot by the user through the application client, or may also be another video, which is not limited in this disclosure.
Step S102, synthesizing the target video based on the upload instruction, acquiring the synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
in the disclosed embodiment, two SDKs (Software Development Kit) may be set: editing the SDK and uploading the SDK. The editing SDK and the uploading SDK respectively perform data interaction with the application program client in a buffer mode, receive scheduling of the application program client, edit the SDK for producing data, namely synthesizing the target video, and upload the SDK for consuming data, namely acquiring synthesized video data in real time and uploading the synthesized video data.
Step S103, repeating the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server until all synthesized video data of the target video has been obtained and uploaded to the preset server.
Since the acquisition of the synthesized video data is performed in real time, and the synchronous upload to the server is likewise performed in real time, the acquisition and upload steps must be repeated until the target video has been completely synthesized, yielding the complete synthesized video data, and that data has been completely uploaded to the server.
In the embodiment of the disclosure, a terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads the synthesized video data to a preset server, repeating the acquisition and upload steps until all synthesized video data of the target video has been acquired and uploaded to the preset server. In this way, once the user initiates the upload instruction on the terminal, the terminal performs synthesis and uploading simultaneously: while synthesizing the target video, it uploads the already-synthesized data to the server in real time. This overlaps the synthesis and upload of the target video, reduces the total time spent on both, and improves the user experience.
In another embodiment, a method for processing video is provided, as shown in fig. 2, the method includes:
step S201, receiving an uploading instruction aiming at a target video;
the embodiment of the disclosure can be applied to a terminal, an application client having functions of synthesizing videos and uploading the synthesized videos can be installed in the terminal, and a user can edit target videos, such as cutting, adding special effects and the like, in the application client, and then synthesize the edited videos to obtain the edited videos. The target video may be a video to be synthesized, the video to be synthesized may be a video already stored in the terminal, may also be a video shot by the user through the application client, or may also be another video, which is not limited in this disclosure.
The terminal may have the following characteristics:
(1) In terms of hardware architecture, the device has a central processing unit, memory, an input unit, and an output unit; that is, it is often a microcomputer device with communication capability. It may offer multiple input modes, such as a keyboard, mouse, touch screen, microphone, and camera, which can be adjusted as needed. It likewise often offers multiple output modes, such as a receiver and a display screen, also adjustable as needed;
(2) In terms of software, the device must have an operating system, such as Windows Mobile, Symbian, Palm, Android, or iOS. These operating systems are increasingly open, and countless personalized applications have been developed on these open platforms, such as address books, calendars, notepads, calculators, and games, meeting personalized user needs to a great extent;
(3) In terms of communication capability, the device has flexible access modes and high-bandwidth communication performance, and can automatically adjust its communication mode according to the selected service and environment, making it convenient to use. The device may support GSM (Global System for Mobile Communications), WCDMA (Wideband Code Division Multiple Access), CDMA2000 (Code Division Multiple Access 2000), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), Wi-Fi (Wireless Fidelity), WiMAX (Worldwide Interoperability for Microwave Access), and so on, adapting to networks of various standards and supporting not only voice services but also a variety of wireless data services;
(4) In terms of function, the device places greater emphasis on being humanized, personalized, and multi-functional. With the development of computer technology, devices have moved from a device-centered model to a human-centered one, integrating embedded computing, control technology, artificial intelligence, and biometric authentication, fully embodying a people-oriented design. Thanks to software technology, the device can be adjusted and configured according to individual needs, making it more personal. At the same time, the device integrates abundant software and hardware, and its functionality grows ever more powerful.
In practical applications, a publishing page is preset in the application client. The publishing page may contain a function button, such as a "publish" button, for synthesizing the target video and uploading the synthesized video. When the user selects a target video on the publishing page and clicks the "publish" button, the application client synthesizes the target video and uploads the synthesized data in real time, so that synthesis and uploading proceed together.
Step S202, synthesizing the target video based on the upload instruction, acquiring the synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
in the disclosed embodiment, two SDKs (Software Development Kit) may be set: editing the SDK and uploading the SDK. As shown in fig. 3, the editing SDK and the uploading SDK perform data interaction with the application client in a buffer form, respectively, receive scheduling of the application client, edit the SDK for producing data, that is, synthesize the target video, upload the SDK for consuming data, that is, obtain synthesized video data in real time, and upload the synthesized video data.
In a preferred embodiment of the present disclosure, after synthesizing the target video, the method further includes:
storing the synthesized video data in a preset storage space;
and acquiring the synthesized video data in real time and synchronously uploading it to the preset server includes:
acquiring the synthesized video data from the storage space;
and uploading the synthesized video data to the preset server.
Specifically, after receiving the upload instruction, the application client may call the editing SDK and the uploading SDK at the same time. While the editing SDK is called to synthesize the target video, it can pass the synthesized video data to the application client in real time through a buffer; upon receiving the video data, the application client stores it in a preset storage space, such as a cache. When the uploading SDK is called, it can acquire the synthesized video data from the storage space through a buffer and upload the acquired video data to the preset server.
Further, each time the editing SDK sends a piece of synthesized video data to the client, it sends a notification to the uploading SDK; after receiving the notification, the uploading SDK can fetch the not-yet-uploaded video data from the storage space.
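The producer/consumer interaction described above, in which the editing SDK fills a buffer and the uploading SDK drains it as data becomes available, can be sketched as follows. This is a minimal illustration rather than the patent's implementation: a Python `queue.Queue` stands in for the client-managed buffer, a plain list stands in for the server, and all names and sizes are hypothetical.

```python
import queue
import threading

buffer = queue.Queue()   # stands in for the client-managed buffer
uploaded = []            # stands in for what the server has received

def editing_sdk(target_video, chunk_size=2):
    """Producer: synthesize the target video chunk by chunk."""
    for offset in range(0, len(target_video), chunk_size):
        chunk = target_video[offset:offset + chunk_size]
        buffer.put((offset, chunk))   # putting data doubles as the notification
    buffer.put(None)                  # sentinel: synthesis finished

def uploading_sdk():
    """Consumer: upload each synthesized chunk as soon as it appears."""
    while True:
        item = buffer.get()
        if item is None:
            break
        uploaded.append(item)         # stands in for the network upload

video = list(range(10))               # fake "target video" of 10 units
producer = threading.Thread(target=editing_sdk, args=(video,))
consumer = threading.Thread(target=uploading_sdk)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(uploaded))   # 5 chunks of 2 units each
```

Because the consumer starts draining the queue while the producer is still filling it, upload time overlaps synthesis time, which is the effect the embodiment describes.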
In a preferred embodiment of the present disclosure, synthesizing the target video includes:
acquiring target video data of the target video;
and synthesizing the target video data to obtain the corresponding synthesized data, and marking the data size of the synthesized data and the offset of the synthesized data, where the offset characterizes the positional offset of the synthesized data within the target video.
Specifically, when the editing SDK synthesizes video data, it does so at a synthesis rate, for example 2 MB of data per second; the rate is affected by factors such as the terminal's hardware performance and the synthesis algorithm. The editing SDK may obtain target video data from the target video at the synthesis rate, synthesize that data to obtain the corresponding synthesized data, and mark the data size and the offset of the synthesized data.
The offset characterizes the position of the synthesized data within the video to be uploaded. The positional offset may be an address (byte) offset of the data. For example, if the editing SDK's synthesis rate is 2 MB per second, then in the 1st second the editing SDK obtains 2 MB of data from the target video, synthesizes it into the corresponding synthesized data, and marks the synthesized data as 2 MB in size, occupying bytes 0 to 2 MB of the video to be uploaded.
Alternatively, the positional offset may be a time offset of the data. For example, with the same 2 MB-per-second synthesis rate, in the 1st second the editing SDK obtains 2 MB of data from the target video, synthesizes it into the corresponding synthesized data, and marks the synthesized data as 2 MB in size, covering seconds 0 to 1 of the video to be uploaded.
Of course, other types of offsets are also applicable to the embodiment of the present disclosure and may be chosen according to actual requirements; the embodiment of the present disclosure does not limit this.
Furthermore, after receiving multiple pieces of synthesized video data, the application client can splice them together, which makes subsequent fragmentation of the synthesized video data convenient. For example, if the editing SDK's synthesis rate is 2 MB per second, then after 3 seconds the application client has received three 2 MB pieces of synthesized video data; it can splice these three pieces in order, based on their offsets, into a single 6 MB piece of synthesized video data.
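The offset-based splicing, together with the out-of-range check used to detect abnormal synthesis, can be illustrated with a small sketch. Byte strings stand in for video data and sizes are scaled down from megabytes to bytes, so all values and names here are illustrative only.

```python
def splice(chunks, total_size):
    """Splice synthesized chunks into one buffer using their marked offsets.

    Each chunk is (offset, data); chunks may arrive in any order.
    Returns None if any offset falls outside the target video's range,
    which is treated as a sign of abnormal synthesis.
    """
    out = bytearray(total_size)
    for offset, data in chunks:
        if offset < 0 or offset + len(data) > total_size:
            return None   # offset beyond the target video: synthesis failed
        out[offset:offset + len(data)] = data
    return bytes(out)

# Three 2-byte chunks standing in for three 2 MB pieces of synthesized data
chunks = [(0, b"aa"), (2, b"bb"), (4, b"cc")]
print(splice(chunks, 6))        # b'aabbcc'
print(splice([(5, b"xx")], 6))  # None: offset out of range
```

Because each chunk carries its own offset, the splice result is the same regardless of arrival order, which is what allows synthesis and upload to proceed asynchronously.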
Still further, the embodiment of the present disclosure can determine whether synthesis is abnormal based on the offsets of the already-synthesized video data. Specifically, in practice the editing SDK synthesizes the target video data in its order within the target video, so if synthesis is normal, the offset of the synthesized video data never exceeds the range of the target video. If the offset of synthesized video data received by the application client exceeds the range of the target video, it can be determined that synthesis has failed, and the uploading SDK is notified to terminate the upload.
In a preferred embodiment of the present disclosure, before uploading the video data to the preset server, the method further includes:
fragmenting the synthesized video data, and marking each fragment's ID and upload status, where the upload status is either uploaded or not uploaded;
and uploading the video data to the preset server includes:
determining the IDs of the fragments whose upload status is not uploaded;
acquiring the video data of those fragments according to their IDs;
uploading the video data of those fragments to the server;
and, when an upload succeeds, changing the fragment's status from not uploaded to uploaded; when an upload fails, repeating the step of uploading that fragment's video data to the server until it succeeds, and then changing the fragment's status from not uploaded to uploaded.
Because the uploading SDK involves logic such as fragment upload, fragment retry, and dynamic adjustment of fragment size, it needs to decouple file input, so an on-demand claiming scheme is adopted: each fragment includes a fragment ID, video data, the data size of the video data, and an upload status. The application client records the fragment IDs whose data has already been handed to the SDK. When a requested fragment ID does not yet exist, the client continues reading video data from the end of the last fragment and records the new fragment; when the requested fragment ID exists, the video data corresponding to that fragment is read.
Specifically, the application client fragments the spliced synthesized data and marks each fragment's ID and upload status. When the uploading SDK fetches fragments, it checks each fragment's upload status in ID order, acquires the video data of the fragments whose status is not uploaded, and uploads that video data to the server. When an upload succeeds, the fragment's status is changed from not uploaded to uploaded; when an upload fails, the step of uploading that fragment's video data to the server is repeated until it succeeds, and the fragment's status is then changed from not uploaded to uploaded.
For example, suppose the application client has so far spliced together 6 MB of video data. It fragments the 6 MB into six 1 MB fragments and marks each of the six fragments' IDs and upload statuses. When the uploading SDK fetches video data, it obtains each fragment's upload status by fragment ID, uploads the video data of the fragments whose status is not uploaded to the server, and, when an upload succeeds, changes that fragment's status to uploaded.
Further, in practice, when the server receives a fragment it returns a transmission-success message, and when the terminal receives that message it can conclude the upload succeeded. If the upload of a fragment fails, the whole fragment can be retransmitted.
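The fragment bookkeeping above (per-fragment ID, upload status, and retransmission of a failed fragment) might look like the following sketch. Note one deviation: the patent retries until success, while this sketch bounds retries with a `max_retries` cap for safety; the flaky `send` callback and all names are hypothetical.

```python
def upload_fragments(fragments, send, max_retries=5):
    """Upload each not-yet-uploaded fragment in ID order, retrying on failure.

    `fragments` maps fragment ID -> {"data": ..., "status": "not_uploaded" | "uploaded"}.
    `send` is the network call; it returns True when the server acknowledges.
    """
    for frag_id in sorted(fragments):
        frag = fragments[frag_id]
        if frag["status"] == "uploaded":
            continue                         # skip fragments already on the server
        for _ in range(max_retries):
            if send(frag_id, frag["data"]):  # server returned transmission-success
                frag["status"] = "uploaded"
                break
        else:
            raise RuntimeError(f"fragment {frag_id} failed after retries")

# Simulated flaky network: the first attempt for each fragment fails.
attempts = {}
def flaky_send(frag_id, data):
    attempts[frag_id] = attempts.get(frag_id, 0) + 1
    return attempts[frag_id] > 1

frags = {i: {"data": bytes(1), "status": "not_uploaded"} for i in range(3)}
upload_fragments(frags, flaky_send)
print(all(f["status"] == "uploaded" for f in frags.values()))  # True
```

A failed fragment is retransmitted whole, matching the behavior described in the paragraph above.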
Further, when the application client fragments the video data, it can generate verification information for each fragment based on the fragment's video data and upload the verification information together with the fragment. After receiving a fragment, the server generates verification information from the fragment's video data and compares it with the verification information uploaded by the uploading SDK. If the two are identical, the fragment can be judged correct; otherwise the fragment is judged erroneous and must be retransmitted, and the terminal is notified to retransmit it.
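The per-fragment verification step can be sketched as below. The patent does not name a checksum algorithm, so the use of MD5 here is an assumption, and both function names are hypothetical.

```python
import hashlib

def fragment_checksum(data: bytes) -> str:
    """Compute verification information for a fragment.

    MD5 is an assumption; the patent only requires that client and server
    derive the same value from the same bytes.
    """
    return hashlib.md5(data).hexdigest()

def server_receive(data: bytes, claimed_checksum: str) -> bool:
    """Server side: recompute and compare; a mismatch means the fragment
    is erroneous and should be retransmitted."""
    return fragment_checksum(data) == claimed_checksum

payload = b"fragment-0 video data"
ok = server_receive(payload, fragment_checksum(payload))
corrupted = server_receive(payload + b"!", fragment_checksum(payload))
print(ok, corrupted)   # True False
```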
In practice, multiple fragments may be uploaded serially, for example in order of fragment ID, or in parallel, for example simultaneously through multiple threads.
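A parallel variant of the fragment upload, using a thread pool, might look like the following sketch; the `send` callback and fragment contents are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_parallel(fragments, send, workers=4):
    """Upload fragments concurrently; completion order does not matter
    because each fragment carries its own ID."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda item: send(*item), fragments.items()))
    return all(results)   # True only if every fragment was acknowledged

received = []   # stands in for the server; list.append is thread-safe in CPython
def fake_send(frag_id, data):
    received.append(frag_id)
    return True

frags = {i: b"x" for i in range(8)}
print(upload_parallel(frags, fake_send))    # True
print(sorted(received) == list(range(8)))   # True: all 8 fragments arrived
```

Because the server reassembles fragments by ID (and, ultimately, by the offsets in the header), out-of-order arrival under parallel upload is harmless.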
Step S203, repeating the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server until all synthesized video data of the target video has been acquired and uploaded to the preset server;
Since the acquisition of the synthesized video data is performed in real time, and the synchronous upload to the server is likewise performed in real time, the acquisition and upload steps must be repeated until the target video has been completely synthesized, yielding the complete synthesized video data, and that data has been completely uploaded to the server.
Step S204, when the synthesized video data of all the target videos are obtained, generating file header data of the target videos;
and step S205, uploading the header data to a preset server.
In practical applications, a complete video file comprises two parts, a file header and a file body. The data in the file body is the complete video data, while the data in the file header is related information about the video file, including an index of offsets. Specifically, after the editing SDK finishes synthesizing the entire target video, the offsets of all the video data are known, so once the complete synthesized video data of the target video has been obtained, the header data of the target video is generated and then sent to the application client. When the application client receives the header data, if any fragments have not yet been uploaded, it continues uploading them until all fragments are complete and then uploads the header data; if all fragments have already been uploaded, it uploads the header data immediately. After receiving the header data, the server splices the header data with the fragments to obtain the synthesized video. In this way, the total time for synthesis and uploading is approximately the longer of the two durations rather than their sum. For example, if synthesizing the target video takes 8 seconds and uploading the synthesized video takes 12 seconds, then in the embodiment of the present disclosure synthesis and uploading together take 12 to 13 seconds instead of 20 seconds; similarly, if synthesis takes 20 seconds and uploading takes 10 seconds, the total is 20 to 21 seconds instead of 30 seconds.
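The overlap of synthesis and uploading, with the header sent only after all body fragments, can be sketched as a producer/consumer pipeline. The queue-based structure, the `"moov"` header placeholder, and the sentinel convention are all illustrative assumptions, not details from the patent.

```python
import queue
import threading

def synthesize(chunks, buf: queue.Queue):
    """Editing SDK side: produce synthesized fragments one by one; the file
    header can only be generated after all body data (and its offsets) is known."""
    for c in chunks:
        buf.put(("body", c))
    buf.put(("header", b"moov"))  # header generated last
    buf.put(None)                 # sentinel: synthesis finished

def upload(buf: queue.Queue, server: list):
    """Upload SDK side: upload each piece as soon as it appears, so uploading
    overlaps synthesis instead of waiting for it to finish."""
    while (item := buf.get()) is not None:
        server.append(item)

buf: queue.Queue = queue.Queue()
server: list = []
producer = threading.Thread(target=synthesize, args=([b"f0", b"f1", b"f2"], buf))
consumer = threading.Thread(target=upload, args=(buf, server))
producer.start(); consumer.start()
producer.join(); consumer.join()

# The header is the last piece the server receives, so it can splice
# header + fragments into the final video.
assert server[-1] == ("header", b"moov")
```

Because the consumer drains the queue while the producer is still filling it, the total elapsed time approaches the longer of the two stages, which is the timing behavior the paragraph above describes.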
Further, in practical applications, the editing SDK may synthesize the target video data in its order of appearance in the target video. Therefore, if synthesis proceeds normally, the offset of the synthesized video data never exceeds the range of the target video; if the synthesized video data received by the application client covers video data outside that range (other than the header data), synthesis can be judged to have failed, and the upload SDK is notified to terminate the upload.
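The abnormal-synthesis check reduces to a simple range test on each chunk's offset; the function name and boundary convention below are illustrative assumptions.

```python
def check_offset(offset: int, size: int, target_total: int) -> bool:
    """Normal synthesis never produces body data beyond the target video's
    range, so a chunk that ends past that range signals a failed synthesis
    and the upload should be terminated."""
    return offset + size <= target_total

assert check_offset(0, 4096, 10_000)           # chunk within range: OK
assert not check_offset(9_000, 4096, 10_000)   # chunk spills past the end: failure
```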
In the embodiment of the present disclosure, the terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads it to a preset server; the acquiring and uploading steps are repeated until all synthesized video data of the target video has been acquired and uploaded to the preset server. In this way, after the user issues the upload instruction on the terminal, synthesis and uploading run simultaneously: the terminal uploads synthesized data to the server in real time while still synthesizing the target video. This realizes concurrent synthesis and uploading of the target video, reduces the total time required, and improves the user experience.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to still another embodiment of the present disclosure, and as shown in fig. 4, the apparatus of this embodiment may include:
a receiving module 401, configured to receive an upload instruction for a target video;
the processing module 402 is configured to synthesize a target video based on an upload instruction, acquire synthesized video data in real time, and upload the synthesized video data to a preset server in synchronization;
and repeatedly calling the processing module until the synthesized video data of all the target videos are obtained and all the synthesized video data are uploaded to a preset server.
In a preferred embodiment of the present disclosure, the apparatus further includes:
the generating module is used for generating file header data of the target video when the synthesized video data of all the target videos are obtained;
and the processing module is also used for uploading the file header data to a preset server.
In a preferred embodiment of the present disclosure, the apparatus further includes:
the storage module is used for storing the synthesized video data into a preset storage space after synthesizing the target video;
the processing module comprises:
a first obtaining sub-module, configured to obtain synthesized video data from a storage space;
and the uploading sub-module is used for uploading the synthesized video data to a preset server.
In a preferred embodiment of the present disclosure, the processing module includes:
the second obtaining submodule is used for obtaining target video data of the target video;
the synthesis submodule is used for carrying out synthesis processing on the target video data to obtain corresponding synthesized data;
a marking submodule for marking the data size of the synthesized data and the offset of the synthesized data; the offset is used for representing the position offset of the synthesized data in the video to be uploaded.
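The marking of each piece of synthesized data with its size and offset can be sketched as follows; the dictionary layout is an illustrative assumption about how the marks might be recorded.

```python
def mark_chunks(chunks):
    """Mark each piece of synthesized data with its data size and its offset,
    i.e. its position within the video to be uploaded: each chunk starts
    where the previous one ended."""
    marked, offset = [], 0
    for data in chunks:
        marked.append({"offset": offset, "size": len(data), "data": data})
        offset += len(data)
    return marked

marked = mark_chunks([b"aaaa", b"bb", b"ccc"])
# offsets come out as 0, 4, 6 for chunks of size 4, 2, 3
```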
In a preferred embodiment of the present disclosure, the apparatus further includes:
the fragment module is used for carrying out fragment processing on the synthesized video data and marking the ID and uploading state of each fragment before the step of uploading the video data to the preset server; the upload status includes any one of uploaded and not uploaded;
the processing module comprises:
the determining submodule is used for determining the IDs of the fragments whose upload state is not uploaded;
the third obtaining submodule is used for obtaining the video data of the non-uploaded fragments according to the IDs of the non-uploaded fragments;
the sending submodule is used for uploading the video data of the non-uploaded fragments to the server;
the updating submodule is used for changing the fragment state from not uploaded to uploaded when the upload succeeds; and when the upload fails, repeatedly calling the sending submodule until the upload succeeds, and then changing the fragment state from not uploaded to uploaded.
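The fragment state machine implemented by these submodules can be sketched as below. The function names and the flaky transport are illustrative assumptions; note also that the patent retries until success, whereas this sketch caps the number of attempts for safety.

```python
def upload_all(fragments, send, max_attempts=3):
    """Walk the fragment table: pick fragments whose state is 'not uploaded',
    send them, and flip the state to 'uploaded' on success; on failure the
    same fragment is retried (here, up to a capped number of attempts)."""
    states = {fid: "not uploaded" for fid in fragments}
    for fid in sorted(states):
        for _ in range(max_attempts):
            if send(fid, fragments[fid]):
                states[fid] = "uploaded"  # success: update the state
                break
            # failure: fall through and retry the same fragment
    return states

# A flaky transport: fragment 1 fails once before succeeding.
failures = {1: 1}
def flaky_send(fid, data):
    if failures.get(fid, 0) > 0:
        failures[fid] -= 1
        return False
    return True

states = upload_all({0: b"a", 1: b"b"}, flaky_send)
assert states == {0: "uploaded", 1: "uploaded"}
```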
The video processing apparatus of this embodiment can execute the video processing methods shown in the first and second embodiments of the present disclosure, and the implementation principles thereof are similar, and are not described herein again.
In the embodiment of the present disclosure, the terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads it to a preset server; the acquiring and uploading steps are repeated until all synthesized video data of the target video has been acquired and uploaded to the preset server. In this way, after the user issues the upload instruction on the terminal, synthesis and uploading run simultaneously: the terminal uploads synthesized data to the server in real time while still synthesizing the target video. This realizes concurrent synthesis and uploading of the target video, reduces the total time required, and improves the user experience.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 501 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)502, a Random Access Memory (RAM)503 and a storage device 508 hereinafter, which are specifically shown as follows:
as shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving an uploading instruction aiming at a target video; synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server; and repeating the steps of synthesizing the target video, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to the preset server until the complete synthesized video data of the target video is acquired and uploaded to the preset server.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a video processing method, comprising:
receiving an uploading instruction aiming at a target video;
synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
and repeating the steps of acquiring the synthesized video data in real time and synchronously uploading the synthesized video data to the preset server until the synthesized video data of all the target videos are acquired and the synthesized video data are all uploaded to the preset server.
Preferably, the method further comprises the following steps:
when synthesized video data of all target videos are acquired, generating file header data of the target videos;
and uploading the file header data to a preset server.
Preferably, after the synthesizing the target video, the method further includes:
storing the synthesized video data to a preset storage space;
The step of acquiring the synthesized video data in real time and synchronously uploading the synthesized video data to a preset server includes:
acquiring synthesized video data from a storage space;
and uploading the synthesized video data to a preset server.
Preferably, the synthesizing of the target video comprises:
acquiring target video data of a target video;
synthesizing the target video data to obtain corresponding synthesized data, and marking the data size of the synthesized data and the offset of the synthesized data; the offset is used for representing the position offset of the synthesized data in the video to be uploaded.
Preferably, before the step of uploading the video data to the preset server, the method further comprises:
carrying out fragment processing on the synthesized video data, and marking the ID and uploading state of each fragment; the upload status includes any one of uploaded and not uploaded;
The step of uploading the video data to the preset server includes:
determining the IDs of the fragments whose upload state is not uploaded;
acquiring the video data of the non-uploaded fragments according to the IDs of the non-uploaded fragments;
uploading the video data of the non-uploaded fragments to the server;
when the upload succeeds, changing the fragment state from not uploaded to uploaded; and when the upload fails, repeatedly executing the step of uploading the video data of the non-uploaded fragments to the server until the upload succeeds, and then changing the fragment state from not uploaded to uploaded.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a video processing apparatus for performing the method of example one, comprising:
the receiving module is used for receiving an uploading instruction aiming at a target video;
the processing module is used for synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time and synchronously uploading the synthesized video data to a preset server;
and repeatedly calling the processing module until the synthesized video data of all the target videos are obtained and all the synthesized video data are uploaded to a preset server.
Preferably, the apparatus further includes:
the generating module is used for generating file header data of the target video when the synthesized video data of all the target videos are obtained;
and the processing module is also used for uploading the file header data to a preset server.
Preferably, the apparatus further includes:
the storage module is used for storing the synthesized video data into a preset storage space after synthesizing the target video;
the processing module comprises:
a first obtaining sub-module, configured to obtain synthesized video data from a storage space;
and the uploading sub-module is used for uploading the synthesized video data to a preset server.
Preferably, the processing module comprises:
the second obtaining submodule is used for obtaining target video data of the target video;
the synthesis submodule is used for carrying out synthesis processing on the target video data to obtain corresponding synthesized data;
a marking submodule for marking the data size of the synthesized data and the offset of the synthesized data; the offset is used for representing the position offset of the synthesized data in the video to be uploaded.
Preferably, the apparatus further includes:
the fragment module is used for carrying out fragment processing on the synthesized video data and marking the ID and uploading state of each fragment before the step of uploading the video data to the preset server; the upload status includes any one of uploaded and not uploaded;
the processing module comprises:
the determining submodule is used for determining the IDs of the fragments whose upload state is not uploaded;
the third obtaining submodule is used for obtaining the video data of the non-uploaded fragments according to the IDs of the non-uploaded fragments;
the sending submodule is used for uploading the video data of the non-uploaded fragments to the server;
the updating submodule is used for changing the fragment state from not uploaded to uploaded when the upload succeeds; and when the upload fails, repeatedly calling the sending submodule until the upload succeeds, and then changing the fragment state from not uploaded to uploaded.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for processing video, comprising:
receiving an uploading instruction aiming at a target video;
synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
and repeating the steps of acquiring the synthesized video data in real time and synchronously uploading the synthesized video data to a preset server until all the synthesized video data of the target video are acquired and all the synthesized video data are uploaded to the preset server.
2. The method for processing video according to claim 1, further comprising:
when the synthesized video data of all the target videos are obtained, generating file header data of the target videos;
and uploading the file header data to the preset server.
3. The method according to claim 1 or 2, wherein after the synthesizing the target video, the method further comprises:
storing the synthesized video data to a preset storage space;
the step of acquiring the synthesized video data in real time and synchronously uploading the synthesized video data to a preset server comprises the following steps:
acquiring synthesized video data from the storage space;
and uploading the synthesized video data to the preset server.
4. The method according to claim 1 or 2, wherein the synthesizing the target video comprises:
acquiring target video data of the target video;
synthesizing the target video data to obtain corresponding synthesized video data, and marking the data size of the synthesized data and the offset of the synthesized data; the offset is used for representing the position offset of the synthesized data in the video to be uploaded.
5. The method for processing the video according to claim 3, wherein before the step of uploading the video data to the predetermined server, the method further comprises:
carrying out fragment processing on the synthesized video data, and marking the ID and uploading state of each fragment; the upload status includes any one of uploaded and not uploaded;
the step of uploading the video data to the preset server includes:
determining the IDs of the fragments whose upload state is not uploaded;
acquiring the video data of the non-uploaded fragments according to the IDs of the non-uploaded fragments;
uploading the video data of the non-uploaded fragments to the server;
when the uploading is successful, changing the fragment state from not uploaded to uploaded; and when the uploading fails, repeatedly executing the step of uploading the video data of the non-uploaded fragments to the server until the uploading is successful, and then changing the fragment state from not uploaded to uploaded.
6. An apparatus for processing video, comprising:
the receiving module is used for receiving an uploading instruction aiming at a target video;
the processing module is used for synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time and synchronously uploading the synthesized video data to a preset server;
and repeatedly calling the processing module until all the synthesized video data of the target video are obtained and all the synthesized video data are uploaded to the preset server.
7. The apparatus for processing video according to claim 6, further comprising:
the generating module is used for generating file header data of the target video when the synthesized video data of all the target videos are obtained;
the processing module is further configured to upload the header data to the preset server.
8. The apparatus for processing video according to claim 6 or 7, further comprising:
the storage module is used for storing the synthesized video data into a preset storage space after the target video is synthesized;
the processing module comprises:
a first obtaining sub-module, configured to obtain synthesized video data from the storage space;
and the uploading sub-module is used for uploading the synthesized video data to the preset server.
9. An electronic device, comprising:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is used for executing the video processing method of any one of the claims 1 to 5 by calling the operation instruction.
10. A computer-readable storage medium for storing computer instructions which, when executed on a computer, cause the computer to perform the method of processing video of any of claims 1-5.
CN202010042959.7A 2020-01-15 2020-01-15 Video processing method and device, electronic equipment and computer readable storage medium Active CN111263220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010042959.7A CN111263220B (en) 2020-01-15 2020-01-15 Video processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010042959.7A CN111263220B (en) 2020-01-15 2020-01-15 Video processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111263220A true CN111263220A (en) 2020-06-09
CN111263220B CN111263220B (en) 2022-03-25

Family

ID=70948911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010042959.7A Active CN111263220B (en) 2020-01-15 2020-01-15 Video processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111263220B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784150A (en) * 2021-09-14 2021-12-10 广州市网星信息技术有限公司 Video data distribution method and device, electronic equipment and storage medium
CN111263220B (en) * 2020-01-15 2022-03-25 Beijing ByteDance Network Technology Co., Ltd. Video processing method and device, electronic equipment and computer readable storage medium
CN115643442A (en) * 2022-10-25 2023-01-24 广州市保伦电子有限公司 Audio and video converging recording and playing method, device, equipment and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11298871A (en) * 1998-04-07 1999-10-29 Nec Software Chugoku Ltd System and method for controlling multi-point video conference, recording medium storing voice data control program and recording medium storing video data control program
US20040125208A1 (en) * 2002-09-30 2004-07-01 Malone Michael F. Forensic communication apparatus and method
CN1696923A (en) * 2004-05-10 2005-11-16 北京大学 Networked, multimedia synchronous composed storage and issuance system, and method for implementing the system
US20130169736A1 (en) * 2011-12-30 2013-07-04 Microsoft Corporation Making Calls Using an Additional Terminal
US20130216206A1 (en) * 2010-03-08 2013-08-22 Vumanity Media, Inc. Generation of Composited Video Programming
CN103458271A (en) * 2012-05-29 2013-12-18 北京数码视讯科技股份有限公司 Audio-video file splicing method and audio-video file splicing device
CN104811646A (en) * 2015-05-15 2015-07-29 电子科技大学 Multi-video streaming data concurrent modulation and buffer storage method based on continuous storage model
CN106227002A (en) * 2016-09-21 2016-12-14 中山新诺科技股份有限公司 Method for improving the efficiency of splicing adjustment and magnification resizing
CN106328172A (en) * 2015-06-30 2017-01-11 四川效率源信息安全技术有限责任公司 Loosafe embedded security protection apparatus-based data analysis and extraction method
WO2017048326A1 (en) * 2015-09-18 2017-03-23 Furment Odile Aimee System and method for simultaneous capture of two video streams
CN106792245A (en) * 2016-11-22 2017-05-31 广州华多网络科技有限公司 Live-streaming room video stream synthesis method, apparatus and terminal device
CN109195012A (en) * 2018-11-07 2019-01-11 成都索贝数码科技股份有限公司 Method for flash-combining transcoded/synthesized fragments into an MP4 file based on object storage
CN109640082A (en) * 2018-10-26 2019-04-16 西安科锐盛创新科技有限公司 Audio/video multimedia data processing method and its equipment
CN110536077A (en) * 2018-05-25 2019-12-03 杭州海康威视系统技术有限公司 Video synthesis and playback method, apparatus and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111263220B (en) * 2020-01-15 2022-03-25 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11298871A (en) * 1998-04-07 1999-10-29 Nec Software Chugoku Ltd System and method for controlling multi-point video conference, recording medium storing voice data control program and recording medium storing video data control program
US20040125208A1 (en) * 2002-09-30 2004-07-01 Malone Michael F. Forensic communication apparatus and method
CN1696923A (en) * 2004-05-10 2005-11-16 北京大学 Networked, multimedia synchronous composed storage and issuance system, and method for implementing the system
US20130216206A1 (en) * 2010-03-08 2013-08-22 Vumanity Media, Inc. Generation of Composited Video Programming
EP2611122B1 (en) * 2011-12-30 2017-10-11 Skype Making calls using an additional terminal
US20130169736A1 (en) * 2011-12-30 2013-07-04 Microsoft Corporation Making Calls Using an Additional Terminal
CN103458271A (en) * 2012-05-29 2013-12-18 北京数码视讯科技股份有限公司 Audio-video file splicing method and audio-video file splicing device
CN104811646A (en) * 2015-05-15 2015-07-29 电子科技大学 Multi-video streaming data concurrent modulation and buffer storage method based on continuous storage model
CN106328172A (en) * 2015-06-30 2017-01-11 四川效率源信息安全技术有限责任公司 Loosafe embedded security protection apparatus-based data analysis and extraction method
WO2017048326A1 (en) * 2015-09-18 2017-03-23 Furment Odile Aimee System and method for simultaneous capture of two video streams
CN106227002A (en) * 2016-09-21 2016-12-14 中山新诺科技股份有限公司 Method for improving the efficiency of splicing adjustment and magnification resizing
CN106792245A (en) * 2016-11-22 2017-05-31 广州华多网络科技有限公司 Live-streaming room video stream synthesis method, apparatus and terminal device
CN110536077A (en) * 2018-05-25 2019-12-03 杭州海康威视系统技术有限公司 Video synthesis and playback method, apparatus and device
CN109640082A (en) * 2018-10-26 2019-04-16 西安科锐盛创新科技有限公司 Audio/video multimedia data processing method and its equipment
CN109195012A (en) * 2018-11-07 2019-01-11 成都索贝数码科技股份有限公司 Method for flash-combining transcoded/synthesized fragments into an MP4 file based on object storage

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU XIUCHANG, TANG GUIJIN: "IP网络视频传输 (IP Network Video Transmission)", 30 September 2017, Posts & Telecom Press *


Also Published As

Publication number Publication date
CN111263220B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN105338424B (en) Video processing method and system
CN111263220B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN110809189B (en) Video playing method and device, electronic equipment and computer readable medium
CN113395353B (en) File downloading method and device, storage medium and electronic equipment
CN111629252A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN111368519B (en) Method, device, equipment and storage medium for editing online form
CN113542902B (en) Video processing method and device, electronic equipment and storage medium
CN112383787B (en) Live broadcast room creating method and device, electronic equipment and storage medium
CN114205665B (en) Information processing method, device, electronic equipment and storage medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN110781150A (en) Data transmission method and device and electronic equipment
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN111314439A (en) Data sending method and device and electronic equipment
CN110806846A (en) Screen sharing method, screen sharing device, mobile terminal and storage medium
CN110837333A (en) Method, device, terminal and storage medium for adjusting playing progress of multimedia file
CN112257478A (en) Code scanning method, device, terminal and storage medium
CN110719407A (en) Picture beautifying method, device, equipment and storage medium
CN110996155B (en) Video playing page display method and device, electronic equipment and computer readable medium
CN111756953A (en) Video processing method, device, equipment and computer readable medium
CN115114463A (en) Media content display method and device, electronic equipment and storage medium
CN114979762A (en) Video downloading and transmission method, device, terminal equipment, server and medium
CN109933556B (en) Method and apparatus for processing information
CN109889737B (en) Method and apparatus for generating video
CN111385638B (en) Video processing method and device
CN111314021A (en) Data transmission method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.