CN110602122A - Video processing method and device, electronic equipment and storage medium
- Publication number: CN110602122A
- Application number: CN201910894816.6A
- Authority: CN (China)
- Prior art keywords: video, fragment, uploading, fragments, server
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Classifications
- H04L65/70 (Media network packetisation)
- H04L65/75 (Media network packet handling)
- H04L67/06 (Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP])
- H04L67/146 (Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding)
Abstract
The present disclosure relates to a video processing method, an apparatus, an electronic device and a storage medium, and to the technical field of multimedia information processing. The video processing method comprises: encoding a video to be processed, and generating a plurality of video fragments one by one in encoding order; each time a video fragment is obtained and a fragment identifier is assigned to it, uploading the video fragment and its corresponding fragment identifier to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it, wherein the fragment identifier marks the order in which the plurality of video fragments are generated; and, while each video fragment is being uploaded, if encoding of the video to be processed is not finished, continuing to encode the video to be processed to obtain the next video fragment. In this way, most of the time spent on uploading and on decoding at the server is saved, so the overall time consumption is significantly reduced.
Description
Technical Field
The present disclosure relates to the field of multimedia information processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Video uploading is the core function of short-video sharing software; shortening the time consumed by uploading, and thus the delay perceived by the user, greatly improves the content production experience of authors. The time consumed by uploading and sharing a video is limited by several factors, including the transcoding speed at the terminal, the size of the transcoded file, the uplink network bandwidth and the transcoding time at the server side, and there is considerable room for optimization.
Short-video sharing generally comprises five steps: the user shoots and edits a work, the video file is transcoded at the terminal, the video file is uploaded over the network, the server transcodes it, and the video file is distributed. Apart from shooting and editing, terminal transcoding and uploading both require the user to wait for completion; otherwise the upload may fail because the software's operation is restricted or the software is killed by the system. Server transcoding can be performed asynchronously, and after it completes the video is distributed through a Content Delivery Network (CDN). Generally, after the user finishes editing on the terminal, the original file is transcoded and compressed by an editing module into a video file that is small and convenient to upload, and a network interface is then called to upload the file to the server. The server receives the file, calls a transcoding service to further process the video (for example, to compress it further or to convert it into multiple streams with different bit rates), and then distributes it.
The related art optimizes transcoding and uploading speed, but the inventors have found that optimization of transcoding and transmission speed in the industry has reached a relative bottleneck, and the room for further optimization is limited.
Disclosure of Invention
The present disclosure provides a video processing method, a video processing apparatus, an electronic device and a storage medium, to at least solve the problem that optimization of transcoding and transmission speeds in the related art has reached a relative bottleneck. The provided video processing method enables a video file to be processed with a high degree of parallelism across the three stages of client-side transcoding, uploading and server-side transcoding by means of video fragmentation.
According to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
encoding a video to be processed, and generating a plurality of video fragments one by one in encoding order;
each time a video fragment is obtained and a fragment identifier is assigned to it, uploading the video fragment and its corresponding fragment identifier to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it; wherein the fragment identifier marks the order in which the plurality of video fragments are generated;
while each video fragment is being uploaded, if encoding of the video to be processed is not finished, continuing to encode the video to be processed to obtain the next video fragment.
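For concreteness, one possible shape of the per-fragment upload payload implied by the first aspect is sketched below in Python; the field names, the video_id field and the use of a monotonically increasing index as the fragment identifier are illustrative assumptions rather than part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class FragmentUpload:
    video_id: str   # identifies the video to be processed as a whole (assumed field)
    slice_id: int   # fragment identifier: position in encoding order (0, 1, 2, ...)
    is_last: bool   # tells the server that no further fragments will follow (assumed field)
    data: bytes     # encoded bytes of this video fragment

def make_payload(video_id: str, slice_id: int, data: bytes, is_last: bool) -> FragmentUpload:
    """Attach the fragment identifier when the fragment is produced, so the
    server can restore the generation order of the fragments."""
    return FragmentUpload(video_id, slice_id, is_last, data)
```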
In one embodiment, after generating the plurality of video fragments one by one in encoding order, the method further includes:
outputting the video fragments generated one by one to a file, and recording the offset of each video fragment in the file;
uploading the video fragment and the fragment identifier corresponding to the video fragment to a server, comprising:
for each video fragment, reading data corresponding to the video fragment from the file according to the recorded offset of the video fragment;
and uploading the data of the video fragment and the fragment identifier to the server.
In one embodiment, after generating the plurality of video fragments one by one in encoding order, the method further includes:
after each video fragment is obtained, respectively establishing a corresponding video fragment file for each video fragment, and recording the storage address of each video fragment file;
uploading the video fragment and the fragment identifier corresponding to the video fragment to a server, comprising:
reading the data of each video fragment according to the storage address of the video fragment file of each video fragment;
and uploading each video fragment and the corresponding fragment identifier to the server.
In one embodiment, uploading the video fragment and its corresponding fragment identifier to a server includes:
when each video fragment is obtained, an uploading task of the video fragment is created;
and when a plurality of uploading tasks exist, simultaneously uploading the fragments corresponding to the uploading tasks to the server.
In one embodiment, after uploading each video slice to the server, the method further comprises:
and outputting a notification that the video upload is complete if it is determined that every video fragment of the video has been uploaded.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
receiving each video fragment of a video to be processed uploaded by a terminal and a fragment identifier corresponding to each video fragment;
decoding each video fragment according to the fragment identifier;
while each video fragment is being decoded, if a next video fragment is being uploaded, receiving the next video fragment at the same time.
In one embodiment, receiving the video fragments uploaded by the terminal and the fragment identifier of each video fragment includes:
storing the video fragments uploaded by the terminal into a receiving queue;
sorting the video fragments in the receiving queue according to the fragment identification;
the decoding of each video slice according to the slice identifier includes:
and decoding the video slices in sequence according to the slice identification.
In one embodiment, after sequentially decoding the video fragments, the method further includes:
and sorting the video frames obtained by decoding the video fragments according to the fragment identifiers.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
the first coding module is configured to code the video to be processed and generate a plurality of video fragments one by one according to the coding sequence;
the uploading module is configured to, each time a video fragment is obtained and a fragment identifier is assigned to it, upload the video fragment and its corresponding fragment identifier to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it; wherein the fragment identifier marks the order in which the plurality of video fragments are generated;
and the second coding module is configured to, in the process of uploading each video fragment, if the coding for the to-be-processed video is not finished, continue coding the to-be-processed video to obtain a next video fragment.
In one embodiment, after the first encoding module is configured to perform the generating of the plurality of video slices one by one in encoding precedence order, the apparatus further comprises:
the output module is configured to output the video fragments generated one by one to a file and record the offset of each video fragment in the file;
the upload module configured to perform:
for each video fragment, reading data corresponding to the video fragment from the file according to the recorded offset of the video fragment;
and uploading the data of the video fragment and the fragment identifier to the server.
In one embodiment, after the first encoding module is configured to perform the generating of the plurality of video slices one by one in encoding precedence order, the apparatus further comprises:
the file establishing module is configured to respectively establish corresponding video fragment files for each video fragment after each video fragment is obtained, and record the storage address of each video fragment file;
the upload module configured to perform:
reading the data of each video fragment according to the storage address of the video fragment file of each video fragment;
and uploading each video fragment and the corresponding fragment identifier to the server.
In one embodiment, the upload module is configured to perform:
when each video fragment is obtained, an uploading task of the video fragment is created;
and when a plurality of uploading tasks exist, simultaneously uploading the fragments corresponding to the uploading tasks to the server.
In one embodiment, after the uploading module is configured to perform uploading each video slice to the server, the apparatus further includes:
and the notification module is configured to output a notification that the video upload is complete if it is determined that every video fragment of the video has been uploaded.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
the first receiving module is configured to receive each video fragment of the video to be processed uploaded by the terminal and the fragment identifier corresponding to each video fragment;
the decoding module is configured to decode each video fragment according to the fragment identifier;
and the second receiving module is configured to, while each video fragment is being decoded, receive the next video fragment at the same time if a next video fragment is being uploaded.
In one embodiment, the first receiving module is configured to perform:
storing the video fragments uploaded by the terminal into a receiving queue;
sorting the video fragments in the receiving queue according to the fragment identification;
the decoding module configured to perform:
and decoding the video slices in sequence according to the slice identification.
In one embodiment, after the video fragments are sequentially decoded, the apparatus further comprises:
and the sequencing module is configured to perform sequencing on the video frames obtained by decoding the video slices according to the slice identifiers.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first and second aspects.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer storage medium storing computer-executable instructions for performing the method according to the first and second aspects.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
The present disclosure relates to a video processing method, an apparatus, an electronic device and a storage medium, which mainly realize parallel processing across the three stages of terminal encoding, uploading and server-side decoding. In the implementation, the terminal first encodes the video to be processed and generates a plurality of video fragments one by one in encoding order; each time a video fragment is obtained and a fragment identifier is assigned to it, the video fragment and its corresponding fragment identifier are uploaded to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it, wherein the fragment identifier marks the order in which the plurality of video fragments are generated; and, while each video fragment is being uploaded, if encoding of the video to be processed is not finished, the video to be processed continues to be encoded to obtain the next video fragment.
Secondly, the server receives each video fragment of the video to be processed uploaded by the terminal and the fragment identifier corresponding to each video fragment; decodes each video fragment according to its fragment identifier; and, while each video fragment is being decoded, receives the next video fragment at the same time if a next video fragment is being uploaded. With this way of uploading the video fragments generated by encoding the video file to be processed, most of the time spent on uploading and on decoding at the server is saved, so the overall time consumption is significantly reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a structural diagram of a terminal device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video serial upload method according to an embodiment of the present disclosure;
fig. 3 is a timing diagram of a video serial upload method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a terminal side of a video processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart of a server side of a video processing method according to an embodiment of the present disclosure;
fig. 6 is a timing diagram of a video processing method according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of a terminal side of a video processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a server side of a video processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
The terminal device according to the embodiment of the present application may also be referred to as a User Equipment (UE). The terminal equipment can be a smart phone, a tablet personal computer, various wearable equipment, vehicle-mounted equipment and the like. Various applications such as WeChat, maps, etc. may be installed in the terminal device.
In the embodiments of the present application, "a plurality of" means two or more.
The embodiment of the application provides a video processing method and terminal equipment, and the method is suitable for the terminal equipment. Fig. 1 shows a block diagram of a possible terminal device. Referring to fig. 1, the terminal device 100 includes: a Radio Frequency (RF) circuit 110, a power supply 120, a processor 130, a memory 140, an input unit 150, a display unit 160, a camera 170, a communication interface 180, and a Wireless Fidelity (WiFi) module 190. Those skilled in the art will appreciate that the structure of the terminal device shown in fig. 1 does not constitute a limitation of the terminal device, and the terminal device provided in the embodiments of the present application may include more or fewer components than those shown, may combine some components, or may have a different arrangement of components.
The following describes each component of the terminal device 100 in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting data during a communication or conversation. Specifically, the RF circuit 110 sends the downlink data of the base station to the processor 130 for processing after receiving the downlink data; and in addition, sending the uplink data to be sent to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The WiFi technology belongs to a short-distance wireless transmission technology, and the terminal device 100 may connect to an Access Point (AP) through a WiFi module 190, so as to implement Access to a data network. The WiFi module 190 may be used for receiving and transmitting data during communication.
The terminal device 100 may be physically connected to other devices through the communication interface 180. In one embodiment, the communication interface 180 is connected to the communication interface of the other device through a cable, so as to realize data transmission between the terminal device 100 and the other device.
In the embodiment of the present application, the terminal device 100 can implement a communication service to send information to other contacts, so that the terminal device 100 needs to have a data transmission function, that is, the terminal device 100 needs to include a communication module inside. Although fig. 1 shows communication modules such as the RF circuit 110, the WiFi module 190, and the communication interface 180, it is understood that at least one of the above components or other communication modules (such as a bluetooth module) for realizing communication exists in the terminal device 100 for data transmission.
For example, when the terminal device 100 is a mobile phone, the terminal device 100 may include the RF circuit 110 and may further include the WiFi module 190; when the terminal device 100 is a computer, the terminal device 100 may include the communication interface 180 and may further include the WiFi module 190; when the terminal device 100 is a tablet computer, the terminal device 100 may include the WiFi module.
The memory 140 may be used to store software programs and modules. The processor 130 executes various functional applications and data processing of the terminal device 100 by executing software programs and modules stored in the memory 140.
In one embodiment, the memory 140 may mainly include a program storage area and a data storage area. The storage program area can store an operating system, various application programs (such as communication application), a face recognition module and the like; the storage data area may store data (such as various multimedia files like pictures, video files, etc., and face information templates) created according to the use of the terminal device, and the like.
Further, the memory 140 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input unit 150 may be used to receive numeric or character information input by a user and generate key signal inputs related to user settings and function control of the terminal device 100.
In one embodiment, the input unit 150 may include a touch panel 151 and other input devices 152.
The touch panel 151, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 151 (for example, an operation performed by the user on or near the touch panel 151 using any suitable object or accessory such as a finger or a stylus), and drive a corresponding connection device according to a preset program. In one embodiment, the touch panel 151 may include two parts, a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 130, and can receive and execute commands sent by the processor 130. In addition, the touch panel 151 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
In one embodiment, the other input devices 152 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 160 may be used to display information input by a user or information provided to a user and various menus of the terminal device 100. The display unit 160 is a display system of the terminal device 100, and is used for presenting an interface to implement human-computer interaction.
The display unit 160 may include a display panel 161. In one embodiment, the Display panel 161 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
Further, the touch panel 151 may cover the display panel 161, and when the touch panel 151 detects a touch operation on or near the touch panel, the touch panel transmits the touch operation to the processor 130 to determine the type of the touch event, and then the processor 130 provides a corresponding visual output on the display panel 161 according to the type of the touch event.
Although the touch panel 151 and the display panel 161 are shown in fig. 1 as two separate components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 151 and the display panel 161 may be integrated to implement the input and output functions of the terminal device 100.
The processor 130 is a control center of the terminal device 100, connects various components using various interfaces and lines, and executes various functions and processes data of the terminal device 100 by running or executing software programs and/or modules stored in the memory 140 and calling data stored in the memory 140, thereby implementing various services based on the terminal device.
In one embodiment, the processor 130 may include one or more processing units. In one embodiment, the processor 130 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 130.
The camera 170 is configured to implement a shooting function of the terminal device 100, and shoot pictures or videos. The camera 170 may also be used to implement a scanning function of the terminal device 100, and scan a scanned object (two-dimensional code/barcode).
The terminal device 100 further comprises a power supply 120, such as a battery, for powering the various components. In one embodiment, the power supply 120 may be logically connected to the processor 130 through a power management system, so as to manage charging, discharging, and power consumption functions through the power management system.
Although not shown, the terminal device 100 may further include at least one sensor, an audio circuit, and the like, which are not described in detail herein.
Referring to fig. 2, short video sharing generally includes five steps of user shooting and editing a work (step 201), video file terminal transcoding (step 202), video file uploading (step 203), server transcoding (step 204), and video file distribution (step 205). The three steps of terminal transcoding, video file uploading and server transcoding are background processing processes and can be optimized.
At present, the performance optimization of each individual stage (terminal transcoding, video file uploading and server-side transcoding) achieved in the related art has reached a relative bottleneck, and the room for further optimization is limited. Fig. 3 is a timing diagram of a serial short-video uploading method; the video processing flow of fig. 3 may include the following steps 301 to 303.
Step 301: and (6) transcoding by the terminal.
After the user shoots and edits the work, transcoding is first performed at the terminal. For example, in fig. 3, the terminal transcodes the video file to be processed sequentially, generating video fragments CF1, CF2, CF3, CF4 and CF5. When all video fragments have been obtained, step 302 is executed.
Step 302: and uploading the file.
When all the transcoded video fragments have been obtained, the video fragments of the video file to be processed are uploaded as a whole, for example the video fragments UF1, UF2, UF3, UF4 and UF5 in fig. 3. When all the video fragments of the video file to be processed have been uploaded, step 303 is executed.
Step 303: and transcoding by the server side.
When the server has received all the video fragments of the video file to be processed, it starts to further transcode them, for example the video fragments SF1, SF2, SF3, SF4 and SF5 in fig. 3. When all the video fragments of the video file to be processed have been transcoded at the server, the complete video file is output and subsequent distribution processing continues.
In view of this, the present disclosure provides a video processing method that processes the video fragments of a video file in parallel across the above three steps, saving the time it takes for a user's work to be successfully published. The method is explained separately from the terminal side and from the server side. The implementation on the terminal side is shown in fig. 4, and the method includes:
step 401: coding a video to be processed, and generating a plurality of video fragments one by one according to the coding sequence;
In the field of streaming media transmission, such as the DASH (Dynamic Adaptive Streaming over HTTP) and HLS (HTTP Live Streaming) protocols commonly used for video on demand, a storage format commonly used for short video is fMP4 (Fragmented MP4, a file format). Unlike an ordinary MP4 file, fMP4 allows a video file to be organized as video fragments, so that playback can proceed in units of video fragments without waiting for the whole file to be processed, which solves the problem that ordinary MP4 cannot be played while being streamed. At the same time, fMP4 also makes parallel uploading of short videos possible.
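As an illustration only, a fragmented MP4 of this kind can be produced with a general-purpose encoder such as FFmpeg; the sketch below is not part of the disclosed method, and the codec choice, movflags combination and two-second fragment duration are assumptions made for the example.

```python
import subprocess

def encode_to_fmp4(src_path: str, dst_path: str, frag_seconds: int = 2) -> None:
    """Transcode src_path into a fragmented MP4 whose moof/mdat fragments can be
    consumed one by one before the whole file has been written."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src_path,
            "-c:v", "libx264", "-c:a", "aac",
            # These movflags make the muxer emit self-contained fragments
            # instead of a single moov/mdat pair at the end of the file.
            "-movflags", "frag_keyframe+empty_moov+default_base_moof",
            "-frag_duration", str(frag_seconds * 1_000_000),  # value is in microseconds
            dst_path,
        ],
        check=True,
    )
```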
Step 402: each time a video fragment is obtained and a fragment identifier is assigned to it, uploading the video fragment and its corresponding fragment identifier to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it; wherein the fragment identifier marks the order in which the plurality of video fragments are generated;
In one embodiment, the encoding performed on the terminal side (i.e. terminal transcoding) and the uploading of the video can be performed by two independent modules of the terminal, and the two modules can communicate with each other. After the user finishes editing the video, the terminal first encodes the video to be processed and generates a plurality of video fragments one by one in encoding order; each time a video fragment is obtained, information about that fragment is output to the uploading module, and as soon as the uploading module detects the information it starts uploading the fragment. For example, after the first video fragment of the video file to be processed is generated by encoding, information about the first video fragment is output, and when the uploading module detects this information it immediately starts uploading the first video fragment.
In addition, when the video fragments are obtained, in order to enable the server to decode according to the sequence of the video fragments, a fragment identifier is allocated to each video fragment, wherein the fragment identifier is used for marking the front-back sequence of generating a plurality of video fragments; and when the terminal receives the video fragments and the fragment identifications corresponding to the video fragments, decoding each video fragment according to the sequence of the identifications in the fragment identifications, thereby outputting the video file.
Step 403: while each video fragment is being uploaded, if encoding of the video to be processed is not finished, continuing to encode the video to be processed to obtain the next video fragment.
In one embodiment, while the upload of a video fragment is in progress, the terminal continues, without pausing, to encode the video to be processed and to generate video fragments one by one, so that for the same video the encoding and the uploading proceed in parallel as a whole. For example, while the first video fragment is being uploaded, the second video fragment can be generated by encoding in parallel, so that the upload of the first video fragment and the encoding of the second are processed at the same time; that is, as soon as encoding yields a video fragment, the upload of that fragment can start, and since the fragments are encoded serially one by one, the uploads of fragments that have already been generated are not affected. In this process, the terminal's encoding of the video file to be processed and its uploading of video fragments are performed in parallel. It can be understood that each video fragment forms a separate pipeline, and the upload of a video fragment can be initiated as soon as that fragment has been encoded. For example, while the first video fragment is being uploaded, the terminal encodes the video to be processed to generate the second video fragment. In this embodiment, the terminal no longer needs to wait until all video fragments of the whole video have been encoded before uploading them, which saves the corresponding waiting time.
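A minimal sketch of this two-module arrangement, assuming a generator-style encoder object and a single background uploader; the encoder.slices() and uploader.upload_slice() names are placeholders rather than an actual interface of the disclosure.

```python
import queue
import threading

def run_pipeline(encoder, uploader) -> None:
    """Encode fragments one by one and hand each to the upload module as soon
    as it exists, so uploading of fragment i overlaps encoding of fragment i+1."""
    pending: queue.Queue = queue.Queue()

    def encode_worker():
        for slice_id, data in enumerate(encoder.slices()):  # fragments in encoding order
            pending.put((slice_id, data))                   # notify the upload module
        pending.put(None)                                   # sentinel: encoding finished

    def upload_worker():
        while True:
            item = pending.get()
            if item is None:
                break
            slice_id, data = item
            uploader.upload_slice(slice_id, data)           # fragment plus its identifier

    threads = [threading.Thread(target=encode_worker), threading.Thread(target=upload_worker)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```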
In one embodiment, when the plurality of video fragments are generated one by one in encoding order, the fragments are output to a single file and the offset of each video fragment within that file is recorded. The data corresponding to a video fragment can then be read from the file according to its recorded offset, and the fragment data together with its fragment identifier are uploaded to the server.
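A sketch of this single-file variant, assuming the encoder appends each fragment to one shared output file and reports a (fragment identifier, offset, length) record for it; the index structure is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class SliceIndexEntry:
    slice_id: int   # fragment identifier
    offset: int     # byte offset of the fragment inside the shared output file
    length: int     # byte length of the fragment

def read_slice(output_path: str, entry: SliceIndexEntry) -> bytes:
    """Read one fragment's bytes out of the single encoder output file, so it
    can be uploaded together with its fragment identifier."""
    with open(output_path, "rb") as f:
        f.seek(entry.offset)
        return f.read(entry.length)
```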
Alternatively, in another embodiment, after each video fragment is obtained, a corresponding video fragment file is established for that fragment and the storage address of each video fragment file is recorded. The data of each video fragment can then be read according to the storage address of its video fragment file, and each video fragment together with its corresponding fragment identifier is uploaded to the server.
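A sketch of the per-fragment-file variant, assuming each fragment is written to its own file and the terminal keeps a map from fragment identifier to storage address; the file-naming scheme is an assumption.

```python
import os

def store_fragment(out_dir: str, slice_id: int, data: bytes, index: dict) -> str:
    """Write one fragment to its own file and record its storage address."""
    path = os.path.join(out_dir, f"slice_{slice_id:05d}.m4s")  # naming is illustrative
    with open(path, "wb") as f:
        f.write(data)
    index[slice_id] = path  # fragment identifier -> storage address
    return path

def load_fragment(index: dict, slice_id: int) -> bytes:
    """Read a fragment back from its recorded storage address before uploading."""
    with open(index[slice_id], "rb") as f:
        return f.read()
```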
In implementation, uploading the data corresponding to a video fragment to the server is carried out through an uploading task, so an uploading task is created for each video fragment as it is obtained. When several uploading tasks exist, the fragments corresponding to those tasks can be uploaded to the server at the same time. In this way, the uploading tasks of the plurality of video fragments are executed in parallel, which saves the waiting time of each video fragment before uploading.
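A sketch of running several uploading tasks at once, assuming a bounded thread pool and a plain HTTP endpoint; the requests library, the URL and the form fields are stand-ins rather than the actual upload interface of the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor, Future

import requests  # assumed HTTP client; any transport could be substituted

UPLOAD_URL = "https://example.com/upload"  # placeholder endpoint

def upload_slice(video_id: str, slice_id: int, data: bytes) -> None:
    """One uploading task: send a single fragment together with its identifier."""
    resp = requests.post(
        UPLOAD_URL,
        data={"video_id": video_id, "slice_id": slice_id},
        files={"slice": data},
        timeout=60,
    )
    resp.raise_for_status()

executor = ThreadPoolExecutor(max_workers=4)   # several tasks may run concurrently
in_flight: list[Future] = []

def on_fragment_ready(video_id: str, slice_id: int, data: bytes) -> None:
    # A new uploading task is created each time a fragment is obtained; tasks
    # already submitted keep running, so a slow fragment does not block later ones.
    in_flight.append(executor.submit(upload_slice, video_id, slice_id, data))
```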
In implementation, when all the video fragments obtained by encoding the user's video have been uploaded, a notification that the video upload is complete is output, and the uploading process for the video fragments ends.
On the other hand, as shown in fig. 5, the implementation from the server side includes:
step 501: receiving each video fragment of a video to be processed uploaded by a terminal and a fragment identifier corresponding to each video fragment;
step 502: decoding each video fragment according to the fragment identifier;
step 503: in the process of decoding each video fragment, if uploading of a next video fragment exists, receiving the next video fragment at the same time.
In one embodiment, the server can start decoding a video fragment as soon as it receives it; however, because the transcoded output is a single file, the server's decoding has to be performed serially. The server therefore maintains a receiving queue for the video fragments and sorts them according to the fragment identifiers assigned when the encoded fragments were output, the fragment identifiers marking the order in which the plurality of video fragments were generated; the video fragments can then be decoded in order according to their fragment identifiers. Alternatively, in an embodiment, the video frame information of each fragment is obtained after decoding, and the decoded video fragments are sorted according to this video frame information to obtain the whole, complete video file.
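A sketch of the server side under these assumptions: fragments may arrive out of order or concurrently, so they are buffered, reordered by fragment identifier and handed to a single serial decoder; decode_fragment stands in for the actual server-side transcoding service.

```python
import threading

class FragmentReorderBuffer:
    """Accept fragments in any arrival order and release them to the decoder
    strictly in fragment-identifier order, since decoding must be serial."""

    def __init__(self, decode_fragment):
        self._decode = decode_fragment   # callable(slice_id, data)
        self._pending = {}               # receiving queue: slice_id -> data
        self._next_id = 0                # identifier of the next fragment to decode
        self._lock = threading.Lock()

    def on_fragment_received(self, slice_id: int, data: bytes) -> None:
        # Bytes of later fragments can still be received over the network while
        # earlier fragments are decoded; only insertion waits for the lock here.
        with self._lock:
            self._pending[slice_id] = data
            while self._next_id in self._pending:
                # Decoding under the lock keeps it serial and in identifier order.
                self._decode(self._next_id, self._pending.pop(self._next_id))
                self._next_id += 1
```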
Through this parallel processing, in which the terminal uploads video fragments while the server receives and decodes them, the per-fragment waiting time incurred by the serial processing of the related art is saved, and the time a user waits for a work to finish uploading is therefore reduced.
In the following, another embodiment is adopted to further explain a video processing method provided by the present disclosure, and referring to fig. 6, a timing chart of a video processing method provided by an embodiment of the present disclosure is shown.
Fig. 6 includes three main processes: terminal transcoding, uploading, and server-side transcoding. As can be seen in the figure, when the terminal finishes transcoding the first video fragment CF1, the uploading module immediately starts to upload the first video fragment UF1; while UF1 is being uploaded, the terminal continues to transcode the video to be processed fragment by fragment and obtains the second video fragment CF2, so the transcoding proceeds in parallel. When the terminal transcoding stage later finishes transcoding the second video fragment CF2, the upload of the second video fragment UF2 is started immediately in the uploading module. The terminal transcoding is thus a process of generating a plurality of video fragments for the video to be processed one by one in encoding order.
In one embodiment, in the terminal transcoding stage, encoding the video to be processed to generate video fragments is executed sequentially, one fragment at a time; in the uploading stage, however, each video fragment takes a different time to upload because of the network, the file size and other factors. If the upload of a preceding video fragment has not finished when the terminal completes transcoding of the next video fragment, several transcoded video fragments can be uploaded in parallel. For example, in fig. 6, the upload of video fragment UF3 is not yet complete when the terminal has already finished transcoding video fragment CF4; at this time the uploads of video fragments UF3 and UF4 proceed in parallel in the uploading module. Similarly, the upload of video fragment UF4 is not yet complete when the terminal has finished transcoding video fragment CF5, and the uploads of UF4 and UF5 then proceed in parallel, which can be implemented with multiple uploading tasks.
In practice, transcoding at the server is performed serially and in order, since a complete video file has to be output, and the server-side transcoding of a video fragment starts once the upload of that fragment has finished. For example, in the figure the server starts transcoding video fragment SF1 as soon as the upload of video fragment UF1 has been received. In one embodiment, when the transcoding of video fragment SF1 is completed and video fragment UF2 has already been received, the transcoding of video fragment SF2 continues. When all the video fragments have been transcoded at the server, for example when the transcoding of video fragment SF5 in the figure is finished, the complete video file is output and the subsequent video file distribution continues.
Fig. 7 is a schematic structural diagram of a video processing terminal device according to an embodiment of the present disclosure, where the video processing terminal device includes: a first encoding module 701, an uploading module 702, and a second encoding module 703.
A first encoding module 701 configured to perform encoding on a video to be processed, and generate a plurality of video slices one by one according to an encoding sequence;
an uploading module 702 configured to, each time a video fragment is obtained and a fragment identifier is assigned to it, upload the video fragment and its corresponding fragment identifier to a server, so that the server decodes the video fragment according to its fragment identifier after receiving it; wherein the fragment identifier marks the order in which the plurality of video fragments are generated;
the second encoding module 703 is configured to, during the process of uploading each video slice, if encoding of the to-be-processed video is not finished, continue encoding the to-be-processed video at the same time to obtain a next video slice.
In one embodiment, after the first encoding module 701 is configured to perform the generating of the plurality of video slices one by one in encoding precedence order, the apparatus further includes:
the output module is configured to output the video fragments generated one by one to a file and record the offset of each video fragment in the file;
the upload module 702 configured to perform:
for each video fragment, reading data corresponding to the video fragment from the file according to the recorded offset of the video fragment;
and uploading the data of the video fragment and the fragment identifier to the server.
In one embodiment, after the first encoding module 701 is configured to perform the generating of the plurality of video slices one by one in encoding precedence order, the apparatus further includes:
the file establishing module is configured to respectively establish corresponding video fragment files for each video fragment after each video fragment is obtained, and record the storage address of each video fragment file;
the upload module 702 configured to perform:
reading the data of each video fragment according to the storage address of the video fragment file of each video fragment;
and uploading each video fragment and the corresponding fragment identifier to the server.
In one embodiment, the upload module 702 is configured to perform:
when each video fragment is obtained, an uploading task of the video fragment is created;
and when a plurality of uploading tasks exist, simultaneously uploading the fragments corresponding to the uploading tasks to the server.
In one embodiment, after the uploading module 702 is configured to perform uploading each video slice to the server, the apparatus further comprises:
and the notification module is configured to output a notification that the video upload is complete if it is determined that every video fragment of the video has been uploaded.
Referring to fig. 8, a schematic structural diagram of a video processing server device in an embodiment of the present disclosure is shown, where the video processing server device includes: a first receiving module 801, a decoding module 802 and a second receiving module 803.
A first receiving module 801 configured to receive each video fragment of the video to be processed uploaded by the terminal and the fragment identifier corresponding to each video fragment;
a decoding module 802 configured to perform decoding on each video slice according to the slice identifier;
the second receiving module 803 is configured to, while each video fragment is being decoded, receive the next video fragment at the same time if a next video fragment is being uploaded.
In one embodiment, the first receiving module 801 is configured to perform:
storing the video fragments uploaded by the terminal into a receiving queue;
sorting the video fragments in the receiving queue according to the fragment identification;
the decoding module configured to perform:
and decoding the video slices in sequence according to the slice identification.
In one embodiment, after the video fragments are sequentially decoded, the apparatus further comprises:
and the sequencing module is configured to perform sequencing on the video frames obtained by decoding the video slices according to the slice identifiers.
Having described the video processing method and apparatus in the exemplary embodiments of the present disclosure, an electronic device of another exemplary embodiment of the present disclosure is next described.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
In some possible implementations, an electronic device in accordance with the present disclosure may include at least one processor, and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the steps of the video processing method according to the various exemplary embodiments of the present disclosure described above in this specification. For example, the processor may perform steps 401 to 403 as shown in fig. 4 and steps 501 to 503 as shown in fig. 5.
The electronic device 130 according to this embodiment of the present disclosure is described below with reference to fig. 9. The electronic device 130 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 130 is embodied in the form of a general purpose computing apparatus. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable target objects to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 135. Also, computing device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 136. As shown, network adapter 136 communicates with other modules for electronic device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, the aspects of the video processing method provided by the present disclosure may also be implemented in the form of a program product including program code which, when the program product is run on a computer device, causes the computer device to perform the steps of the video processing method according to the various exemplary embodiments of the present disclosure described above in this specification; for example, the computer device may perform steps 401 to 403 shown in fig. 4 and steps 501 to 503 shown in fig. 5.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for video processing of embodiments of the present disclosure may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the target object computing device, partly on the target object apparatus, as a stand-alone software package, partly on the target object computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the target object electronic equipment through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to external electronic equipment (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in the particular order shown, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A video processing method, comprising:
encoding a video to be processed, and generating a plurality of video fragments one by one according to the encoding order;
after a video fragment is obtained and a fragment identifier is assigned to the video fragment, uploading the video fragment and its corresponding fragment identifier to a server, so that the server, after receiving the video fragment, decodes the video fragment according to its fragment identifier; wherein the fragment identifier is used for indicating the order in which the plurality of video fragments are generated;
in the process of uploading each video fragment, if the encoding of the video to be processed is not finished, continuing to encode the video to be processed to obtain a next video fragment.
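As an illustration of the encode-while-upload pipelining recited in claim 1, the sketch below hands each finished fragment to a background upload pool and immediately returns to encoding; `encode_next_fragment`, `upload`, and the server URL are hypothetical stand-ins for the encoder and the network layer, not interfaces defined by this disclosure.

```python
# Minimal sketch of encode-while-upload pipelining, assuming stub
# encoder and uploader functions (not part of this disclosure).
from concurrent.futures import ThreadPoolExecutor, wait

def encode_next_fragment(source):
    """Return the next encoded fragment as bytes, or None when encoding is done."""
    return next(source, None)

def upload(fragment_id, data, server_url):
    """Send one fragment and its fragment identifier to the server (stub)."""
    print(f"uploaded fragment {fragment_id} ({len(data)} bytes) to {server_url}")

def process_video(source, server_url="https://example.invalid/upload"):
    futures, fragment_id = [], 0
    with ThreadPoolExecutor(max_workers=4) as pool:
        while True:
            data = encode_next_fragment(source)   # keep encoding the next fragment...
            if data is None:
                break
            # ...while previously submitted fragments upload in background threads
            futures.append(pool.submit(upload, fragment_id, data, server_url))
            fragment_id += 1
        wait(futures)                              # all fragments have been uploaded
    print("upload of the whole video finished")

# Example run with three fake "encoded" fragments
process_video(iter([b"frag0", b"frag1", b"frag2"]))
```

Because uploads run in worker threads, the transfer of fragment n overlaps with the encoding of fragment n+1, which is where the overall time saving described above comes from.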
2. The method of claim 1, wherein after generating the plurality of video fragments one by one according to the encoding order, the method further comprises:
outputting the video fragments generated one by one to a file, and recording the offset of each video fragment in the file;
wherein uploading the video fragment and the fragment identifier corresponding to the video fragment to the server comprises:
for each video fragment, reading data corresponding to the video fragment from the file according to the recorded offset of the video fragment;
and uploading the data of the video fragment and the fragment identifier to the server.
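A minimal sketch of the single-file bookkeeping in claim 2 follows; the file name, fragment payloads, and helper names are illustrative assumptions, and the actual encoder output format is not specified here.

```python
# Sketch of claim 2: append fragments to one file, record each fragment's
# offset, then read a fragment back by its offset when it is uploaded.
import os

def write_fragments(path, fragments):
    offsets = []                          # (offset, length) per fragment
    with open(path, "wb") as f:
        for data in fragments:
            offsets.append((f.tell(), len(data)))
            f.write(data)
    return offsets

def read_fragment(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)                    # jump straight to the recorded offset
        return f.read(length)

offsets = write_fragments("encoded.bin", [b"frag0", b"fragment1", b"f2"])
for i, (off, length) in enumerate(offsets):
    print(f"fragment {i}: offset={off} data={read_fragment('encoded.bin', off, length)!r}")
os.remove("encoded.bin")
```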
3. The method of claim 1, wherein uploading the video fragments and their corresponding fragment identifiers to the server comprises:
creating an uploading task for each video fragment when the video fragment is obtained;
and when a plurality of uploading tasks exist, simultaneously uploading the video fragments corresponding to the uploading tasks to the server.
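The per-fragment uploading tasks of claim 3 can be sketched with cooperative tasks as below; `upload` is a placeholder that only simulates a transfer delay, and the fragment payloads are dummies. Whenever several tasks are pending, they progress concurrently instead of queueing behind one another.

```python
# Sketch of claim 3: create one uploading task per fragment; pending
# tasks upload concurrently. upload() is a stub, not a real client.
import asyncio

async def upload(fragment_id, data):
    await asyncio.sleep(0.1)              # stands in for the network transfer
    print(f"fragment {fragment_id} uploaded ({len(data)} bytes)")

async def main(fragments):
    tasks = []
    for fragment_id, data in enumerate(fragments):
        # create an uploading task as soon as the fragment is available
        tasks.append(asyncio.create_task(upload(fragment_id, data)))
    await asyncio.gather(*tasks)          # all pending tasks run simultaneously

asyncio.run(main([b"frag0", b"frag1", b"frag2"]))
```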
4. The method according to any one of claims 1-3, wherein after uploading each video fragment to the server, the method further comprises:
and if it is determined that all video fragments of the video have been uploaded, outputting a notification that the uploading of the video is finished.
5. A video processing method, comprising:
receiving each video fragment of a video to be processed uploaded by a terminal and a fragment identifier corresponding to each video fragment;
decoding each video fragment according to the fragment identifier;
in the process of decoding each video fragment, if a next video fragment is being uploaded, simultaneously receiving the next video fragment.
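On the server side, the overlap between receiving and decoding described in claim 5 can be sketched with a receiver thread feeding a queue that a decoding loop drains; `receiver`, `decode`, and the sample payloads are assumptions made for illustration only.

```python
# Sketch of claim 5: fragments keep arriving on one thread while another
# thread decodes the fragments already received. decode() is a stub.
import queue
import threading
import time

incoming = queue.Queue()

def receiver(fragments):
    for fragment_id, data in fragments:   # stands in for receiving from the network
        incoming.put((fragment_id, data))
        time.sleep(0.05)
    incoming.put(None)                    # end-of-stream marker

def decode(fragment_id, data):
    print(f"decoding fragment {fragment_id}: {len(data)} bytes")

threading.Thread(target=receiver,
                 args=([(0, b"frag0"), (1, b"frag1"), (2, b"frag2")],),
                 daemon=True).start()

while (item := incoming.get()) is not None:   # decode while more fragments arrive
    decode(*item)
```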
6. The method according to claim 5, wherein receiving each video fragment of the video uploaded by the terminal and the fragment identifier corresponding to each video fragment comprises:
storing the video fragments uploaded by the terminal into a receiving queue;
sorting the video fragments in the receiving queue according to the fragment identifiers;
and the decoding of each video fragment according to the fragment identifier comprises:
decoding the video fragments in sequence according to the fragment identifiers.
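One possible (assumed) realization of the receiving queue in claim 6 is a priority queue keyed by the fragment identifier, so that fragments arriving out of order are still decoded in generation order; `decode` and the sample arrival order are illustrative.

```python
# Sketch of claim 6: buffer fragments in a min-heap keyed by fragment
# identifier and decode them strictly in generation order. decode() is a stub.
import heapq

def decode(fragment_id, data):
    print(f"decoded fragment {fragment_id}: {data!r}")

def decode_in_order(arrivals):
    pending, expected = [], 0
    for fragment_id, data in arrivals:    # arrival order may differ from generation order
        heapq.heappush(pending, (fragment_id, data))
        # drain every fragment that is now contiguous with what was already decoded
        while pending and pending[0][0] == expected:
            decode(*heapq.heappop(pending))
            expected += 1

# fragments 1 and 2 arrive before fragment 0, yet 0 is decoded first
decode_in_order([(1, b"B"), (2, b"C"), (0, b"A")])
```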
7. A video processing apparatus, comprising:
the first encoding module is configured to encode the video to be processed and generate a plurality of video fragments one by one according to the encoding order;
the uploading module is configured to, after a fragment identifier is assigned to each video fragment, upload the video fragment and its corresponding fragment identifier to a server, so that the server, after receiving the video fragment, decodes the video fragment according to its fragment identifier; wherein the fragment identifier is used for indicating the order in which the plurality of video fragments are generated;
and the second encoding module is configured to, in the process of uploading each video fragment, if the encoding of the video to be processed is not finished, continue encoding the video to be processed to obtain a next video fragment.
8. A video processing apparatus, comprising:
the first receiving module is configured to receive each video fragment of the to-be-processed video uploaded by a terminal and the fragment identifier corresponding to each video fragment;
the decoding module is configured to decode each video fragment according to the fragment identifier;
and the second receiving module is configured to, in the process of decoding each video fragment, simultaneously receive a next video fragment if the next video fragment is being uploaded.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
10. A computer storage medium having computer-executable instructions stored thereon for performing the method of any one of claims 1-6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910894816.6A CN110602122A (en) | 2019-09-20 | 2019-09-20 | Video processing method and device, electronic equipment and storage medium |
PCT/CN2020/108268 WO2021052058A1 (en) | 2019-09-20 | 2020-08-10 | Video processing method and apparatus, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910894816.6A CN110602122A (en) | 2019-09-20 | 2019-09-20 | Video processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110602122A (en) | 2019-12-20 |
Family
ID=68862084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910894816.6A Pending CN110602122A (en) | 2019-09-20 | 2019-09-20 | Video processing method and device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110602122A (en) |
WO (1) | WO2021052058A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220303642A1 (en) * | 2021-03-19 | 2022-09-22 | Product Development Associates, Inc. | Securing video distribution |
CN113132484B (en) * | 2021-04-20 | 2022-10-25 | 北京奇艺世纪科技有限公司 | Data transmission method and device |
CN115225710B (en) * | 2022-06-17 | 2024-06-07 | 中国电信股份有限公司 | Data packet transmission method and device, electronic equipment and storage medium |
CN115589488B (en) * | 2022-09-30 | 2023-09-08 | 摩尔线程智能科技(北京)有限责任公司 | Video transcoding system, method, GPU, electronic device and storage medium |
CN115734008B (en) * | 2022-10-19 | 2024-06-04 | 北京智象信息技术有限公司 | Method, system and medium for rapidly integrating video resources of content provider |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101964894B (en) * | 2010-08-24 | 2012-11-14 | 中国科学院深圳先进技术研究院 | Method and system for parallel trans-coding of video slicing |
WO2014041547A1 (en) * | 2012-09-13 | 2014-03-20 | Yevvo Entertainment Inc. | Live video broadcasting from a mobile device |
CN105611429B (en) * | 2016-02-04 | 2019-03-15 | 北京金山安全软件有限公司 | Video file backup method and device and electronic equipment |
CN110602122A (en) * | 2019-09-20 | 2019-12-20 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium |
- 2019-09-20: CN application CN201910894816.6A (publication CN110602122A, en), status: active, Pending
- 2020-08-10: WO application PCT/CN2020/108268 (publication WO2021052058A1, en), status: active, Application Filing
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009046354A1 (en) * | 2007-10-03 | 2009-04-09 | Eatlime, Inc. | Methods and apparatus for simultaneous uploading and streaming of media |
US20120243602A1 (en) * | 2010-09-23 | 2012-09-27 | Qualcomm Incorporated | Method and apparatus for pipelined slicing for wireless display |
CN102447893A (en) * | 2010-09-30 | 2012-05-09 | 北京沃安科技有限公司 | Method and system for real-time acquisition and release of videos of mobile phone |
US20120102154A1 (en) * | 2010-10-26 | 2012-04-26 | Futurewei Technologies, Inc. | Cloud-Based Transcoding Platform Systems and Methods |
CN102098304A (en) * | 2011-01-25 | 2011-06-15 | 北京天纵网联科技有限公司 | Method for simultaneously recording and uploading audio/video of mobile phone |
US20150110202A1 (en) * | 2012-07-05 | 2015-04-23 | Quixel Holdings Limited | Simultaneous Encoding and Sending of a Video Data File |
CN103297807A (en) * | 2013-06-21 | 2013-09-11 | 哈尔滨工业大学深圳研究生院 | Hadoop-platform-based method for improving video transcoding efficiency |
CN104427353A (en) * | 2013-09-05 | 2015-03-18 | 北京大学 | Method and equipment for carrying out video transmission |
CN104822079A (en) * | 2014-12-31 | 2015-08-05 | 北京奇艺世纪科技有限公司 | Video file real-time publication method and system |
WO2016184229A1 (en) * | 2015-05-15 | 2016-11-24 | 乐视云计算有限公司 | Method and system for recording live video |
CN105338424A (en) * | 2015-10-29 | 2016-02-17 | 努比亚技术有限公司 | Video processing method and system |
CN105357593A (en) * | 2015-10-30 | 2016-02-24 | 努比亚技术有限公司 | Method, device and system for uploading video |
CN105681715A (en) * | 2016-03-03 | 2016-06-15 | 腾讯科技(深圳)有限公司 | Audio and video processing method and apparatus |
US10021159B2 (en) * | 2016-05-25 | 2018-07-10 | Giraffic Technologies Ltd. | Method of stabilized adaptive video streaming for high dynamic range (HDR) |
US20180227602A1 (en) * | 2016-08-31 | 2018-08-09 | Living As One, Llc | System and method for asynchronous uploading of live digital multimedia with guaranteed delivery |
CN108632642A (en) * | 2017-03-16 | 2018-10-09 | 杭州海康威视数字技术股份有限公司 | Streaming Media method for pushing and device |
CN107483471A (en) * | 2017-09-05 | 2017-12-15 | 成都索贝数码科技股份有限公司 | A kind of Multimedia Transmission System suitable for strange land cooperation |
US20190182495A1 (en) * | 2017-12-12 | 2019-06-13 | Coherent Logix, Incorporated | Low Latency Video Codec and Transmission with Parallel Processing |
CN108848384A (en) * | 2018-06-19 | 2018-11-20 | 复旦大学 | A kind of efficient parallel code-transferring method towards multi-core platform |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021052058A1 (en) * | 2019-09-20 | 2021-03-25 | 北京达佳互联信息技术有限公司 | Video processing method and apparatus, electronic device and storage medium |
CN112104442A (en) * | 2020-08-31 | 2020-12-18 | 宁波三星医疗电气股份有限公司 | Message reply method of power acquisition terminal |
CN112104442B (en) * | 2020-08-31 | 2023-12-05 | 宁波三星医疗电气股份有限公司 | Message reply method of electric power acquisition terminal |
CN115190352A (en) * | 2022-05-18 | 2022-10-14 | 上海亘岩网络科技有限公司 | Video data storage method and device, computer readable storage medium and electronic equipment |
CN115988241A (en) * | 2022-12-12 | 2023-04-18 | 苏州五指互联网络科技有限公司 | Data transmission method for uploading short video in fragment mode |
Also Published As
Publication number | Publication date |
---|---|
WO2021052058A1 (en) | 2021-03-25 |
Similar Documents
Publication | Title |
---|---|
CN110602122A (en) | Video processing method and device, electronic equipment and storage medium | |
US11355130B2 (en) | Audio coding and decoding methods and devices, and audio coding and decoding system | |
CN103475939B (en) | A kind of process plays method, device and the server recorded | |
CN107454416B (en) | Video stream sending method and device | |
US11392285B2 (en) | Method and a system for performing scrubbing in a video stream | |
US11202066B2 (en) | Video data encoding and decoding method, device, and system, and storage medium | |
CN112104897B (en) | Video acquisition method, terminal and storage medium | |
CN105144673A (en) | Reduced latency server-mediated audio-video communication | |
CN106998485B (en) | Video live broadcasting method and device | |
US10893275B2 (en) | Video coding method, device, device and storage medium | |
CN108900855B (en) | Live content recording method and device, computer readable storage medium and server | |
CN110753098A (en) | Download request execution method and device, server and storage medium | |
JP2017526311A (en) | Cloud streaming service system, data compression method for preventing memory bottleneck, and apparatus therefor | |
CN108337533B (en) | Video compression method and device | |
CN112423140A (en) | Video playing method and device, electronic equipment and storage medium | |
CN111917813A (en) | Communication method, device, equipment, system and storage medium | |
CN104753811B (en) | A kind of streaming media service optimization method, equipment and system | |
CN109995743A (en) | A kind of processing method and terminal of multimedia file | |
CN112565923B (en) | Audio and video stream processing method and device, electronic equipment and storage medium | |
CN104853193A (en) | Video compression method, device and electronic equipment | |
CN112823519B (en) | Video decoding method, device, electronic equipment and computer readable storage medium | |
CN104796730A (en) | Method and device for detecting low-speed users in network video live broadcast and system | |
CN110636332A (en) | Video processing method and device and computer readable storage medium | |
CN108810596B (en) | Video editing method and device and terminal | |
WO2018223793A1 (en) | Method, device, and system for transmitting webpage images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||