CN112203116A - Video generation method, video playing method and related equipment - Google Patents
Video generation method, video playing method and related equipment
- Publication number
- CN112203116A (application number CN201910608929.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- address information
- client
- plug-in
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/64—Addressing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/654—Transmission by server directed to the client
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention discloses a video generation method, a video playing method and related equipment. The video generation method is applied to a client local proxy server configured at a video playing client, and comprises the following steps: receiving first address information and second address information returned by a video server in response to a playing request for a target video sent by the video playing client, wherein the first address information and the second address information are respectively the address information of an original video file matched with the target video and the address information of a plug-in audio track file; acquiring video data from the original video file according to the first address information; acquiring plug-in audio data from the plug-in audio track file according to the second address information; and generating a target video file from the video data and the plug-in audio data, so that the video playing client can acquire and play the target video file. The invention solves the compatibility problems of multi-instance players and the difficulty of keeping audio and video synchronized.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video generation method, a video playing method, and a related device.
Background
At present, in order to improve the experience of watching videos on a terminal, a plug-in audio track file with enhanced audio effects is usually provided for a video. In the related art, the terminal generally uses two players: one plays the audio of the plug-in audio track file, and the other plays the original video with its own audio muted, so that the plug-in audio track is heard while the video is shown. With this playing scheme, at least two players need to be configured on the terminal; such a multi-instance player has compatibility problems, and the audio and the video played by separate players are difficult to keep synchronized.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a video generating method, a video playing method, and related devices. The technical scheme is as follows:
in one aspect, a video generation method is provided and applied to a client-side local proxy server, where the client-side local proxy server is configured at a video playing client, and the method includes:
receiving first address information and second address information returned by a video server in response to a playing request for a target video sent by the video playing client, wherein the first address information is the address information of an original video file matched with the target video, and the second address information is the address information of a plug-in audio track file matched with the target video;
acquiring video data in the original video file according to the first address information;
acquiring plug-in audio data from the plug-in audio track file according to the second address information;
and generating a target video file according to the video data and the plug-in audio data so that the video playing client can acquire and play the target video file.
In one aspect, a video playing method is provided, which is applied to a video playing client configured with a client local proxy server, and the method includes:
sending a playing request of a target video to a video server so that the video server determines, according to the playing request, first address information of an original video file matched with the target video and second address information of a plug-in audio track file;
receiving, by the client local proxy server, the first address information and the second address information returned by the video server; acquiring video data in the original video file according to the first address information; acquiring plug-in audio data from the plug-in audio track file according to the second address information; and generating a target video file according to the video data and the plug-in audio data;
and acquiring the target video file from the local proxy server of the client side, and playing the target video file.
In another aspect, a video generating apparatus is provided, which is applied to a client-side local proxy server configured at a video playing client, and includes:
the first receiving module is used for receiving first address information and second address information returned by a video server in response to a playing request for a target video sent by a video playing client, wherein the first address information is address information of an original video file matched with the target video, and the second address information is address information of a plug-in audio track file matched with the target video;
the first obtaining module is used for obtaining the video data in the original video file according to the first address information;
the second acquisition module is used for acquiring the plug-in audio data in the plug-in audio track file according to the second address information;
and the generating module is used for generating a target video file according to the video data and the plug-in audio data so as to enable the video playing client to acquire and play the target video file.
Optionally, the first obtaining module includes:
a second sending module, configured to send a first content obtaining request to a content distribution network, where the first content obtaining request carries the first address information, so that the content distribution network determines an original video file matched with the first address information;
the second receiving module is used for receiving the original video file returned by the content distribution network;
and the first decapsulation module is used for decapsulating the original video file to obtain video data and audio data.
Optionally, the apparatus further comprises:
the judging module is used for judging whether the sound effect level of the plug-in audio data is higher than the sound effect level of the audio data; and if so, executing the step of generating the target video file according to the video data and the plug-in audio data.
Optionally, the second obtaining module includes:
a third sending module, configured to send a second content obtaining request to a content distribution network, where the second content obtaining request carries the second address information, so that the content distribution network determines a plug-in audio track file matched with the second address information;
the third receiving module is used for receiving the plug-in audio track files returned by the content distribution network;
and the second decapsulation module is used for decapsulating the plug-in audio track file to obtain plug-in audio data.
Optionally, the add-on audio track files include audio track files corresponding to different audio tracks;
correspondingly, the second decapsulation module comprises:
the fourth sending module is used for sending the audio track identifier of the audio track file to the video playing client so that the video playing client can display the audio track identifier;
a track identity determination module for determining a target track identity in response to a selection signal for the track identity;
and the decapsulation submodule is used for decapsulating the target audio track file corresponding to the target audio track identifier to obtain target audio data, and the target audio data is used as the plug-in audio data.
In another aspect, a video playing apparatus is provided, where the video playing apparatus is applied to a video playing client, and the video playing client is configured with a client local proxy server, and the apparatus includes:
the system comprises a first sending module, a second sending module and a third sending module, wherein the first sending module is used for sending a playing request of a target video to a video server so that the video server determines first address information of an original video file matched with the target video and second address information of a plug-in audio track file according to the playing request;
the target video generation module is used for receiving the first address information and the second address information returned by the video server by utilizing the client local proxy server; acquiring video data in the original video file according to the first address information, and acquiring plug-in audio data in the plug-in audio track file according to the second address information; generating a target video file according to the video data and the plug-in audio data;
and the playing module is used for acquiring the target video file from the client local proxy server and playing the target video file.
In another aspect, a terminal is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the above video generation method.
In another aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement a video generation method as described above.
According to the embodiment of the invention, the client local proxy server receives the address information returned by the video server, acquires the corresponding video data in the original video file and the plug-in audio data in the plug-in audio track file according to the address information, and generates a target video file from the video data and the plug-in audio data for the video player to play. The effect of playing a plug-in audio track is therefore achieved with a single player, which solves the compatibility problems and the audio-video synchronization difficulties of a multi-instance player.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the invention;
fig. 2 is a schematic flow chart of a video generation method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method by which a client local proxy server decapsulates a plug-in audio track file to obtain plug-in audio data according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a specific example of a video generation method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a video playing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of a hardware structure of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, a schematic diagram of an implementation environment according to an embodiment of the present invention is shown, where the implementation environment may include a video playing client 110, a video server 120, and a content distribution network 130.
The video playing client 110 is configured with a client local proxy server 111, where the client local proxy server 111 may be a local HTTP (HyperText Transfer Protocol) server created at the client, and may be packaged together with the client or separate from it. The client local proxy server 111 may communicate with the video playing client over a local data communication path 112. The video playing client 110 may be an application installed in a terminal with a video playing function, and the terminal may be, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a digital assistant, a smart wearable device, and the like.
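As an illustration of how such a client local proxy server 111 might be created, the following is a minimal sketch that starts a plain HTTP server bound to the loopback interface using Python's standard library; the port number, handler logic and content type are assumptions made for the example, not details taken from the disclosure.

```python
import http.server
import socketserver
import threading

class LocalProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # In the scheme described above, this handler would contact the video
        # server and the content distribution network, remux the original video
        # with the plug-in audio track, and return the target video file here.
        self.send_response(200)
        self.send_header("Content-Type", "application/vnd.apple.mpegurl")
        self.end_headers()
        self.wfile.write(b"#EXTM3U\n")  # placeholder playlist body

def start_local_proxy(port: int = 8123) -> socketserver.TCPServer:
    """Start the client local proxy server on the loopback interface only."""
    server = socketserver.TCPServer(("127.0.0.1", port), LocalProxyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# proxy = start_local_proxy()  # the player then requests http://127.0.0.1:8123/...
```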
The video server 120 may be a server providing an online video playing service, which may be an independently operating server, or a server cluster composed of a plurality of servers.
The content distribution network 130 is an intelligent virtual network layered on top of the existing Internet, formed by placing node servers at various positions of the network; it can redirect a user's request in real time to the service node closest to the user according to comprehensive information such as network traffic, the connection and load condition of each node, the distance to the user, and response time. In the embodiment of the present invention, the content distribution network 130 may store the video files, audio files, and the like required by the video playing client 110 to play videos.
Both the video server 120 and the content distribution network 130 may communicate with the video playback client 110 over a network, which may be a wired network or a wireless network.
Referring to fig. 2, a flow chart of a video generation method according to an embodiment of the present invention is shown, where the method can be applied to the client-side local proxy server in fig. 1. It is noted that the present specification provides the method steps as described in the examples or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible orders and does not represent the only order of execution. In an actual system or product, the steps may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 2, the method may include:
s201, receiving first address information and second address information returned by a video server aiming at a playing request of a target video sent by the video playing client.
The first address information is the address information of an original video file matched with the target video, and the second address information is the address information of an add-on audio track file matched with the target video.
In this embodiment of the present specification, when a user wants to watch a certain video in the video playing client, the user may click to play that target video, so that the video playing client sends a playing request for the target video to the video server in response to the user's click operation. Correspondingly, the video server receives the playing request for the target video and determines, according to the playing request, first address information of an original video file matched with the target video and second address information of a plug-in audio track file matched with the target video. In practical applications, the video server may store the first address information and the second address information indexed by video identifier; the playing request may carry the video identifier of the target video, and the video server can find the first address information and the second address information by matching that video identifier. The video identifier allows the video server to uniquely identify a specific video.
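By way of illustration only, the mapping kept on the video server side could be as simple as the dictionary sketched below; the video identifier, field names and URLs are hypothetical and not part of the disclosure.

```python
# Hypothetical mapping maintained by the video server:
# video identifier -> (first address information, second address information).
VIDEO_INDEX = {
    "vid_0001": {
        "original_video_url": "https://cdn.example.com/vid_0001/index.m3u8",
        "plugin_audio_url": "https://cdn.example.com/vid_0001/dolby.ac3",
    },
}

def resolve_addresses(video_id: str) -> dict:
    """Return the first and second address information for a playing request."""
    entry = VIDEO_INDEX.get(video_id)
    if entry is None:
        raise KeyError(f"unknown video identifier: {video_id}")
    return entry
```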
A plug-in audio track file refers to an audio data file that matches the target video but is independent of the corresponding original video file. The plug-in audio track file matched with the target video can be an audio track file of a single audio track, or can comprise audio track files of a plurality of different audio tracks, where the audio track file of each audio track records audio data of a certain language (such as English, Chinese, and the like).
In the embodiment of the specification, the client local proxy server receives first address information of an original video file returned by the video server and second address information of a plug-in audio track file.
It should be noted that, if the target video has no plug-in audio track file, the video server may find only the first address information of the original video file matched with the target video. In this case, the video server may return only the first address information of the original video file, and correspondingly the client-side local proxy server receives only the returned first address information of the original video file.
S203, acquiring the video data in the original video file according to the first address information.
In this embodiment of the present description, after receiving the first address information, the client-side local proxy server may send a first content acquisition request to the content distribution network, where the first content acquisition request carries the first address information, so that the content distribution network determines an original video file matched with the first address information, and sends the original video file to the client-side local proxy server; correspondingly, the client local proxy server receives the original video file returned by the content distribution network.
In practical applications, the video data and the audio data are both encapsulated in an original video file, and the original video file corresponds to a multimedia container having a certain video container format, which may include but is not limited to HLS, AVI, MP4, MKV, FLV, RM/RMVB, MOV, TS, VOB, DAT, etc. The video data and the audio data in the original video file can be in a compressed format or a non-compressed format, wherein the compressed format of the video data can be but is not limited to MPEG-2, MPEG-4, H.264, VC-1, RM/RMVB and the like; the compression format of the audio data may include, but is not limited to, MPA, AAC, AC-3, DTS, and the like.
Therefore, in step S203, the client-side local proxy server further needs to decapsulate the original video file to separate the video data and the audio data, so that the video data alone can be obtained. Specifically, the original video file can be unpacked by using the multimedia processing tool FFmpeg, an open-source suite of computer programs for recording, converting and streaming digital audio and video that includes the audio/video codec library libavcodec; FFmpeg can also integrate other codec libraries, such as x264, faac, lame, and fdk-aac.
S205, acquiring the plug-in audio data in the plug-in audio track file according to the second address information.
In this embodiment of the present specification, after receiving the second address information, the client-side local proxy server may send a second content acquisition request to the content distribution network, where the second content acquisition request carries the second address information, so that the content distribution network determines a plug-in audio track file matched with the second address information, and sends the plug-in audio track file to the client-side local proxy server; correspondingly, the local proxy server of the client receives the plug-in audio track file returned by the content distribution network.
In practice, the plug-in audio track file is also an encapsulated audio container format file, whose format may include, but is not limited to, MP3, WAV, AAC, APE, FLAC, and so on. The plug-in audio data in the plug-in audio track file can be in a compressed format or an uncompressed format, where the audio compression format may include but is not limited to MPA, AAC, AC-3, DTS, and the like. Therefore, in step S205, the client-side local proxy server further needs to decapsulate the plug-in audio track file to obtain the plug-in audio data; specifically, the aforementioned multimedia processing tool FFmpeg may also be used to decapsulate the plug-in audio track file. A sketch of this decapsulation follows.
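The following is a minimal sketch of how steps S203 and S205 could decapsulate the files, assuming the ffmpeg command-line tool is available on the device and is invoked through Python's subprocess module; the file names and stream selectors in the comments are illustrative only.

```python
import subprocess

def extract_stream(src: str, stream_spec: str, dst: str) -> None:
    """Copy one elementary stream out of a container without re-encoding.

    stream_spec uses ffmpeg's -map syntax, e.g. "0:v:0" for the first video
    stream or "0:a:0" for the first audio stream of the input file.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-map", stream_spec, "-c", "copy", dst],
        check=True,
    )

# Step S203: video data from the original video file (H.264 assumed here).
# extract_stream("original.mp4", "0:v:0", "video.h264")
# Step S205: plug-in audio data from the plug-in audio track file (AC-3 assumed).
# extract_stream("plugin_track.mp4", "0:a:0", "plugin_audio.ac3")
```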
In some embodiments, the plug-in audio track files returned by the content delivery network to the client-side local proxy server include audio track files corresponding to different audio tracks, for example, a plug-in English track file, a plug-in Chinese track file, and so on. In this case, the client-side local proxy server may adopt the method of fig. 3 when unpacking the plug-in audio track files to obtain the plug-in audio data. As shown in fig. 3, the method may include:
s301, sending the audio track identifier of the audio track file to a video playing client, so that the video playing client displays the audio track identifier.
The audio track identifier may be the language name of the audio track, such as Chinese, English, etc., or may be a language code, such as chs (Chinese), eng (English), etc. The client local proxy server sends the audio track identifier of each audio track file to the video playing client so that the video playing client can display the audio track identifiers, and the user can then make a selection according to his or her language preference.
S303, determining a target audio track identity in response to a selection signal for said audio track identity.
In particular, when a user selects a certain track identifier, e.g., clicks on it, the client local proxy server may determine the target track identifier in response to the selection signal for that track identifier. For example, if the user clicks on Chinese, the client local proxy server determines Chinese as the target track identifier.
S305, de-encapsulating the target audio track file corresponding to the target audio track identifier to obtain target audio data, wherein the target audio data is used as the plug-in audio data.
Specifically, the client-side local proxy server only decapsulates the target audio track file corresponding to the target audio track identifier, such as Chinese, so as to obtain target audio data, and uses the target audio data as the obtained plug-in audio data.
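The selection step itself can be illustrated with the small helper below; the track identifiers and file names are assumed for the example and are not mandated by the method.

```python
def pick_plugin_track(track_files: dict, target_track_id: str) -> str:
    """Return the plug-in audio track file matching the identifier the user chose.

    track_files maps a track identifier (e.g. "chs", "eng") to the audio track
    file received for that track; only the selected file is decapsulated.
    """
    if target_track_id not in track_files:
        raise KeyError(f"no plug-in audio track for identifier {target_track_id!r}")
    return track_files[target_track_id]

# Example: the user clicks "chs", so only the Chinese track file is unpacked.
# target_file = pick_plugin_track({"chs": "audio_chs.mp4", "eng": "audio_eng.mp4"}, "chs")
```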
It should be noted that step S203 and step S205 may be executed separately as described above, or may be combined into one step and executed simultaneously. That is, the client-side local proxy server may send a single content obtaining request to the content distribution network that carries both the first address information and the second address information; the content distribution network determines the corresponding original video file and plug-in audio track file according to the first address information and the second address information and returns them to the client-side local proxy server together, and after receiving them the client-side local proxy server decapsulates the original video file and the plug-in audio track file respectively to obtain the corresponding video data and plug-in audio data.
And S207, generating a target video file according to the video data and the plug-in audio data so that the video playing client can acquire and play the target video file.
Specifically, the video data and the plug-in audio data may be repackaged by using the multimedia processing tool FFmpeg to obtain the target video file. The video container format of the repackaged target video file may be the same as that of the original video file, for example MP4 or HLS, or may be different from it; for example, the target video file may use a preset video container format, which may be, but is not limited to, HLS, AVI, MP4, MKV, FLV, RM/RMVB, MOV, TS, VOB, DAT, and the like.
The client local proxy server can send the target video file to the video playing client after generating the target video file, and the video playing client can perform playing operation of the target video file after acquiring the target video file.
In practical applications, before the client local proxy server generates the target video file, it can judge whether the sound effect level of the plug-in audio data is higher than that of the audio data in the original video file; if so, the step of generating the target video file according to the video data and the plug-in audio data is executed. Specifically, the sound effect levels may include an ordinary sound effect, a Dolby sound effect, and the like, which ensures that the regenerated target video file offers a better sound effect experience. For example, when the sound effect level of the plug-in audio data is a Dolby sound effect and the sound effect level of the audio data in the original video file is an ordinary sound effect, the step of generating the target video file according to the video data and the plug-in audio data is executed; when the sound effect levels of the plug-in audio data and of the audio data in the original video file are both ordinary sound effects, the step of generating the target video file need not be executed.
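A minimal sketch of this comparison follows, assuming a simple two-level scale in which Dolby outranks the ordinary sound effect; the level names are placeholders.

```python
# Assumed ranking of sound effect levels; a higher value means a better audio effect.
SOUND_EFFECT_RANK = {"normal": 0, "dolby": 1}

def should_generate_target(plugin_level: str, original_level: str) -> bool:
    """Generate the target video file only if the plug-in audio outranks the
    audio data already contained in the original video file."""
    return SOUND_EFFECT_RANK.get(plugin_level, 0) > SOUND_EFFECT_RANK.get(original_level, 0)

# should_generate_target("dolby", "normal")   -> True
# should_generate_target("normal", "normal")  -> False
```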
In this example, the client-side local proxy server may provide a local proxy service and a re-encapsulation service. The local proxy service may be used to interact with the video server and the content distribution network to obtain the original video file and the plug-in audio track file; the re-encapsulation service may be used to decapsulate the original video file and the plug-in audio track file, and to repackage the video data and the plug-in audio data.
As shown in fig. 4, the local proxy service acquires an original video file in the HLS format, which contains compressed video data (in the H.264 format) and compressed audio data (in the AAC format), and acquires a plug-in audio track file in the MP3 container format, which contains plug-in audio data compressed in the AC-3 format. In this example, the video data in the original video file may be enhanced Dolby Vision video or ordinary video, and the audio data in it is ordinary, non-enhanced audio; the plug-in audio data in the plug-in audio track file is enhanced Dolby Audio. The local proxy service forwards the original HLS file and the MP3-format plug-in audio track file to the re-encapsulation service. The re-encapsulation service first decapsulates the original video file and the plug-in audio track file to obtain the H.264 video data and the AC-3 plug-in audio data, then repackages them into a target video file in the HLS format and returns it to the local proxy service, which sends the target video file to the video playing client.
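To make the re-encapsulation step concrete, the sketch below stream-copies the H.264 video and the AC-3 plug-in audio into an HLS target with the ffmpeg command-line tool; the options and file names describe one possible implementation under the assumption that both streams can simply be stream-copied, and are not taken from the disclosure.

```python
import subprocess

def remux_to_hls(original_video: str, plugin_audio: str, out_playlist: str) -> None:
    """Mux the video stream of the original file with the plug-in audio track
    into an HLS target video file, copying both streams without re-encoding."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", original_video,       # original file: H.264 video + AAC audio
            "-i", plugin_audio,         # plug-in audio track file: AC-3 audio
            "-map", "0:v:0",            # keep only the video of the first input
            "-map", "1:a:0",            # take the audio from the plug-in track
            "-c", "copy",               # no transcoding, only re-encapsulation
            "-f", "hls",
            "-hls_playlist_type", "vod",
            out_playlist,
        ],
        check=True,
    )

# remux_to_hls("original/index.m3u8", "plugin_dolby.ac3", "target/index.m3u8")
```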
In practical applications, because Dolby Audio is delivered as an independent plug-in audio stream, the method of the embodiment of the present invention can combine an ordinary video and the corresponding Dolby Audio into a new video stream on the client side for subsequent playing. The effect of playing a plug-in audio track is thus achieved with a single player, and the compatibility problems and audio-video synchronization difficulties of a multi-instance player on the terminal side are avoided.
In addition, because the client local proxy server is configured on the client side, the re-encapsulation in the video generation method is performed on the client side, which avoids wasting server resources on re-encapsulation in the background and also improves flexibility.
Referring to fig. 5, a flow chart of a video playing method according to an embodiment of the present invention is shown, where the video playing method can be applied to the video playing client in fig. 1, and as shown in fig. 5, the method can include:
s501, sending a playing request of a target video to a video server, so that the video server determines first address information of an original video file matched with the target video and second address information of a plug-in audio track file according to the playing request.
In this embodiment of the present description, when a user wants to watch a certain video in a video playing client, the user may click to play the target video, so that the video playing client sends a playing request of the target video to the video server in response to a click operation of the user. Correspondingly, the video server receives a playing request of the target video, and determines first address information of an original video file matched with the target video and second address information of a plug-in audio track file matched with the target video according to the playing request.
In practical application, the video server may store the first address information and the second address information corresponding to the video identifier according to the video identifier, the play request may carry the video identifier of the target video, and the video server may find the first address information and the second address information by matching the video identifier. Wherein the video identification can be used for the video server to uniquely identify a specific video.
S503, receiving the first address information and the second address information returned by the video server by using a local proxy server of a client; acquiring video data in the original video file according to the first address information; acquiring the external audio data in the external audio track file according to the second address information; and generating a target video file according to the video data and the plug-in audio data.
The specific content of step S503 may refer to the corresponding description in the foregoing method embodiment shown in fig. 2, and is not described herein again.
And S505, acquiring the target video file from the client local proxy server, and playing the target video file.
Specifically, the client local proxy server may send the target video file to the video playing client after generating the target video file, and accordingly, the video playing client receives the target video file and executes a playing operation on the target video file.
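In an implementation, "acquiring the target video file from the client local proxy server" would typically mean that the single player instance simply opens a loopback URL served by the proxy; the port, path and player call below are hypothetical.

```python
LOCAL_PROXY_PORT = 8123  # assumed port of the client local proxy server

def local_play_url(video_id: str) -> str:
    """URL handed to the single player instance; the client local proxy server
    answers it with the regenerated target video file."""
    return f"http://127.0.0.1:{LOCAL_PROXY_PORT}/play/{video_id}/index.m3u8"

# player.set_data_source(local_play_url("vid_0001"))  # hypothetical player API
```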
According to the video playing method provided by the embodiment of the present invention, thanks to the relay processing of the client local proxy server, the video playing client does not need to be aware of the audio data in the original video file during playing and directly plays the video data and the plug-in audio data in the regenerated target video file. The plug-in audio track is thus played in synchronization with the video, which brings a better experience to the user and avoids the compatibility problems and audio-video synchronization difficulties of a multi-instance player.
Corresponding to the video generation methods provided by the above embodiments, embodiments of the present invention further provide a video generation apparatus, and since the video generation apparatus provided by the embodiments of the present invention corresponds to the video generation methods provided by the above embodiments, the embodiments of the video generation method described above are also applicable to the video generation apparatus provided by the present embodiment, and are not described in detail in the present embodiment.
Referring to fig. 6, it is a schematic structural diagram of a video generating apparatus according to an embodiment of the present invention, where the apparatus has a function of implementing the video generating method in the foregoing method embodiment, where the function may be implemented by hardware, or may be implemented by hardware executing corresponding software, and the apparatus may be applied to a client-side local proxy server, where the client-side local proxy server is configured at a video playing client. As shown in fig. 6, the apparatus may include:
a first receiving module 610, configured to receive first address information and second address information, which are returned by a video server according to a playing request of a target video sent by a video playing client, where the first address information is address information of an original video file matched with the target video, and the second address information is address information of an add-on audio track file matched with the target video;
a first obtaining module 620, configured to obtain video data in the original video file according to the first address information;
a second obtaining module 630, configured to obtain the external audio data in the external audio track file according to the second address information;
the generating module 640 is configured to generate a target video file according to the video data and the plug-in audio data, so that the video playing client acquires and plays the target video file.
Optionally, the first obtaining module 620 may include:
a second sending module, configured to send a first content obtaining request to a content distribution network, where the first content obtaining request carries the first address information, so that the content distribution network determines an original video file matched with the first address information;
the second receiving module is used for receiving the original video file returned by the content distribution network;
and the first decapsulation module is used for decapsulating the original video file to obtain video data and audio data.
In some embodiments, the apparatus may further comprise:
the judging module is used for judging whether the sound effect level of the plug-in audio data is higher than the sound effect level of the audio data; when the result of the determination is yes, the function of the generation module 640 is performed.
Optionally, the second obtaining module 630 may include:
a third sending module, configured to send a second content obtaining request to a content distribution network, where the second content obtaining request carries the second address information, so that the content distribution network determines a plug-in audio track file matched with the second address information;
the third receiving module is used for receiving the plug-in audio track files returned by the content distribution network;
and the second decapsulation module is used for decapsulating the plug-in audio track file to obtain plug-in audio data.
In some embodiments, the add-on audio track files include audio track files corresponding to different audio tracks;
accordingly, the second decapsulation module may include:
the fourth sending module is used for sending the audio track identifier of the audio track file to the video playing client so that the video playing client can display the audio track identifier;
a track identity determination module for determining a target track identity in response to a selection signal for the track identity;
and the decapsulation submodule is used for decapsulating the target audio track file corresponding to the target audio track identifier to obtain target audio data, and the target audio data is used as the plug-in audio data.
Please refer to fig. 7, which is a schematic structural diagram illustrating a video playing apparatus according to an embodiment of the present invention, where the apparatus has a function of implementing the video playing method in the foregoing method embodiment, the function may be implemented by hardware, or may be implemented by hardware executing corresponding software, and the apparatus may be applied to a video playing client, where the video playing client is configured with a client local proxy server. As shown in fig. 7, the apparatus may include:
a first sending module 710, configured to send a play request of a target video to a video server, so that the video server determines, according to the play request, first address information of an original video file matched with the target video and second address information of a plug-in audio track file;
a target video generating module 720, configured to receive, by using the client-side local proxy server, the first address information and the second address information returned by the video server; acquiring video data in the original video file according to the first address information, and acquiring plug-in audio data in the plug-in audio track file according to the second address information; generating a target video file according to the video data and the plug-in audio data;
the playing module 730 is configured to obtain the target video file from the client local proxy server, and play the target video file.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
With the video generating apparatus and the video playing apparatus of the embodiments of the present invention, the client local proxy server receives the address information returned by the video server, acquires the corresponding video data in the original video file and the plug-in audio data in the plug-in audio track file according to the address information, and generates a target video file from the video data and the plug-in audio data for the video player to play. A single player can thus also play a plug-in audio track, which avoids the compatibility problems of a multi-instance player and the problem that the audio and the video cannot be synchronized, and helps improve the user experience.
An embodiment of the present invention provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the video generation method provided in the foregoing method embodiment.
The memory may be used to store software programs and modules, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs needed by functions, and the like; the data storage area may store data created according to the use of the apparatus, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The method provided by the embodiment of the present invention can be executed in a computer terminal, a server, or a similar computing device. Taking execution on a terminal as an example, fig. 8 is a block diagram of the hardware structure of a terminal running a video generation method according to an embodiment of the present invention. Specifically:
the terminal 800 may include RF (Radio Frequency) circuitry 810, memory 820 including one or more computer-readable storage media, an input unit 830, a display unit 840, a video sensor 850, audio circuitry 860, a WiFi (wireless fidelity) module 870, a processor 880 including one or more processing cores, and a power supply 80. Those skilled in the art will appreciate that the terminal structure shown in fig. 8 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 810 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information from a base station and then processing the received downlink information by the one or more processors 880; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 810 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 executes various functional applications and data processing by operating the software programs and modules stored in the memory 820. The memory 820 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as video data, a phone book, etc.) created according to the use of the terminal 800, and the like. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 820 may also include a memory controller to provide the processor 880 and the input unit 830 access to the memory 820.
The input unit 830 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, the input unit 830 may include an image input device 831 and other input devices 832. The image input device 831 may be a camera or a photoelectric scanning device. The input unit 830 may include other input devices 832 in addition to the image input device 831. In particular, other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by or provided to a user and various graphical user interfaces of the terminal 800, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 840 may include a Display panel 841, and the Display panel 841 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like, as an option.
The terminal 800 can include at least one video sensor 850 for obtaining video information of a user. The terminal 800 can also include other sensors (not shown), such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 841 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 841 and/or backlight when the terminal 800 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the terminal 800, further description is omitted here.
WiFi belongs to short-range wireless transmission technology, and the terminal 800 can help the user send and receive e-mails, browse web pages, access streaming media, etc. through the WiFi module 870, and it provides the user with wireless broadband internet access. Although fig. 8 shows WiFi module 870, it is understood that it does not belong to the essential constitution of terminal 800 and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 880 is a control center of the terminal 800, connects various parts of the entire handset using various interfaces and lines, and performs various functions of the terminal 800 and processes data by operating or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby integrally monitoring the handset. Optionally, processor 880 may include one or more processing cores; preferably, the processor 880 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 880.
The terminal 800 also includes a power supply 80 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 880 via a power management system that provides management of charging, discharging, and power consumption. The power supply 80 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the terminal 800 may further include a bluetooth module or the like, which is not described in detail herein.
In this embodiment, the terminal 800 further comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing the video generation method provided by the above-described method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium, which may be disposed in a terminal to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a video generation method, and the at least one instruction, the at least one program, the code set, or the set of instructions are loaded and executed by the processor to implement the video generation method provided by the above-mentioned method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
It should be noted that the order of the above embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments. Specific embodiments have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are substantially similar to the method embodiments, so their description is relatively brief; for relevant details, reference may be made to the corresponding parts of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A video generation method applied to a client local proxy server configured at a video playing client, the method comprising:
receiving first address information and second address information returned by a video server in response to a playing request for a target video sent by the video playing client, wherein the first address information is address information of an original video file matched with the target video, and the second address information is address information of a plug-in audio track file matched with the target video;
acquiring video data in the original video file according to the first address information;
acquiring plug-in audio data in the plug-in audio track file according to the second address information;
and generating a target video file according to the video data and the plug-in audio data so that the video playing client can acquire and play the target video file.
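For illustration only, the following sketch shows one possible way a client local proxy server could realize the flow of claim 1 with off-the-shelf tools. It assumes ffmpeg is installed and uses placeholder file names; the claim itself does not prescribe any particular muxing tool or container format.

```python
# Minimal sketch, not the claimed implementation: download the files named by
# the first and second address information, then remux the original video
# stream with the plug-in audio track. Assumes ffmpeg is on PATH.
import subprocess
import urllib.request

def generate_target_video(first_address: str, second_address: str,
                          output_path: str = "target_video.mp4") -> str:
    # Fetch the original video file (first address information).
    urllib.request.urlretrieve(first_address, "original_video.mp4")
    # Fetch the plug-in audio track file (second address information).
    urllib.request.urlretrieve(second_address, "plugin_audio.m4a")
    # Remux: keep the video stream of the original file and replace its audio
    # with the plug-in audio track, without re-encoding either stream.
    subprocess.run([
        "ffmpeg", "-y",
        "-i", "original_video.mp4",
        "-i", "plugin_audio.m4a",
        "-map", "0:v:0",      # video stream from the original video file
        "-map", "1:a:0",      # audio stream from the plug-in audio track file
        "-c", "copy",         # stream copy, no transcoding
        output_path,
    ], check=True)
    return output_path
```

The resulting target video file can then be served back to the video playing client, for example over a local HTTP endpoint exposed by the proxy.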
2. The video generation method according to claim 1, wherein the acquiring video data in the original video file according to the first address information includes:
sending a first content acquisition request to a content distribution network, wherein the first content acquisition request carries the first address information, so that the content distribution network determines the original video file matched with the first address information;
receiving the original video file returned by the content distribution network;
and decapsulating the original video file to obtain video data and audio data.
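The decapsulation step of claim 2 can be pictured as splitting the container returned by the content distribution network into its elementary streams. The sketch below, a simple illustration that assumes ffprobe (shipped with ffmpeg) is available and uses a placeholder file name, merely lists the video and audio streams found in the original video file.

```python
# Minimal sketch: inspect the streams contained in the original video file
# returned by the content distribution network.
import json
import subprocess

def list_streams(path: str = "original_video.mp4"):
    result = subprocess.run([
        "ffprobe", "-v", "error",
        "-show_entries", "stream=index,codec_type,codec_name",
        "-of", "json", path,
    ], capture_output=True, text=True, check=True)
    streams = json.loads(result.stdout)["streams"]
    # Separate the decapsulated elementary streams by type.
    video_streams = [s for s in streams if s["codec_type"] == "video"]
    audio_streams = [s for s in streams if s["codec_type"] == "audio"]
    return video_streams, audio_streams
```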
3. The video generation method according to claim 2, wherein before generating a target video file according to the video data and the plug-in audio data, the method further comprises:
determining whether the sound effect level of the plug-in audio data is higher than the sound effect level of the audio data;
and if so, executing the step of generating the target video file according to the video data and the plug-in audio data.
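The claims do not define concrete sound effect levels, so the following sketch uses a purely hypothetical ranking (for example, stereo AAC below Dolby Digital 5.1 below Dolby Atmos) only to illustrate the comparison described in claim 3.

```python
# Minimal sketch with an assumed, hypothetical ranking of sound effect levels
# (higher number = richer sound effect); the patent defines no such table.
SOUND_EFFECT_LEVELS = {
    "aac_stereo": 1,
    "ac3_5_1": 2,       # e.g. Dolby Digital 5.1
    "eac3_atmos": 3,    # e.g. Dolby Atmos
}

def should_use_plugin_audio(plugin_effect: str, original_effect: str) -> bool:
    # Generate the target video file from the plug-in audio only when its
    # sound effect level exceeds that of the original audio data.
    return (SOUND_EFFECT_LEVELS.get(plugin_effect, 0)
            > SOUND_EFFECT_LEVELS.get(original_effect, 0))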
4. The video generation method according to claim 1, wherein the acquiring of the plug-in audio data in the plug-in audio track file according to the second address information comprises:
sending a second content acquisition request to a content distribution network, wherein the second content acquisition request carries the second address information, so that the content distribution network determines the plug-in audio track file matched with the second address information;
receiving the plug-in audio track file returned by the content distribution network;
and decapsulating the plug-in audio track file to obtain the plug-in audio data.
5. The video generation method of claim 4, wherein the plug-in audio track file comprises audio track files corresponding to different audio tracks;
the decapsulating the plug-in audio track file to obtain the plug-in audio data includes:
sending the audio track identifiers of the audio track files to the video playing client so that the video playing client displays the audio track identifiers;
determining a target audio track identifier in response to a selection signal for the audio track identifiers;
and decapsulating the target audio track file corresponding to the target audio track identifier to obtain target audio data, wherein the target audio data is used as the plug-in audio data.
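The track selection of claim 5 can be illustrated with a simple lookup keyed by audio track identifier. The identifiers and file names below are placeholders chosen only for this example.

```python
# Minimal sketch: pick the plug-in audio track chosen by the user. In the
# claimed flow the identifiers are shown by the video playing client and the
# selection signal comes back from the user interface.
def select_plugin_track(track_files: dict, target_track_id: str) -> str:
    # track_files maps audio track identifiers to plug-in audio track files,
    # e.g. {"mandarin_5.1": "audio_cn.m4a", "english_stereo": "audio_en.m4a"}.
    if target_track_id not in track_files:
        raise KeyError(f"unknown audio track identifier: {target_track_id}")
    # The returned track file is then decapsulated to obtain the target
    # audio data used as the plug-in audio data.
    return track_files[target_track_id]
```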
6. A video playing method applied to a video playing client configured with a client local proxy server, the method comprising:
sending a playing request for a target video to a video server, so that the video server determines, according to the playing request, first address information of an original video file matched with the target video and second address information of a plug-in audio track file;
receiving, by utilizing the client local proxy server, the first address information and the second address information returned by the video server; acquiring video data in the original video file according to the first address information; acquiring plug-in audio data in the plug-in audio track file according to the second address information; and generating a target video file according to the video data and the plug-in audio data;
and acquiring the target video file from the client local proxy server and playing the target video file.
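For illustration, the sketch below walks through the client side of claim 6. The endpoints, JSON field names, and local proxy port are assumptions made for the example; the claim does not specify a concrete protocol between the video playing client, the video server, and the client local proxy server.

```python
# Minimal sketch of the video playing client flow under assumed endpoints.
import json
import urllib.request

def play_target_video(video_server: str, video_id: str,
                      local_proxy: str = "http://127.0.0.1:8080") -> str:
    # 1. Send the playing request for the target video to the video server.
    with urllib.request.urlopen(f"{video_server}/play?video_id={video_id}") as resp:
        reply = json.load(resp)
    first_address = reply["original_video_url"]    # original video file
    second_address = reply["plugin_audio_url"]     # plug-in audio track file
    # 2. Hand both addresses to the client local proxy server, which fetches,
    #    decapsulates, and remuxes them into the target video file.
    body = json.dumps({"first": first_address, "second": second_address}).encode()
    req = urllib.request.Request(f"{local_proxy}/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
    # 3. The player then reads the target video file back from the local proxy.
    return f"{local_proxy}/target_video.mp4"
```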
7. A video generation apparatus applied to a client local proxy server configured at a video playing client, the apparatus comprising:
a first receiving module, configured to receive first address information and second address information returned by a video server in response to a playing request for a target video sent by the video playing client, wherein the first address information is address information of an original video file matched with the target video, and the second address information is address information of a plug-in audio track file matched with the target video;
a first acquisition module, configured to acquire video data in the original video file according to the first address information;
a second acquisition module, configured to acquire plug-in audio data in the plug-in audio track file according to the second address information;
and a generation module, configured to generate a target video file according to the video data and the plug-in audio data, so that the video playing client can acquire and play the target video file.
8. A video playing apparatus applied to a video playing client configured with a client local proxy server, the apparatus comprising:
the system comprises a first sending module, a second sending module and a third sending module, wherein the first sending module is used for sending a playing request of a target video to a video server so that the video server determines first address information of an original video file matched with the target video and second address information of a plug-in audio track file according to the playing request;
a target video generation module, configured to receive, by utilizing the client local proxy server, the first address information and the second address information returned by the video server; acquire video data in the original video file according to the first address information and plug-in audio data in the plug-in audio track file according to the second address information; and generate a target video file according to the video data and the plug-in audio data;
and a playing module, configured to acquire the target video file from the client local proxy server and play the target video file.
9. A terminal comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the video generation method according to any one of claims 1 to 5.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video generation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910608929.5A CN112203116A (en) | 2019-07-08 | 2019-07-08 | Video generation method, video playing method and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112203116A true CN112203116A (en) | 2021-01-08 |
Family
ID=74004366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910608929.5A (CN112203116A, Pending) | Video generation method, video playing method and related equipment | 2019-07-08 | 2019-07-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112203116A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112911364A (en) * | 2021-01-18 | 2021-06-04 | 珠海全志科技股份有限公司 | Audio and video playing method, computer device and computer readable storage medium |
CN115103222A (en) * | 2022-06-24 | 2022-09-23 | 湖南快乐阳光互动娱乐传媒有限公司 | Video audio track processing method and related equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101002500A (en) * | 2004-08-12 | 2007-07-18 | 皇家飞利浦电子股份有限公司 | Audio source selection |
CN103702172A (en) * | 2013-12-13 | 2014-04-02 | 乐视网信息技术(北京)股份有限公司 | Method and system for carrying out dolby transcoding on AV (Audio/Video) |
US20140297882A1 (en) * | 2013-04-01 | 2014-10-02 | Microsoft Corporation | Dynamic track switching in media streaming |
CN105898354A (en) * | 2015-12-07 | 2016-08-24 | 乐视云计算有限公司 | Video file multi-audio-track storage method and device |
CN105979349A (en) * | 2015-12-03 | 2016-09-28 | 乐视致新电子科技(天津)有限公司 | Audio frequency data processing method and device |
CN106375821A (en) * | 2016-08-30 | 2017-02-01 | 北京奇艺世纪科技有限公司 | Audio and video playing method and device |
CN108259989A (en) * | 2018-01-19 | 2018-07-06 | 广州华多网络科技有限公司 | Method, computer readable storage medium and the terminal device of net cast |
US20180295427A1 (en) * | 2017-04-07 | 2018-10-11 | David Leiberman | Systems and methods for creating composite videos |
CN109495703A (en) * | 2018-11-09 | 2019-03-19 | 广州长嘉电子有限公司 | A kind of the simulated television playback method and system of plug-in coaxial audio-frequency module |
Similar Documents
Publication | Title
---|---
WO2017202348A1 (en) | Video playing method and device, and computer storage medium
WO2017008627A1 (en) | Multimedia live broadcast method, apparatus and system
US20170302990A1 (en) | Method, terminal, and system for processing data of video stream
US9924205B2 (en) | Video remote-commentary synchronization method and system, and terminal device
CN107995519B (en) | Method, device and storage medium for playing multimedia file
WO2017076143A1 (en) | Method, apparatus, and system for switching video live stream to video-on-demand data
CN107360458B (en) | Play control method, device, storage medium and terminal
CN106789562B (en) | Virtual article sending method, virtual article receiving method, virtual article sending device, virtual article receiving device and virtual article sending system
CN103391473B (en) | Method and device for providing and acquiring audio and video
CN110784771B (en) | Video sharing method and electronic equipment
CN107332976B (en) | Karaoke method, device, equipment and system
CN106254903B (en) | A kind of synchronous broadcast method of multi-medium data, apparatus and system
CN105430424A (en) | Video live broadcast method, device and system
JP2016506007A (en) | Recording method, reproducing method, apparatus, terminal, system, program, and recording medium
US10675541B2 (en) | Control method of scene sound effect and related products
WO2017215661A1 (en) | Scenario-based sound effect control method and electronic device
US20160133006A1 (en) | Video processing method and apparatus
CN106448714A (en) | Synchronous playing method of playing devices, apparatus and system thereof
CN109995743B (en) | Multimedia file processing method and terminal
WO2019076250A1 (en) | Push message management method and related products
CN111641864B (en) | Video information acquisition method, device and equipment
CN112203116A (en) | Video generation method, video playing method and related equipment
WO2014194754A1 (en) | Method, system, and terminal device for transmitting information
CN106303616B (en) | Play control method, device and terminal
CN110198452B (en) | Live video previewing method, device and system
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210108 |