CN109640125A - Video content processing method, device, server and storage medium - Google Patents
- Publication number
- CN109640125A (application number CN201811575177.9A)
- Authority
- CN
- China
- Prior art keywords
- video
- audio
- target
- playing
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
- H04N21/2393—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/43615—Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43637—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47202—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Abstract
This application discloses a video content processing method, apparatus, server, and storage medium, belonging to the field of multimedia technology. The method includes: receiving a target video sent by a video platform, where the target video is a video uploaded to the video platform by a first terminal, and the video platform is used to instruct a video application to play the picture information and audio information of the target video when playing the target video; receiving an audio playing request sent by a second terminal, where the audio playing request is used to request to start playing a target audio; and sending a picture playing instruction to the second terminal according to the target video, where the picture playing instruction is used to instruct an audio application to play the picture information of the target video when playing the target audio. By publishing the target video uploaded by the first terminal on both the video platform and the music platform, the embodiments of the application improve the utilization rate of the target video.
Description
Technical Field
Embodiments of the application relate to the field of multimedia technology, and in particular to a video content processing method, a video content processing apparatus, a server, and a storage medium.
Background
A video content processing method covers the publishing and/or playing of video content.
In the related art, a user records a video through a terminal and uploads the recorded video to a server. The server reviews the received video and, if the review passes, publishes the video to the corresponding video application, through which the terminal can then play it.
Disclosure of Invention
The embodiments of the application provide a video content processing method, apparatus, server, and storage medium, which can solve the problem in the related art that an uploaded video, being published only on a video application and thus used only for video playing, has a low utilization rate. The technical solution is as follows:
according to an aspect of the embodiments of the present application, there is provided a video content processing method for use in a music platform, the method including:
receiving a target video sent by a video platform, wherein the target video is a video uploaded to the video platform by a first terminal, and the video platform is used for indicating a video application to play picture information and audio information of the target video when playing the target video;
receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting to start playing of a target audio;
and sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for indicating an audio application to play picture information of the target video when playing the target audio.
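The three steps above can be sketched as a pair of handlers on the music-platform side. This is only an illustrative sketch: the function names, storage, and message shape are assumptions not found in the patent, and the policy for choosing which video's picture information to return is deliberately trivial.

```python
# In-memory store of target videos forwarded by the video platform
# (a real platform would use a database and a matching service).
received_videos: dict = {}

def on_target_video(video_id, picture_info):
    """Step 1: keep the target video sent by the video platform."""
    received_videos[video_id] = picture_info

def on_audio_playing_request(audio_id):
    """Steps 2 and 3: answer an audio playing request with a picture
    playing instruction naming the video whose picture information the
    audio application should show while the target audio plays."""
    video_id = next(iter(received_videos))  # trivial matching policy
    return {"instruction": "play_picture", "audio": audio_id, "video": video_id}
```

For example, after `on_target_video("v1", "frames...")`, a call to `on_audio_playing_request("a1")` returns an instruction pairing audio `"a1"` with video `"v1"`.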
Optionally, the sending a picture playing instruction to the second terminal according to the target video includes:
obtaining a plurality of candidate videos matched with the target audio, wherein the candidate videos comprise the target video;
and sending the picture playing instruction to the second terminal, wherein the picture playing instruction is further used for instructing the audio application to play picture information corresponding to the candidate videos when playing the target audio.
Optionally, the obtaining a plurality of candidate videos matching the target audio includes:
acquiring an audio tag of the target audio, wherein the audio tag is used for indicating the type of the target audio;
acquiring a target video classification group corresponding to the audio label according to a first corresponding relation, wherein the first corresponding relation comprises the corresponding relation between the audio label and the video classification group;
obtaining the plurality of candidate videos from the target video classification group.
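The three-step lookup above (audio tag, then the first correspondence, then the classification group) can be sketched with plain dictionaries. All identifiers and the example data here are hypothetical; a real platform would back these mappings with a database.

```python
audio_tags = {"song_42": "ballad"}                      # audio id -> audio tag
tag_to_group = {"ballad": "slow_scenery"}               # first correspondence
video_groups = {
    "slow_scenery": ["video_a", "video_b", "video_c"],  # classification groups
}

def candidate_videos(audio_id: str) -> list[str]:
    """Return the candidate videos matching the target audio."""
    tag = audio_tags[audio_id]        # 1. acquire the audio tag
    group = tag_to_group[tag]         # 2. tag -> target video classification group
    return list(video_groups[group])  # 3. candidates come from that group
```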
Optionally, when a sum of first playing durations of the multiple candidate videos is less than or equal to a playing duration of the target audio, the sending the picture playing instruction to the second terminal includes:
and sending a first picture playing instruction to the second terminal, wherein the first picture playing instruction is used for indicating that picture information corresponding to the candidate videos is played in sequence in the process of playing the target audio.
Optionally, when the sum of the first playing durations of the multiple candidate videos is greater than the playing duration of the target audio, the sending the picture playing instruction to the second terminal includes:
clipping the multiple candidate videos so that the sum of their playing durations is reduced from the first sum to a second sum, and sending a second picture playing instruction to the second terminal, wherein the second picture playing instruction is used for instructing the terminal to sequentially play, in the process of playing the target audio, the picture information corresponding to each of the clipped candidate videos, and the second sum is less than or equal to the playing duration of the target audio;
or,
and sending a third picture playing instruction to the second terminal, wherein the third picture playing instruction is used for instructing picture information corresponding to the candidate videos to be played in sequence in the process of playing the target audio, and stopping playing the picture information when the target audio is played.
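The duration handling in the two optional steps above can be sketched as one function covering the three cases: play the candidates as-is when they fit, clip them down to fit (proportional scaling is just one possible clipping rule, chosen here for illustration), or play in sequence and truncate when the audio ends. The names and the scaling rule are assumptions.

```python
def plan_playback(clip_lengths, audio_length, strategy="clip"):
    """Decide how the candidate videos' picture information fills the
    target audio's playing duration (lengths in seconds)."""
    total = sum(clip_lengths)
    if total <= audio_length:
        # First picture playing instruction: play all candidates in sequence.
        return list(clip_lengths)
    if strategy == "clip":
        # Second instruction: clip so the sum of durations fits the audio.
        scale = audio_length / total
        return [round(c * scale, 2) for c in clip_lengths]
    # Third instruction: play in sequence, stop when the audio finishes.
    planned, remaining = [], audio_length
    for c in clip_lengths:
        planned.append(min(c, remaining))
        remaining -= planned[-1]
        if remaining <= 0:
            break
    return planned
```

For example, `plan_playback([30, 30], 40, "truncate")` yields `[30, 10]`: the second candidate's picture information is cut off when the 40-second audio ends.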
Optionally, after receiving the target video sent by the video platform, the method further includes:
determining a video type identifier of the target video according to the video content of the target video; or, acquiring a video type identifier of the target video from a video uploading request of the first terminal forwarded by the video platform;
acquiring a target video classification group corresponding to the video type identifier according to a second corresponding relation, wherein the second corresponding relation comprises the corresponding relation between the video type identifier and the video classification group;
storing the target video in the target video classification group.
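The storage step above (video type identifier, second correspondence, classification group) might look like the following sketch; the type identifiers and group names are invented for illustration, and a real platform would persist the groups in a database.

```python
from collections import defaultdict

TYPE_TO_GROUP = {"dance": "dance_group", "scenery": "scenery_group"}  # second correspondence

video_groups: dict = defaultdict(list)  # classification group -> stored videos

def store_video(video_id: str, type_id: str) -> str:
    """Place an incoming target video into its classification group."""
    group = TYPE_TO_GROUP[type_id]       # type identifier -> target group
    video_groups[group].append(video_id)
    return group
```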
Optionally, the target audio includes a plurality of audio segments, and the obtaining a plurality of candidate videos matching the target audio includes:
acquiring first candidate videos corresponding to the plurality of audio clips from a video library according to a third corresponding relation, wherein the third corresponding relation comprises a corresponding relation between the audio clips and the first candidate videos;
the sending of the picture playing instruction to the second terminal includes:
and sending a fourth picture playing instruction to the second terminal, wherein the fourth picture playing instruction is used for instructing the audio application to play picture information corresponding to the plurality of first candidate videos when playing the target audio.
Optionally, the method further includes:
receiving a picture switching request sent by a second terminal, wherein the picture switching request carries a target playing time point of the target audio when the second terminal receives a picture switching operation signal;
acquiring a second candidate video corresponding to the first target audio clip where the target playing time point is located;
and sending a picture switching instruction to the second terminal, wherein the picture switching instruction is used for instructing the audio application to start switching and playing the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip.
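The switching step above reduces to locating the audio clip that contains the target playing time point and returning a second candidate video for that clip. A sketch using binary search over clip start times follows; the clip boundaries and candidate lists are hypothetical stand-ins for the third correspondence.

```python
import bisect

# Audio clip start times in seconds: clip i covers [starts[i], starts[i+1]).
starts = [0.0, 12.5, 31.0, 47.8]
# Second candidate videos available for each clip index.
second_candidates = {0: ["v_a"], 1: ["v_b", "v_c"], 2: ["v_d"]}

def switch_picture(play_time: float) -> str:
    """Pick a second candidate video for the clip containing play_time."""
    clip = bisect.bisect_right(starts, play_time) - 1  # clip holding the time point
    return second_candidates[clip][0]                  # first alternative for that clip
```

For instance, a switch request at 15.0 s falls inside the clip starting at 12.5 s, so the instruction names that clip's alternative video.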
Optionally, when a third playing duration of the first candidate video is longer than the playing duration of the corresponding audio clip, the method further includes:
clipping the first candidate video from the third playing duration to a fourth playing duration, wherein the fourth playing duration is less than or equal to the playing duration of the corresponding audio clip.
Optionally, before the obtaining of the plurality of candidate videos matching the target audio, the method further includes:
and carrying out fragment processing on the target audio according to the beat characteristics of the target audio to obtain the plurality of audio fragments.
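One simple reading of "fragmenting the target audio according to its beat characteristics" is to cut the audio at every Nth detected beat. The sketch below assumes the beat times are already known (e.g., from an onset detector); the grouping size of eight beats per fragment is an arbitrary choice for illustration.

```python
def fragment_by_beats(beat_times, audio_length, beats_per_segment=8):
    """Split [0, audio_length] into fragments whose boundaries fall on
    every Nth beat; returns a list of (start, end) pairs in seconds."""
    cuts = beat_times[beats_per_segment::beats_per_segment]
    edges = [0.0] + list(cuts) + [audio_length]
    return list(zip(edges[:-1], edges[1:]))
```

With a beat every 0.5 s over a 10-second audio and eight beats per fragment, this yields fragments (0–4 s), (4–8 s), and a shorter final fragment (8–10 s).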
According to another aspect of the embodiments of the present application, there is provided a video content processing method for use in a music platform, the method including:
receiving a target video uploaded by a first terminal;
sending the target video to a video platform, wherein the video platform is used for indicating a video application to play picture information and audio information of the target video when playing the target video;
receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting to start playing of a target audio;
and sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for indicating an audio application to play picture information of the target video when playing the target audio.
According to another aspect of the embodiments of the present application, there is provided a video content processing apparatus for use in a music platform, the apparatus including:
the first receiving module is used for receiving a target video sent by a video platform, wherein the target video is a video uploaded to the video platform by a first terminal, and the video platform is used for indicating a video application to play picture information and audio information of the target video when the target video is played;
the second receiving module is used for receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting to start playing of a target audio;
and the playing instruction sending module is used for sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for indicating an audio application to play the picture information of the target video when playing the target audio.
Optionally, the playing instruction sending module further includes an obtaining unit and a sending unit.
The acquiring unit is used for acquiring a plurality of candidate videos matched with the target audio, wherein the candidate videos comprise the target video;
the sending unit is configured to send the picture playing instruction to the second terminal, where the picture playing instruction is further configured to instruct the audio application to play picture information corresponding to each of the multiple candidate videos when playing the target audio.
Optionally, the obtaining unit is further configured to obtain an audio tag of the target audio, where the audio tag is used to indicate a type of the target audio; acquiring a target video classification group corresponding to the audio label according to a first corresponding relation, wherein the first corresponding relation comprises the corresponding relation between the audio label and the video classification group; obtaining the plurality of candidate videos from the target video classification group.
Optionally, when a sum of the first playing durations of the multiple candidate videos is less than or equal to the playing duration of the target audio, the sending unit is further configured to send a first picture playing instruction to the second terminal, where the first picture playing instruction is used for instructing the terminal to sequentially play the picture information corresponding to each of the multiple candidate videos in the process of playing the target audio.
Optionally, when the sum of the first playing durations of the multiple candidate videos is greater than the playing duration of the target audio, the sending unit is further configured to clip the multiple candidate videos so that the sum of their playing durations is reduced from the first sum to a second sum, and send a second picture playing instruction to the second terminal, where the second picture playing instruction is used for instructing the terminal to sequentially play, in the process of playing the target audio, the picture information corresponding to each of the clipped candidate videos, and the second sum is less than or equal to the playing duration of the target audio;
or,
and sending a third picture playing instruction to the second terminal, wherein the third picture playing instruction is used for instructing picture information corresponding to the candidate videos to be played in sequence in the process of playing the target audio, and stopping playing the picture information when the target audio is played.
Optionally, the apparatus further comprises: the storage module is used for determining the video type identifier of the target video according to the video content of the target video; or, acquiring a video type identifier of the target video from a video uploading request of the first terminal forwarded by the video platform;
acquiring a target video classification group corresponding to the video type identifier according to a second corresponding relation, wherein the second corresponding relation comprises the corresponding relation between the video type identifier and the video classification group;
storing the target video in the target video classification group.
Optionally, the target audio includes a plurality of audio segments, and the sending unit is further configured to obtain, from a video library, first candidate videos corresponding to the plurality of audio segments according to a third correspondence, where the third correspondence includes a correspondence between the audio segment and the first candidate video;
and sending a fourth picture playing instruction to the second terminal, wherein the fourth picture playing instruction is used for instructing the audio application to play picture information corresponding to the plurality of first candidate videos when playing the target audio.
Optionally, the apparatus further includes: the switching instruction sending module is used for receiving a picture switching request sent by a second terminal, wherein the picture switching request carries a target playing time point of the target audio when the second terminal receives a picture switching operation signal;
acquiring a second candidate video corresponding to the first target audio clip where the target playing time point is located;
and sending a picture switching instruction to the second terminal, wherein the picture switching instruction is used for instructing the audio application to start switching and playing the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip.
Optionally, when the third playing duration of the first candidate video is longer than the playing duration of the corresponding audio clip, the apparatus further includes: a clipping module, configured to clip the first candidate video from the third playing duration to a fourth playing duration, where the fourth playing duration is less than or equal to the playing duration of the corresponding audio clip.
Optionally, the device further includes a fragment processing module, where the fragment processing module is configured to perform fragment processing on the target audio according to the beat feature of the target audio to obtain the multiple audio fragments.
According to another aspect of the embodiments of the present application, there is provided a video content processing apparatus for use in a music platform, the apparatus including:
the first receiving module is used for receiving a target video uploaded by a first terminal;
the first sending module is used for sending the target video to a video platform, and the video platform is used for indicating a video application to play picture information and audio information of the target video when playing the target video;
the second receiving module is used for receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting to start playing of a target audio;
and the second sending module is used for sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for indicating an audio application to play the picture information of the target video when playing the target audio.
According to another aspect of embodiments of the present application, there is provided a server comprising a processor and a memory; the memory stores at least one instruction, and the instruction is executed by the processor to implement the video content processing method according to the first aspect, any possible implementation of the first aspect, or the second aspect.
According to another aspect of embodiments of the present application, there is provided a computer-readable storage medium storing at least one instruction, the instruction being executed by a processor to implement the video content processing method according to the first aspect, any possible implementation of the first aspect, or the second aspect.
In the video content processing method provided by this embodiment, the music platform receives a target video sent by the video platform, receives an audio playing request for a target audio sent by a second terminal, and sends a picture playing instruction to the second terminal according to the target video, where the picture playing instruction instructs an audio application to play the picture information of the target video when playing the target audio. This solves the problem in the related art that an uploaded video, being used only for video playing, has a low utilization rate: the target video uploaded by the first terminal can be published on both the video platform and the music platform, so the target video can be played through the video application, and its picture information can also be played through the audio application while the target audio plays, improving the utilization rate of the target video.
Drawings
FIG. 1 is a schematic block diagram of a video processing system provided in an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method for processing video content provided by an exemplary embodiment of the present application;
fig. 3 is a flowchart of a video content processing method provided by another exemplary embodiment of the present application;
fig. 4 is a flowchart of a video content processing method provided by another exemplary embodiment of the present application;
fig. 5 is a schematic structural diagram of a video content processing apparatus according to an exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of a video content processing apparatus according to an exemplary embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in detail below with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
Before the embodiments of the present application are described, their application scenario is described. As shown in FIG. 1, the video processing system includes a server 110 and a first terminal 120.
The server 110 is one server, a server cluster composed of a plurality of servers, a virtualization platform, or a cloud computing service center.
Optionally, the server 110 includes a background server that provides video processing services. Optionally, the server 110 includes a first network platform 112 and a second network platform 114, the first network platform 112 is one of a music platform and a video platform, and the second network platform 114 is the other of the music platform and the video platform. The music platform is a background server corresponding to the audio application. The video platform is a background server corresponding to the video application. Illustratively, the video platform is a short video platform, the video application is a short video application, and the audio application is a music application.
Optionally, the first network platform 112 is configured to receive the target video sent by the first terminal 120, and send the target video to the second network platform 114.
The server 110 and the first terminal 120 are connected through a communication network. Optionally, the communication network is a wired network or a wireless network.
The first terminal 120 is an electronic device having a video upload function. Optionally, the first terminal 120 is configured to upload the target video to the server 110. For example, the first terminal 120 may be a mobile phone, a tablet computer, a laptop computer, a desktop computer, or the like.
Alternatively, the first terminal 120 is an electronic device having a video recording function. The first terminal 120 is configured to record a target video through the image capturing component and the audio capturing component, and upload the recorded target video to the server 110.
Illustratively, a preset application runs in the first terminal 120, and the first terminal 120 is configured to upload the target video to the server 110 through the preset application. The preset application includes a video application and/or an audio application.
Optionally, the video processing system further comprises a second terminal 130. The second terminal 130 is an electronic device having a video playing function. Optionally, after the first network platform 112 sends the target video to the second network platform 114, the second terminal 130 is configured to play the picture information and the audio information of the target video when playing the target video through the video application, and/or play the picture information of the target video when playing the target audio through the audio application.
Optionally, a video application and/or an audio application runs in the second terminal 130. For example, the second terminal 130 may be a mobile phone, a tablet computer, a laptop computer, a desktop computer, or the like.
It should be noted that the first terminal 120 and the second terminal 130 may be the same terminal or two different terminals, which is not limited in this embodiment. The first terminal 120 may include one or more terminals, and the second terminal 130 may include one or more terminals; only one of each is shown in fig. 1, and the number of first terminals 120 and second terminals 130 is not limited in this embodiment.
Referring to fig. 2, a flow chart of a video content processing method according to an exemplary embodiment of the present application is shown. The present embodiment is exemplified by applying the video content processing method to the video processing system shown in fig. 1. The video content processing method comprises the following steps:
step 201, receiving a target video sent by a video platform, where the target video is a video uploaded to the video platform by a first terminal.
The video platform is used for indicating the video application to play the picture information and the audio information of the target video when playing the target video.
Optionally, the first terminal sends the target video to the video platform through the video application, and the video platform receives the target video and sends the target video to the music platform. Correspondingly, the music platform receives the target video.
Optionally, the target video is a video segment whose video duration is less than a duration threshold. For example, the duration threshold is 15 seconds, 20 seconds, or 1 minute. Optionally, the target video is an original video recorded or uploaded by the first terminal.
Optionally, the target video is also referred to as a short video, which is a way to spread internet content. The target video includes picture information and audio information. The target video is the video with the original picture information and/or audio information and the video duration being less than the duration threshold. Illustratively, the target video is a video in which picture information is recorded by a user and a duration of the audio information is less than a duration threshold. Illustratively, the target video is a video in which both picture information and audio information are recorded by a user and the video duration is less than a duration threshold.
Optionally, after receiving the target video sent by the video platform, the music platform stores the target video. Wherein the target video comprises the picture information and the audio information of the target video.
Step 202, receiving an audio playing request sent by the second terminal, where the audio playing request is used to request to start playing the target audio.
And the second terminal sends an audio playing request to the music platform through the audio application, wherein the audio playing request is used for requesting to start playing the target audio.
Optionally, the audio playing request carries an audio identifier of the target audio. The audio identification of the target audio is used to uniquely indicate the target audio.
Step 203, sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for instructing the audio application to play the picture information of the target video when playing the target audio.
Correspondingly, after receiving the audio playing request sent by the second terminal, the music platform sends a picture playing instruction to the second terminal. And the second terminal receives the picture playing instruction, plays the picture information of the target video when playing the target audio through the audio application according to the picture playing instruction, and does not play the audio information of the target video.
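The exchange in steps 202-203 can be sketched as follows. All names here (`AudioPlayRequest`, `PicturePlayInstruction`, `video_store`, `handle_audio_play_request`) are illustrative assumptions for this sketch, not the patent's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AudioPlayRequest:
    audio_id: str              # uniquely indicates the target audio (step 202)

@dataclass
class PicturePlayInstruction:
    audio_id: str
    video_ids: list = field(default_factory=list)  # play only picture information of these

# Target videos previously received from the video platform (step 201),
# keyed by the audio they match -- hypothetical data.
video_store = {"audio_AA": ["video_S001", "video_S025"]}

def handle_audio_play_request(req: AudioPlayRequest) -> PicturePlayInstruction:
    """Music platform side: map the requested target audio to stored target
    videos and instruct the audio application to play their picture
    information (but not their audio information) alongside the audio."""
    matched = video_store.get(req.audio_id, [])
    return PicturePlayInstruction(audio_id=req.audio_id, video_ids=matched)

instr = handle_audio_play_request(AudioPlayRequest(audio_id="audio_AA"))
```

The key point of the design is that the instruction carries only references to picture information; the audio information of the target video is never scheduled for playback on the music platform side.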
It should be noted that the above step 201 may be alternatively implemented as the following steps, as shown in fig. 3:
step 301, receiving a target video uploaded by a first terminal.
Optionally, the first terminal sends the target video to the music platform through the audio application, and correspondingly, the music platform receives the target video.
Step 302, sending the target video to a video platform, where the video platform is used to instruct a video application to play the picture information and the audio information of the target video when playing the target video.
Optionally, after receiving the target video, the music platform sends the target video to the video platform. Correspondingly, the video platform receives the target video sent by the music platform and stores the target video.
It should be noted that step 302 may be executed before step 202, may be executed after step 202, and may also be executed in parallel with step 202, which is not limited in this embodiment.
Optionally, when the video platform receives a video playing request sent by the second terminal, where the video playing request is used to request to start playing the target video, the video platform sends a video playing instruction to the second terminal, and the video playing instruction is used to instruct the video application to play the picture information and the audio information of the target video when playing the target video.
Correspondingly, the second terminal receives a video playing instruction sent by the video platform, and plays the picture information and the audio information of the target video when playing the target video through the video application.
It should be noted that the music platform acquires a target video to be processed, where the target video is either a video uploaded to the music platform by the first terminal, or a video sent to the music platform by the first terminal through the video platform; this embodiment is not limited thereto. In the following embodiments, description is given only by taking the case where the target video is sent by the first terminal to the music platform through the video platform as an example.
In summary, in this embodiment, the music platform receives a target video sent by the video platform, receives an audio playing request for a target audio sent by the second terminal, and sends a picture playing instruction to the second terminal according to the target video, where the picture playing instruction is used to instruct the audio application to play the picture information of the target video when playing the target audio. This solves the problem in the related art that an uploaded video is used only for video playing, resulting in a low utilization rate: the target video uploaded by the first terminal can be published on both the video platform and the music platform, so that the target video can be played through the video application, and its picture information can be played through the audio application when the target audio is played, thereby improving the utilization rate of the target video.
Referring to fig. 4, a flowchart of a video content processing method according to an exemplary embodiment of the present application is shown. The present embodiment is exemplified by applying the video content processing method to the video processing system shown in fig. 1. The video content processing method comprises the following steps:
step 401, the first terminal sends a target video to a video platform.
Optionally, the first terminal sends the target video to the video platform through the video application.
Optionally, the first terminal displays a video creation interface of the video application, where the video creation interface includes a video creation entry, and when the first terminal receives a first trigger signal corresponding to the video creation entry, the first terminal acquires an uploaded target video.
The video creation portal is a portal provided in a video application. The video creation portal is an operable control for triggering acquisition of the uploaded target video. Illustratively, the type of video creation entry includes at least one of a button, a manipulable entry, and a slider. The present embodiment does not limit the location and type of the video creation entry.
The first trigger signal is a user operation signal used to trigger acquisition of the uploaded target video. Illustratively, the first trigger signal includes any one or a combination of a click operation signal, a slide operation signal, a press operation signal and a long-press operation signal.
The target video is a video which is recorded and uploaded after the first terminal receives the first trigger signal corresponding to the video creation entrance, or a finished video which is uploaded when the first terminal receives the first trigger signal corresponding to the video creation entrance. The recording timing of the target video is not limited in this embodiment.
Optionally, the first terminal sends a video upload request to the video platform, where the video upload request carries the target video.
Optionally, the video upload request further carries a video type identifier of the target video. The video type identifier is used to uniquely indicate the video type of the video.
In one possible implementation, the videos are classified according to different video materials, for example, the video types include at least one of a landscape type, a dance type and a singing type.
In another possible implementation, the videos are classified according to different video tempos, for example, the video types include a fast tempo type and a slow tempo type. The present embodiment does not limit the classification manner of the video.
Step 402, the video platform receives the target video and sends the target video to the music platform.
Optionally, the video platform receives a video uploading request sent by the first terminal, and sends the video uploading request to the music platform.
Optionally, the video platform receives the target video and stores the target video. When the video platform receives a video playing request sent by a second terminal, wherein the video playing request is used for requesting to start playing of a target video, the video platform sends a video playing instruction to the second terminal, and the video playing instruction is used for indicating a video application to play picture information and audio information of the target video when playing the target video.
Correspondingly, the second terminal receives a video playing instruction sent by the video platform, and plays the picture information and the audio information of the target video when playing the target video through the video application.
In step 403, the music platform receives the target video sent by the video platform.
Optionally, the music platform establishes a video library, reviews the target video, and stores the target video in the video library after the review is passed.
Optionally, the music platform stores the target video in a video library, including: the music platform determines a video type identifier of a target video, and acquires a target video classification group corresponding to the video type identifier according to a second corresponding relation, wherein the second corresponding relation comprises a corresponding relation between the video type identifier and the video classification group; the target video is stored in the target video classification group.
Optionally, a video library is established in the music platform, and the plurality of video classification groups in the video library include video classification groups corresponding to the plurality of video type identifiers. That is, the video library includes a plurality of video classification groups, and each video classification group includes videos having the same video type identifier.
Optionally, there is no intersection between any two video classification groups in the plurality of video classification groups.
Illustratively, the correspondence between the video type identifier and the video classification group is shown in table one. The video type identification comprises three types of landscape identification, dance identification and singing identification, a video library of the music platform comprises three video classification groups, namely a first video classification group corresponding to the landscape identification, and the first video classification group comprises a plurality of videos with the landscape identification; a second video classification group corresponding to the dance identifier, wherein the second video classification group comprises a plurality of videos with dance identifiers; and the third video classification group corresponds to the singing identification, and the third video classification group comprises a plurality of videos with the singing identification.
Table 1

Video type identifier | Video classification group |
Landscape identifier | First video classification group |
Dance identifier | Second video classification group |
Singing identifier | Third video classification group |
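The second correspondence can be sketched as a simple lookup that routes each target video into its classification group. The identifier strings and group names below are illustrative assumptions, not values from the patent:

```python
# Second correspondence (cf. Table 1): video type identifier -> classification group.
SECOND_CORRESPONDENCE = {
    "landscape": "first_group",
    "dance": "second_group",
    "singing": "third_group",
}

# Video library: one group per type identifier; groups never intersect.
video_library = {"first_group": [], "second_group": [], "third_group": []}

def store_target_video(video_id: str, type_identifier: str) -> str:
    """Store a target video into the classification group matching its
    video type identifier, and return the group it was stored in."""
    group = SECOND_CORRESPONDENCE[type_identifier]
    video_library[group].append(video_id)
    return group

group = store_target_video("video_S001", "dance")
```

Because each video carries exactly one type identifier, no video can land in two groups, which matches the stated property that any two classification groups have no intersection.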
The music platform determines the video type identification of the target video, including but not limited to the following two possible implementations:
in one possible implementation manner, the music platform determines the video type identifier of the target video according to the video content of the target video.
And the music platform analyzes the video content of the target video to obtain the video type identification of the target video.
Optionally, the music platform acquires a video recognition model, inputs the target video into the video recognition model, and outputs the video type identifier of the target video. The video identification model is used for representing a video classification rule obtained by training based on a sample video.
Optionally, the video recognition model is obtained by training according to at least one group of sample data sets, where each group of sample data sets includes: sample video and correct video type identification marked in advance.
Optionally, the music platform acquires a video recognition model, including: acquiring a training sample set, wherein the training sample set comprises the at least one group of sample data groups; and training the original parameter model by adopting an error back propagation algorithm according to the at least one group of sample data set to obtain a video identification model.
Illustratively, video recognition models include, but are not limited to: at least one of a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding (embedding) model, a Gradient Boosting Decision Tree (GBDT) model, and a Logistic Regression (LR) model.
In another possible implementation manner, the music platform obtains the video type identifier of the target video from the video upload request of the first terminal forwarded by the video platform.
Optionally, the first terminal determines a video type identifier of the target video according to the video content of the target video, and sends a video upload request to the video platform, where the video upload request carries the target video and the video type identifier of the target video. The video platform receives a video uploading request sent by the first terminal, sends the video uploading request to the music platform, and correspondingly, the music platform obtains the video type identification of the target video from the received video uploading request.
It should be noted that the process in which the first terminal determines the video type identifier of the target video according to the video content of the target video is similar to the process in which the music platform determines the video type identifier; reference may be made to the relevant details above, which are not repeated here.
Step 404, the second terminal sends an audio playing request to the music platform, where the audio playing request is used to request to start playing the target audio.
And the second terminal sends an audio playing request to the music platform through the audio application, wherein the audio playing request is used for requesting to start playing the target audio. Optionally, the audio playing request carries an audio identifier of the target audio.
Step 405, the music platform receives an audio playing request sent by the second terminal.
Correspondingly, the music platform receives an audio playing request sent by the second terminal, and acquires the audio identifier of the target audio from the audio playing request.
In step 406, the music platform obtains a plurality of candidate videos matched with the target audio, where the plurality of candidate videos includes the target video.
It should be noted that the videos (including the target video and the candidate video) referred to in the embodiments of the present application are all short videos, and the short videos are video segments with video duration less than a duration threshold.
The music platform obtains a plurality of candidate videos matching the target audio, including but not limited to the following possible implementations:
in one possible implementation manner, the music platform acquires an audio tag of the target audio, wherein the audio tag is used for indicating the type of the target audio; acquiring a target video classification group corresponding to the audio label according to a first corresponding relation, wherein the first corresponding relation comprises the corresponding relation between the audio label and the video classification group; a plurality of candidate videos are obtained from the target video classification group.
The audio tag of the target audio may be determined by the music platform according to the audio content of the target audio, or may be carried in an audio playing request sent by the second terminal and received by the music platform. This embodiment is not limited thereto.
The audio tag is used to indicate the audio type of the audio. In one possible implementation, the audio is classified according to different audio materials; for example, the audio type includes at least one of a landscape type, a dance type and a singing type. In another possible implementation, the audio is classified according to different audio tempos; for example, the audio types include a fast tempo type and a slow tempo type. The present embodiment does not limit the audio classification manner.
It should be noted that, in the embodiment of the present application, the audio classification manner may be the same as or different from the video classification manner. For convenience of description, the following description will be given only by taking the case where the audio type and the video type are the same, that is, the audio type and the video type include three types, namely, a landscape type, a dance type, and a singing type.
Illustratively, the correspondence between the audio tags and the video classification groups is shown in table two. The audio tags comprise three types of landscape tags, dance tags and singing tags, a target video classification group matched with the landscape tags is a first video classification group, and the first video classification group comprises a plurality of videos with landscape identifications; the target video classification group matched with the dance label is a second video classification group, and the second video classification group comprises a plurality of videos with dance marks; and the target video classification group matched with the singing label is a third video classification group, and the third video classification group comprises a plurality of videos with singing identifications.
Table 2

Audio tag | Video classification group |
Landscape tag | First video classification group |
Dance tag | Second video classification group |
Singing tag | Third video classification group |
Optionally, the music platform randomly obtains a plurality of candidate videos from the target video classification group, and a sum of first playing durations of the candidate videos is less than or equal to a playing duration of the target audio.
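The random selection under a duration budget can be sketched as follows. The group contents and durations are illustrative assumptions; the seeded `Random` instance just makes the sketch reproducible:

```python
import random

def pick_candidates(group, audio_duration, rng=random.Random(0)):
    """group: list of (video_id, first_playing_duration_seconds).
    Randomly draw candidate videos from the target classification group while
    keeping the sum of their playing durations <= the audio's playing duration."""
    pool = list(group)
    rng.shuffle(pool)                    # random order of consideration
    picked, total = [], 0
    for video_id, duration in pool:
        if total + duration <= audio_duration:
            picked.append(video_id)
            total += duration
    return picked, total

# Hypothetical target video classification group.
group = [("S001", 40), ("S025", 20), ("S067", 35), ("S091", 15)]
picked, total = pick_candidates(group, audio_duration=60)
```

Any selection produced this way satisfies the constraint stated above: the sum of the first playing durations never exceeds the playing duration of the target audio.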
In another possible implementation manner, the target audio includes a plurality of audio pieces, and the music platform acquires a plurality of candidate videos matching the target audio, including: and acquiring first candidate videos corresponding to the plurality of audio clips from the video library according to a third corresponding relation, wherein the third corresponding relation comprises the corresponding relation between the audio clips and the first candidate videos.
Optionally, before the music platform receives the audio playing request sent by the second terminal, the music platform performs fragmentation processing on the target audio in advance. In order to ensure the integrity of lyrics in a plurality of audio frequency fragments obtained after fragmentation processing, the music platform performs fragmentation processing on the target audio frequency according to the beat characteristics of the target audio frequency to obtain a plurality of audio frequency fragments.
Wherein the beat feature is used to indicate the Beats Per Minute (BPM) of the target audio.
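One way to realize beat-aligned fragmentation is to snap each segment boundary to the last whole beat before a nominal segment length, so that no lyric line is cut mid-beat. The 30-second nominal segment and the BPM value below are illustrative assumptions:

```python
def beat_aligned_segments(audio_duration, bpm, target_segment=30.0):
    """Slice the target audio into (start, end) segments whose boundaries
    fall on beat positions derived from the BPM beat feature."""
    beat = 60.0 / bpm                    # seconds per beat
    segments, start = [], 0.0
    while start < audio_duration:
        raw_end = min(start + target_segment, audio_duration)
        # snap the boundary down to the last whole beat at or before raw_end
        end = min(audio_duration, int(raw_end / beat) * beat)
        if end <= start:                 # guard against zero-length segments
            end = raw_end
        segments.append((start, end))
        start = end
    return segments

# A 150-second target audio at 120 BPM splits into five 30-second clips,
# matching the song "AA" example below.
segs = beat_aligned_segments(audio_duration=150.0, bpm=120)
```

This is only one plausible snapping rule; the patent does not specify the exact fragmentation algorithm, only that it uses the beat feature to preserve lyric integrity.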
Optionally, a candidate video set corresponding to each of a plurality of audio segments of the target audio is stored in the music platform, and the candidate video set corresponding to each audio segment includes at least one candidate video. The first candidate video corresponding to the audio clip is any one of the candidate video sets corresponding to the audio clip.
Illustratively, the target audio is a song "AA", the song "AA" includes 5 audio clips, the playing time of each audio clip is 30 seconds, and the correspondence between the 5 audio clips and the plurality of candidate video sets is shown in table three. A first candidate video set corresponding to segment 1, the first candidate video set including "video S001, video S025, video S067, video S091, video S101, video S134, video S175"; a second candidate video set corresponding to the segment 2, where the second candidate video set includes "video S010, video S106"; a third candidate video set corresponding to the segment 3, where the third candidate video set includes "video S003, video S012, video S050, video S079, and video S111"; a fourth candidate video set corresponding to the segment 4, where the fourth candidate video set includes "video S007, video S088"; and a fifth candidate video set corresponding to the segment 5, wherein the fifth candidate video set comprises video S008, video S053, video S099, video S190 and video S351.
Table 3

Audio clip | Candidate video set |
Segment 1 | video S001, video S025, video S067, video S091, video S101, video S134, video S175 |
Segment 2 | video S010, video S106 |
Segment 3 | video S003, video S012, video S050, video S079, video S111 |
Segment 4 | video S007, video S088 |
Segment 5 | video S008, video S053, video S099, video S190, video S351 |
Optionally, when the third playing time length of the first candidate video is longer than the playing time length of the corresponding audio clip, the music platform clips the first candidate video with the third playing time length to the first candidate video with a fourth playing time length, where the fourth playing time length is less than or equal to the playing time length of the corresponding audio clip.
Optionally, the music platform determines the absolute value of the difference between the third playing duration of the first candidate video and the playing duration of the corresponding audio clip, and clips a segment whose playing duration equals that absolute value from the head or the tail of the first candidate video, to obtain the first candidate video with the fourth playing duration.
Illustratively, the third playing duration of the first candidate video is 40 seconds and the playing duration of the audio clip is 30 seconds; the music platform determines that the absolute value of the difference is 10 seconds, clips 10 seconds from the first candidate video to obtain a first candidate video with a fourth playing duration of 30 seconds, and executes the subsequent step 407.
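The trimming step can be sketched as computing the retained interval of the candidate video; the head/tail choice and function name are illustrative assumptions:

```python
def clip_candidate(video_duration, clip_duration, from_tail=True):
    """When the first candidate video is longer than its audio clip, trim the
    excess (|video_duration - clip_duration|) from the head or the tail.
    Returns the (start, end) interval of the retained portion."""
    if video_duration <= clip_duration:
        return (0.0, video_duration)            # already fits; keep whole video
    excess = abs(video_duration - clip_duration)
    if from_tail:
        return (0.0, video_duration - excess)   # cut excess from the tail
    return (excess, video_duration)             # cut excess from the head

# 40-second candidate trimmed against a 30-second audio clip, as in the example.
start, end = clip_candidate(video_duration=40.0, clip_duration=30.0)
```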
Step 407, the music platform sends a picture playing instruction to the second terminal.
The picture playing instruction is further used for instructing the audio application to play the picture information corresponding to each of the plurality of candidate videos when playing the target audio.
Optionally, when the sum of the first playing durations of the multiple candidate videos is less than or equal to the playing duration of the target audio, the music platform sends a first picture playing instruction to the second terminal, where the first picture playing instruction is used to instruct to sequentially play picture information corresponding to each of the multiple candidate videos in the process of playing the target audio.
Optionally, when the sum of the first playing time lengths of the multiple candidate videos is greater than the playing time length of the target audio, the music platform sends a picture playing instruction to the second terminal, which includes but is not limited to the following two possible implementation manners:
in a possible implementation manner, the music platform clips a plurality of candidate videos with the sum of the first playing time lengths to a plurality of candidate videos with the sum of the second playing time lengths, and sends a second picture playing instruction to the second terminal.
When the sum of the first playing time lengths of the candidate videos is larger than the playing time length of the target audio, the music platform firstly cuts the candidate videos, namely cuts the candidate videos with the sum of the first playing time lengths into the candidate videos with the sum of the second playing time lengths. And the sum of the second playing time lengths is less than or equal to the playing time length of the target audio.
Optionally, the music platform determines the absolute value of the difference between the sum of the first playing durations of the multiple candidate videos and the playing duration of the target audio, and clips segments whose total playing duration equals that absolute value from one or more of the candidate videos, so as to obtain multiple candidate videos whose playing durations sum to the second playing duration.
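A minimal sketch of this first implementation, assuming the excess is trimmed from the last candidate videos (the patent does not fix which videos are clipped, so this trimming order is an assumption):

```python
def fit_candidates(durations, audio_duration):
    """durations: first playing durations of the candidate videos, in order.
    Trim seconds from the tail-most videos until the total playing duration
    does not exceed the audio's playing duration; drop zero-length entries."""
    excess = sum(durations) - audio_duration
    out = list(durations)
    i = len(out) - 1
    while excess > 0 and i >= 0:
        cut = min(out[i], excess)        # cut as much as needed from video i
        out[i] -= cut
        excess -= cut
        i -= 1
    return [d for d in out if d > 0]

# Three candidates totaling 85 seconds, fitted to a 70-second target audio.
fitted = fit_candidates([30, 30, 25], audio_duration=70)
```

The result is a candidate list whose durations sum to the second playing duration, ready for the second picture playing instruction.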
Optionally, the second picture playing instruction is used to instruct that picture information corresponding to each of the plurality of candidate videos having the sum of the second playing time lengths is sequentially played in the process of playing the target audio.
In another possible implementation manner, the music platform sends a third image playing instruction to the second terminal, where the third image playing instruction is used to instruct to sequentially play image information corresponding to each of the multiple candidate videos in the process of playing the target audio, and the playing of the image information is stopped when the playing of the target audio is finished.
Since the sum of the first playing time lengths of the plurality of candidate videos is greater than the playing time length of the target audio, that is, the picture information continues to be played when the playing of the target audio is finished, the music platform may instruct to stop playing the picture information when the playing of the target audio is finished.
It should be noted that, when the manner in which the music platform acquires the multiple candidate videos matching the target audio is the second possible implementation described in step 406, the sending, by the music platform, of the picture playing instruction to the second terminal includes: sending a fourth picture playing instruction to the second terminal, where the fourth picture playing instruction is used to instruct the audio application to play picture information corresponding to each of the plurality of first candidate videos when the target audio is played.
Optionally, the music platform sends a fourth image playing instruction to the second terminal, where the fourth image playing instruction is further used to instruct to sequentially play image information corresponding to each of the plurality of first candidate videos when the target audio is played.
And step 408, the second terminal plays the picture information corresponding to each of the plurality of candidate videos in the process of playing the target audio through the audio application according to the received picture playing instruction.
Optionally, when the second terminal receives the first image playing instruction, the image information corresponding to each of the plurality of candidate videos is sequentially played in the process of playing the target audio through the audio application.
Optionally, when the second terminal receives the second image playing instruction, the image information corresponding to each of the plurality of candidate videos having the sum of the second playing time lengths is sequentially played through the audio application in the process of playing the target audio.
Optionally, when the second terminal receives the third image playing instruction, the image information corresponding to each of the plurality of candidate videos is sequentially played in the process of playing the target audio through the audio application, and the playing of the image information is stopped when the playing of the target audio is finished.
Optionally, when the second terminal receives the fourth image playing instruction, the image information corresponding to each of the plurality of first candidate videos is played through the audio application while the target audio is played.
Optionally, after the second terminal receives the fourth image playing instruction, when the second terminal receives an image switching operation signal corresponding to the first target audio clip, the current target playing time point of the target audio is obtained; and sending a picture switching request to the music platform, wherein the picture switching request carries the target playing time point. Correspondingly, the music platform receives a picture switching request sent by a second terminal, and acquires a second candidate video corresponding to a first target audio clip where a target playing time point is located; and sending a picture switching instruction to the second terminal. And the second terminal starts to switch and play the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip through the audio application according to the received picture switching instruction.
Optionally, the picture switching request carries a target playing time point of the target audio when the second terminal receives the picture switching operation signal.
Optionally, the music platform receives a picture switching request sent by the second terminal, acquires a target playing time point from the picture switching request, determines a first target audio clip in which the target playing time point is located among a plurality of audio clips of the target audio, acquires a candidate video set corresponding to the first target audio clip, and determines any one of other candidate videos except the first candidate video in the candidate video set as a second candidate video.
Optionally, the music platform sends a picture switching instruction to the second terminal, where the picture switching instruction is used to instruct the audio application to start switching to play the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip.
Optionally, the playing duration of an audio clip is the same as the playing duration of its corresponding candidate video, and the two are played synchronously; that is, there is a correspondence between the playing time points of the audio clip and the playing time points of the candidate video. Therefore, when the second terminal receives the picture switching instruction sent by the music platform, it determines the target playing time point of the second candidate video that corresponds to the target playing time point of the target audio. While continuing to play the first target audio clip through the audio application, the second terminal switches at that point to playing the picture information of the second candidate video, and the picture information it switches to starts from the target playing time point of the second candidate video.
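The picture switch described above can be sketched as follows. This is an illustrative model only: the `switch_video` function, the candidate list layout, and the round-robin choice of the next video are assumptions for demonstration, not part of the disclosed implementation.

```python
def switch_video(candidate_ids, current_index, target_time_point_s):
    """Because each candidate video plays synchronously with the audio
    clip, the audio's target playing time point maps directly to the
    same time point in the switched-to video, so the new video's
    picture information need not restart from the beginning."""
    next_index = (current_index + 1) % len(candidate_ids)
    # Resume the new video's picture information from the same offset.
    return candidate_ids[next_index], target_time_point_s

video_id, resume_at = switch_video(["first_video", "second_video"], 0, 42.5)
print(video_id, resume_at)  # second_video 42.5
```

The key point the sketch captures is that the returned time offset equals the audio's playing time point, which is what lets the terminal continue rather than restart the switched-to picture information.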
In summary, the embodiment of the present application further obtains a plurality of candidate videos matched with the target audio through the music platform, where the plurality of candidate videos include the target video; and sending a picture playing instruction to the second terminal, so that the second terminal plays the picture information corresponding to the candidate videos when playing the target audio through the audio application according to the received picture playing instruction, and the technical effect of synchronously playing the target audio and the picture information of the candidate videos in the music platform is achieved.
The music platform receives a picture switching request sent by the second terminal, where the picture switching request carries the target playing time point of the target audio at the moment the second terminal receives the picture switching operation signal; it acquires a second candidate video corresponding to the first target audio clip in which the target playing time point is located; and it sends a picture switching instruction to the second terminal, instructing the audio application to start switching to play the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip. This avoids the situation in which the second candidate video must be played from the beginning when pictures are switched: the second terminal continues playing the picture information of the switched-to second candidate video from the target playing time point, improving the playing effect of the target audio.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 5, a schematic structural diagram of a video content processing apparatus according to an exemplary embodiment of the present application is shown. The video content processing apparatus can be implemented by a dedicated hardware circuit, or a combination of hardware and software, as all or a part of the music platform in fig. 1, and includes: a first receiving module 510, a second receiving module 520 and a playing instruction transmitting module 530.
The first receiving module 510 is configured to receive a target video sent by a video platform, where the target video is a video uploaded to the video platform by a first terminal, and the video platform is configured to instruct a video application to play picture information and audio information of the target video when the target video is played;
a second receiving module 520, configured to receive an audio playing request sent by a second terminal, where the audio playing request is used to request to start playing a target audio;
the playing instruction sending module 530 is configured to send a picture playing instruction to the second terminal according to the target video, where the picture playing instruction is used to instruct the audio application to play the picture information of the target video when playing the target audio.
Optionally, the playing instruction sending module 530 further includes an obtaining unit and a sending unit.
The obtaining unit is configured to obtain a plurality of candidate videos matched with the target audio, where the plurality of candidate videos include the target video;
and the sending unit is used for sending a picture playing instruction to the second terminal, and the picture playing instruction is also used for indicating that picture information corresponding to each of the candidate videos is played when the audio application plays the target audio.
Optionally, the obtaining unit is further configured to obtain an audio tag of the target audio, where the audio tag is used to indicate a type of the target audio; acquiring a target video classification group corresponding to the audio label according to a first corresponding relation, wherein the first corresponding relation comprises the corresponding relation between the audio label and the video classification group; a plurality of candidate videos are obtained from the target video classification group.
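The "first correspondence" between audio tags and video classification groups can be sketched as a lookup table. The tag names, group names, and the `limit` parameter below are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical first correspondence between audio tags and video
# classification groups (illustrative values only).
FIRST_CORRESPONDENCE = {
    "classical": "scenery_group",
    "rock": "concert_group",
}

def acquire_candidate_videos(audio_tag, video_groups, limit=3):
    """Look up the target video classification group for the audio tag,
    then take up to `limit` candidate videos from that group."""
    group = FIRST_CORRESPONDENCE.get(audio_tag)
    return video_groups.get(group, [])[:limit] if group else []

groups = {"scenery_group": ["v1", "v2", "v3", "v4"]}
print(acquire_candidate_videos("classical", groups))  # ['v1', 'v2', 'v3']
```

In a real platform the correspondence and the video library would be backed by persistent storage; the dictionary stands in for both here.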
Optionally, the sum of the first playing durations of the multiple candidate videos is less than or equal to the playing duration of the target audio, and the sending unit is further configured to send a first picture playing instruction to the second terminal, where the first picture playing instruction is used to instruct to sequentially play picture information corresponding to each of the multiple candidate videos in the process of playing the target audio.
Optionally, the sum of the first playing time lengths of the multiple candidate videos is greater than the playing time length of the target audio, the sending unit is further configured to clip the multiple candidate videos with the sum of the first playing time lengths to the multiple candidate videos with the sum of the second playing time lengths, and send a second picture playing instruction to the second terminal, where the second picture playing instruction is used to instruct that picture information corresponding to each of the multiple candidate videos with the sum of the second playing time lengths is sequentially played in the process of playing the target audio, and the sum of the second playing time lengths is less than or equal to the playing time length of the target audio;
or,
and sending a third picture playing instruction to the second terminal, wherein the third picture playing instruction is used for indicating that picture information corresponding to the candidate videos is played in sequence in the process of playing the target audio, and stopping playing the picture information when the target audio is played.
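The duration comparison that selects among the first, second, and third picture playing instructions can be sketched as below. The string labels are illustrative stand-ins for the instruction identifiers; the disclosure leaves the choice between the clipping branch and the stop-at-audio-end branch open, so this sketch merely flags that branch.

```python
def pick_picture_instruction(video_durations_s, audio_duration_s):
    """Choose a picture playing instruction by comparing the sum of the
    candidate videos' first playing durations with the target audio's
    playing duration, as described above."""
    if sum(video_durations_s) <= audio_duration_s:
        return "first"  # all candidate pictures fit within the audio
    # Sum exceeds the audio: either clip the videos down (second
    # instruction) or stop the pictures when the audio ends (third).
    return "second_or_third"

print(pick_picture_instruction([60, 50], 180))    # first
print(pick_picture_instruction([120, 100], 180))  # second_or_third
```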
Optionally, the apparatus further comprises: the storage module is used for determining the video type identifier of the target video according to the video content of the target video; or acquiring a video type identifier of a target video from a video uploading request of a first terminal forwarded by a video platform;
acquiring a target video classification group corresponding to the video type identifier according to a second corresponding relation, wherein the second corresponding relation comprises the corresponding relation between the video type identifier and the video classification group;
the target video is stored in the target video classification group.
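The storage flow above, using the "second correspondence" between video type identifiers and video classification groups, can be sketched as follows. The identifiers, group names, and the `ungrouped` fallback are assumptions for illustration.

```python
# Hypothetical second correspondence between video type identifiers
# and video classification groups (illustrative values only).
SECOND_CORRESPONDENCE = {"dance": "dance_group", "scenery": "scenery_group"}

def store_target_video(video, video_type_id, video_library):
    """Store the target video into the classification group mapped from
    its video type identifier; fall back to a catch-all group when the
    identifier is unknown."""
    group = SECOND_CORRESPONDENCE.get(video_type_id, "ungrouped")
    video_library.setdefault(group, []).append(video)
    return group

library = {}
print(store_target_video("target.mp4", "dance", library))  # dance_group
```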
Optionally, the target audio includes a plurality of audio segments, and the sending unit is further configured to obtain, from the video library, first candidate videos corresponding to the plurality of audio segments according to a third correspondence, where the third correspondence includes a correspondence between the audio segment and the first candidate video;
and sending a fourth picture playing instruction to the second terminal, wherein the fourth picture playing instruction is used for instructing the audio application to play picture information corresponding to each of the plurality of first candidate videos when the target audio is played.
Optionally, the apparatus further includes: the switching instruction sending module is used for receiving a picture switching request sent by the second terminal, wherein the picture switching request carries a target playing time point of a target audio when the second terminal receives a picture switching operation signal;
acquiring a second candidate video corresponding to a first target audio clip where a target playing time point is located;
and sending a picture switching instruction to the second terminal, wherein the picture switching instruction is used for instructing the audio application to start switching and playing the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip.
Optionally, the third playing time length of the first candidate video is longer than the playing time length of the corresponding audio clip, and the apparatus further includes: and the clipping module is used for clipping the first candidate video with the third playing time length to the first candidate video with the fourth playing time length, and the fourth playing time length is less than or equal to the playing time length of the corresponding audio clip.
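The clipping of a first candidate video down to its corresponding audio clip can be sketched as a duration-level operation. This is a simplification assumed for illustration: a real implementation would trim frames from the media, not merely compare durations.

```python
def clip_playing_duration(third_duration_s, audio_clip_duration_s):
    """Reduce a first candidate video's third playing duration to a
    fourth playing duration no longer than the corresponding audio
    clip's playing duration."""
    return min(third_duration_s, audio_clip_duration_s)

print(clip_playing_duration(45.0, 30.0))  # 30.0
```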
Optionally, the device further includes a fragment processing module, where the fragment processing module is configured to perform fragment processing on the target audio according to the beat feature of the target audio to obtain a plurality of audio segments.
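The beat-based fragmentation can be sketched as splitting the audio timeline at beat times. The beat times are assumed to come from an external beat detector, which is outside this sketch; only the segmentation step is shown.

```python
def fragment_by_beats(beat_times_s, audio_duration_s):
    """Split the target audio into contiguous segments whose boundaries
    are the (assumed pre-detected) beat times, returning a list of
    (start, end) pairs covering the whole audio."""
    cuts = [t for t in sorted(beat_times_s) if 0.0 < t < audio_duration_s]
    bounds = [0.0] + cuts + [audio_duration_s]
    return list(zip(bounds, bounds[1:]))

print(fragment_by_beats([10.0, 20.0], 30.0))
# [(0.0, 10.0), (10.0, 20.0), (20.0, 30.0)]
```

Each resulting segment then corresponds to one first candidate video via the third correspondence described above.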
The relevant details may be combined with the method embodiments described with reference to fig. 2-4. The first receiving module 510 and the second receiving module 520 are further configured to implement any other implicit or disclosed functions related to the receiving step in the foregoing method embodiments; the play instruction sending module 530 is further configured to implement any other implicit or disclosed functions related to the sending step in the foregoing method embodiment.
Referring to fig. 6, a schematic structural diagram of a video content processing apparatus according to an exemplary embodiment of the present application is shown. The video content processing apparatus can be implemented by a dedicated hardware circuit, or a combination of hardware and software, as all or a part of the music platform in fig. 1, and includes: a first receiving module 610, a first transmitting module 620, a second receiving module 630 and a second transmitting module 640.
A first receiving module 610, configured to receive a target video uploaded by a first terminal;
the first sending module 620 is configured to send the target video to a video platform, where the video platform is configured to instruct a video application to play picture information and audio information of the target video when playing the target video;
a second receiving module 630, configured to receive an audio playing request sent by a second terminal, where the audio playing request is used to request to start playing a target audio;
the second sending module 640 is configured to send a picture playing instruction to the second terminal according to the target video, where the picture playing instruction is used to instruct the audio application to play the picture information of the target video when playing the target audio.
The relevant details may be combined with the method embodiments described with reference to fig. 2-4. The first receiving module 610 and the second receiving module 630 are further configured to implement any other implicit or disclosed functions related to the receiving step in the foregoing method embodiments; the first sending module 620 and the second sending module 640 are further configured to implement any other implicit or disclosed functionality related to the sending step in the above method embodiments.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 7 is a block diagram illustrating a terminal 700 according to an exemplary embodiment of the present application. The terminal 700 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the video content processing method provided by the method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, provided on the front panel of the terminal 700; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the terminal 700. The display 705 may even be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic position of the terminal 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used to collect motion data for games or of the user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of terminal 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the bright screen state to the dark screen state; when the proximity sensor 716 detects that the distance gradually increases, the processor 701 controls the touch display 705 to switch from the dark screen state back to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Referring to fig. 8, a schematic structural diagram of a server 800 according to an exemplary embodiment of the present application is shown. Specifically, the server 800 includes a central processing unit (CPU) 801, a system memory 804 including a random access memory (RAM) 802 and a read-only memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 further includes a basic input/output system (I/O system) 806, which facilitates the transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, for the user to input information. The display 808 and the input device 809 are both connected to the central processing unit 801 through an input/output controller 810 connected to the system bus 805. The basic input/output system 806 may also include the input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 810 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 804 and mass storage 807 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 800 may also operate through a remote computer connected over a network, such as the Internet. That is, the server 800 may be connected to the network 812 through a network interface unit 811 coupled to the system bus 805, or the network interface unit 811 may be used to connect to other types of networks or remote computer systems (not shown).
Optionally, the memory stores at least one instruction, at least one program, code set, or instruction set, and the at least one instruction, the at least one program, code set, or instruction set is loaded and executed by the processor to implement the video content processing method provided by the above-mentioned method embodiments.
The present application further provides a computer-readable storage medium, which stores at least one instruction for execution by a processor to implement the video content processing method provided by the above-mentioned method embodiments.
The present application also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the video content processing method described in the various embodiments above.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of the video content processing method in the above embodiments may be implemented by hardware, or by a program instructing associated hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk. The above description covers only exemplary embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included in its protection scope.
Claims (15)
1. A video content processing method for use in a music platform, the method comprising:
receiving a target video sent by a video platform, wherein the target video is a video uploaded to the video platform by a first terminal, and the video platform is used for instructing a video application to play the picture information and audio information of the target video when playing the target video;
receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting playback of a target audio;
and sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for instructing an audio application to play the picture information of the target video when playing the target audio.
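The three steps of claim 1 can be sketched as a minimal server-side handler. This is an illustrative sketch only; the class, method, and field names below are hypothetical and are not part of the claimed method:

```python
class MusicPlatformServer:
    """Hypothetical sketch of the claim-1 flow: store target videos
    received from the video platform, and answer audio playing requests
    with a picture playing instruction built from a matching video."""

    def __init__(self):
        # audio_id -> metadata of the target video matched to that audio
        self.videos_by_audio = {}

    def receive_target_video(self, video):
        # Step 1: receive a target video forwarded by the video platform
        # (originally uploaded by the first terminal).
        self.videos_by_audio[video["audio_id"]] = video

    def handle_audio_playing_request(self, audio_id):
        # Steps 2-3: on an audio playing request from the second terminal,
        # return a picture playing instruction telling the audio
        # application which picture information to show during playback.
        video = self.videos_by_audio.get(audio_id)
        if video is None:
            return None
        return {"type": "picture_playing_instruction",
                "audio_id": audio_id,
                "picture_source": video["video_id"]}
```

A second terminal would then receive the returned instruction alongside the audio stream and render the video's picture information while the target audio plays.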
2. The method according to claim 1, wherein the sending a picture playing instruction to the second terminal according to the target video comprises:
obtaining a plurality of candidate videos matched with the target audio, wherein the candidate videos comprise the target video;
and sending the picture playing instruction to the second terminal, wherein the picture playing instruction is further used for instructing the audio application to play picture information corresponding to the candidate videos when playing the target audio.
3. The method of claim 2, wherein the obtaining a plurality of candidate videos matching the target audio comprises:
acquiring an audio tag of the target audio, wherein the audio tag is used for indicating the type of the target audio;
acquiring a target video classification group corresponding to the audio tag according to a first corresponding relation, wherein the first corresponding relation comprises the corresponding relation between audio tags and video classification groups;
obtaining the plurality of candidate videos from the target video classification group.
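The two-step lookup of claim 3 — audio tag to video classification group (the first corresponding relation), then group to candidate videos — amounts to a pair of mapping lookups. A minimal sketch, with all tag and group names invented for illustration:

```python
# First corresponding relation: audio tag -> video classification group.
TAG_TO_GROUP = {"rock": "energetic_clips", "ballad": "calm_clips"}

# Video library: classification group -> stored video identifiers.
VIDEO_GROUPS = {
    "energetic_clips": ["v_skate", "v_parkour", "v_dance"],
    "calm_clips": ["v_beach", "v_rain"],
}

def candidate_videos_for(audio_tag, limit=2):
    """Resolve the audio tag to its video classification group, then
    take up to `limit` candidate videos from that group."""
    group = TAG_TO_GROUP.get(audio_tag)
    if group is None:
        return []
    return VIDEO_GROUPS.get(group, [])[:limit]
```

An unknown tag simply yields no candidates, in which case a server might fall back to a default picture for the target audio.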
4. The method according to claim 3, wherein the sum of the first playing time lengths of the candidate videos is less than or equal to the playing time length of the target audio, and the sending the picture playing instruction to the second terminal comprises:
and sending a first picture playing instruction to the second terminal, wherein the first picture playing instruction is used for indicating that picture information corresponding to the candidate videos is played in sequence in the process of playing the target audio.
5. The method according to claim 3, wherein the sum of the first playing time lengths of the candidate videos is greater than the playing time length of the target audio, and the sending the picture playing instruction to the second terminal comprises:
clipping the candidate videos so that the sum of their playing time lengths is reduced from the first sum to a second sum, and sending a second picture playing instruction to the second terminal, wherein the second picture playing instruction is used for instructing the second terminal to sequentially play, in the process of playing the target audio, picture information corresponding to the clipped candidate videos, and the sum of the second playing time lengths is less than or equal to the playing time length of the target audio;
or,
and sending a third picture playing instruction to the second terminal, wherein the third picture playing instruction is used for instructing the second terminal to sequentially play picture information corresponding to the candidate videos in the process of playing the target audio, and to stop playing the picture information when playback of the target audio ends.
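Claims 4 and 5 branch on the total candidate duration versus the audio duration: play everything in full when it fits, otherwise clip so that playback never outruns the audio. A hedged sketch of the clipping branch (the function name and the greedy clip-the-last-video policy are illustrative assumptions, not mandated by the claims):

```python
def plan_playback(candidate_durations, audio_duration):
    """Return per-candidate play lengths, in seconds.

    If the candidates already fit within the audio (claim 4), each is
    played in full; otherwise (claim 5, first branch) later candidates
    are clipped or dropped so the total never exceeds the audio."""
    plan, remaining = [], audio_duration
    for d in candidate_durations:
        if remaining <= 0:
            break  # no audio left: drop the remaining candidates
        plan.append(min(d, remaining))
        remaining -= plan[-1]
    return plan
```

The second branch of claim 5 corresponds to skipping the clipping entirely and simply stopping picture playback when the audio ends.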
6. The method of claim 1, wherein after the receiving of the target video sent by the video platform, the method further comprises:
determining a video type identifier of the target video according to the video content of the target video; or, acquiring a video type identifier of the target video from a video uploading request of the first terminal forwarded by the video platform;
acquiring a target video classification group corresponding to the video type identifier according to a second corresponding relation, wherein the second corresponding relation comprises the corresponding relation between the video type identifier and the video classification group;
storing the target video in the target video classification group.
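The storage step of claim 6 is a lookup through the second corresponding relation followed by an append into the resolved group. A minimal sketch, with hypothetical names throughout:

```python
def store_target_video(video_library, type_to_group, video_id, type_id):
    """Resolve the video type identifier to its video classification
    group via the second corresponding relation, then store the video
    in that group; returns the group it was stored in."""
    group = type_to_group[type_id]  # second corresponding relation
    video_library.setdefault(group, []).append(video_id)
    return group
```

In practice the type identifier would come either from content analysis of the target video or from the upload request forwarded by the video platform, as the claim's two alternatives describe.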
7. The method of claim 2, wherein the target audio comprises a plurality of audio clips, and wherein the obtaining of the plurality of candidate videos matching the target audio comprises:
acquiring first candidate videos corresponding to the plurality of audio clips from a video library according to a third corresponding relation, wherein the third corresponding relation comprises a corresponding relation between the audio clips and the first candidate videos;
the sending of the picture playing instruction to the second terminal includes:
and sending a fourth picture playing instruction to the second terminal, wherein the fourth picture playing instruction is used for instructing the audio application to play picture information corresponding to the plurality of first candidate videos when playing the target audio.
8. The method of claim 7, further comprising:
receiving a picture switching request sent by the second terminal, wherein the picture switching request carries the target playing time point of the target audio at which the second terminal received a picture switching operation signal;
acquiring a second candidate video corresponding to the first target audio clip where the target playing time point is located;
and sending a picture switching instruction to the second terminal, wherein the picture switching instruction is used for instructing the audio application to start switching and playing the picture information of the second candidate video at the target playing time point of the target audio in the process of playing the first target audio clip.
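The picture-switch handling of claim 8 reduces to locating the audio clip that contains the target playing time point, then returning the alternate (second) candidate video for that clip. A sketch under the assumption that clips are represented as (start, end) pairs; all names are hypothetical:

```python
def find_segment(segments, t):
    """Return the index of the audio clip containing time point t,
    or None if t falls outside every clip."""
    for i, (start, end) in enumerate(segments):
        if start <= t < end:
            return i
    return None

def handle_picture_switch(segments, segment_to_alt_video, t):
    """Build a picture switching instruction: the second candidate
    video for the clip in which t lies, to be switched in at t."""
    i = find_segment(segments, t)
    if i is None:
        return None
    return {"switch_at": t, "video": segment_to_alt_video[i]}
```

A linear scan suffices here; for many clips, a binary search over the sorted clip boundaries would do the same lookup in logarithmic time.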
9. The method of claim 7, wherein the third playing time length of the first candidate video is greater than the playing time length of the corresponding audio clip, and the method further comprises:
clipping the first candidate video with the third playing time length to the first candidate video with a fourth playing time length, wherein the fourth playing time length is less than or equal to the playing time length of the corresponding audio clip.
10. The method of claim 7, wherein before the obtaining of the plurality of candidate videos matching the target audio, the method further comprises:
and segmenting the target audio according to beat characteristics of the target audio to obtain the plurality of audio clips.
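Claim 10 slices the target audio at beat boundaries. A sketch assuming beat timestamps have already been detected upstream (e.g. by a beat tracker); the grouping of four beats per clip is an invented illustration, not something the claim specifies:

```python
def fragment_by_beats(audio_duration, beat_times, beats_per_fragment=4):
    """Split [0, audio_duration) into clips whose boundaries fall on
    every Nth detected beat, approximating beat-feature segmentation.

    beat_times: ascending beat timestamps in seconds.
    Returns a list of (start, end) pairs covering the whole audio."""
    cuts = [0.0]
    for i, t in enumerate(beat_times, start=1):
        if i % beats_per_fragment == 0 and 0.0 < t < audio_duration:
            cuts.append(t)
    cuts.append(audio_duration)
    return list(zip(cuts, cuts[1:]))
```

Cutting on beats rather than at fixed intervals keeps each clip boundary musically aligned, so the per-clip candidate videos of claim 7 change picture in time with the music.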
11. A video content processing method for use in a music platform, the method comprising:
receiving a target video uploaded by a first terminal;
sending the target video to a video platform, wherein the video platform is used for instructing a video application to play the picture information and audio information of the target video when playing the target video;
receiving an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting playback of a target audio;
and sending a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for instructing an audio application to play the picture information of the target video when playing the target audio.
12. A video content processing apparatus, for use in a music platform, the apparatus comprising:
a first receiving module, configured to receive a target video sent by a video platform, wherein the target video is a video uploaded to the video platform by a first terminal, and the video platform is used for instructing a video application to play the picture information and audio information of the target video when the target video is played;
a second receiving module, configured to receive an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting playback of a target audio;
and a playing instruction sending module, configured to send a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for instructing an audio application to play the picture information of the target video when playing the target audio.
13. A video content processing apparatus, for use in a music platform, the apparatus comprising:
a first receiving module, configured to receive a target video uploaded by a first terminal;
a first sending module, configured to send the target video to a video platform, wherein the video platform is used for instructing a video application to play the picture information and audio information of the target video when playing the target video;
a second receiving module, configured to receive an audio playing request sent by a second terminal, wherein the audio playing request is used for requesting playback of a target audio;
and a second sending module, configured to send a picture playing instruction to the second terminal according to the target video, wherein the picture playing instruction is used for instructing an audio application to play the picture information of the target video when playing the target audio.
14. A server, comprising a processor and a memory; the memory stores at least one instruction for execution by the processor to implement the video content processing method of any of claims 1 to 11.
15. A computer-readable storage medium having stored thereon at least one instruction for execution by a processor to implement the video content processing method of any of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811575177.9A CN109640125B (en) | 2018-12-21 | 2018-12-21 | Video content processing method, device, server and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109640125A true CN109640125A (en) | 2019-04-16 |
CN109640125B CN109640125B (en) | 2021-04-27 |
Family
ID=66076518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811575177.9A Active CN109640125B (en) | 2018-12-21 | 2018-12-21 | Video content processing method, device, server and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109640125B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103491454A (en) * | 2013-09-30 | 2014-01-01 | 天脉聚源(北京)传媒科技有限公司 | Method, system and device for sharing audio and video resources |
CN104661095A (en) * | 2015-02-28 | 2015-05-27 | 深圳市中兴移动通信有限公司 | Audio and video data recommendation method and system |
CN104794117A (en) * | 2014-01-16 | 2015-07-22 | 腾讯科技(深圳)有限公司 | Picture data migration method and device |
CN104954828A (en) * | 2015-06-16 | 2015-09-30 | 朱捷 | Video data transmission method, device and system |
CN105049959A (en) * | 2015-07-08 | 2015-11-11 | 腾讯科技(深圳)有限公司 | Multimedia file playing method and device |
CN106303556A (en) * | 2016-08-05 | 2017-01-04 | 乐视控股(北京)有限公司 | Video resource call method, Apparatus and system |
US20170142458A1 (en) * | 2015-11-16 | 2017-05-18 | Goji Watanabe | System and method for online collaboration of synchronized audio and video data from multiple users through an online browser |
CN106713985A (en) * | 2016-12-27 | 2017-05-24 | 广州酷狗计算机科技有限公司 | Method and device for recommending network video |
CN107147946A (en) * | 2017-05-05 | 2017-09-08 | 广州华多网络科技有限公司 | A kind of method for processing video frequency and device |
CN107682721A (en) * | 2016-12-26 | 2018-02-09 | 腾讯科技(北京)有限公司 | The playing method and device of media file |
CN107734376A (en) * | 2017-10-16 | 2018-02-23 | 维沃移动通信有限公司 | The method and device that a kind of multi-medium data plays |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111984111A (en) * | 2019-05-22 | 2020-11-24 | 中国移动通信有限公司研究院 | Multimedia processing method, device and communication equipment |
CN110248236A (en) * | 2019-07-02 | 2019-09-17 | 广州酷狗计算机科技有限公司 | Video broadcasting method, device, terminal and storage medium |
CN110324718B (en) * | 2019-08-05 | 2021-09-07 | 北京字节跳动网络技术有限公司 | Audio and video generation method and device, electronic equipment and readable medium |
CN110324718A (en) * | 2019-08-05 | 2019-10-11 | 北京字节跳动网络技术有限公司 | Audio-video generation method, device, electronic equipment and readable medium |
CN112528051A (en) * | 2019-09-19 | 2021-03-19 | 聚好看科技股份有限公司 | Singing work publishing method, display device and server |
CN112528051B (en) * | 2019-09-19 | 2022-10-14 | 聚好看科技股份有限公司 | Singing work publishing method, display device and server |
CN111625682A (en) * | 2020-04-30 | 2020-09-04 | 腾讯音乐娱乐科技(深圳)有限公司 | Video generation method and device, computer equipment and storage medium |
CN111625682B (en) * | 2020-04-30 | 2023-10-20 | 腾讯音乐娱乐科技(深圳)有限公司 | Video generation method, device, computer equipment and storage medium |
CN113207022A (en) * | 2021-05-08 | 2021-08-03 | 广州酷狗计算机科技有限公司 | Video playing method and device, computer equipment and storage medium |
CN113453064A (en) * | 2021-06-18 | 2021-09-28 | 海信电子科技(武汉)有限公司 | Resource playing method and display equipment |
CN113453064B (en) * | 2021-06-18 | 2023-02-24 | Vidaa(荷兰)国际控股有限公司 | Resource playing method and display equipment |
CN114501126A (en) * | 2021-12-25 | 2022-05-13 | 深圳市广和通无线股份有限公司 | Video playing method, system and storage medium |
CN114501126B (en) * | 2021-12-25 | 2024-03-15 | 深圳市广和通无线股份有限公司 | Video playing method, system and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109413342B (en) | Audio and video processing method and device, terminal and storage medium | |
CN109640125B (en) | Video content processing method, device, server and storage medium | |
CN109756784B (en) | Music playing method, device, terminal and storage medium | |
CN109379643B (en) | Video synthesis method, device, terminal and storage medium | |
CN111147878B (en) | Stream pushing method and device in live broadcast and computer storage medium | |
CN109040297B (en) | User portrait generation method and device | |
CN109151593B (en) | Anchor recommendation method, device and storage medium | |
CN109033335B (en) | Audio recording method, device, terminal and storage medium | |
CN108737897B (en) | Video playing method, device, equipment and storage medium | |
CN109922356B (en) | Video recommendation method and device and computer-readable storage medium | |
CN108848394A (en) | Net cast method, apparatus, terminal and storage medium | |
CN111083526B (en) | Video transition method and device, computer equipment and storage medium | |
CN111402844B (en) | Song chorus method, device and system | |
CN110933468A (en) | Playing method, playing device, electronic equipment and medium | |
CN111711838B (en) | Video switching method, device, terminal, server and storage medium | |
CN112541959A (en) | Virtual object display method, device, equipment and medium | |
CN109547843B (en) | Method and device for processing audio and video | |
CN109743461B (en) | Audio data processing method, device, terminal and storage medium | |
CN112165628A (en) | Live broadcast interaction method, device, equipment and storage medium | |
CN111625682A (en) | Video generation method and device, computer equipment and storage medium | |
CN111818367A (en) | Audio file playing method, device, terminal, server and storage medium | |
CN109547847B (en) | Method and device for adding video information and computer readable storage medium | |
CN108833970A (en) | Method, apparatus, computer equipment and the storage medium recorded is broadcast live | |
CN112559795A (en) | Song playing method, song recommending method, device and system | |
CN112511889A (en) | Video playing method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||