CN111641864A - Video information acquisition method, device and equipment - Google Patents

Video information acquisition method, device and equipment

Info

Publication number
CN111641864A
Authority
CN
China
Prior art keywords
request information
target
data stream
source video
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910157178.XA
Other languages
Chinese (zh)
Other versions
CN111641864B (en)
Inventor
左洪涛
刘阿海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910157178.XA
Publication of CN111641864A
Application granted
Publication of CN111641864B
Active legal status (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method, a device and equipment for acquiring video information. The method comprises the following steps: acquiring target video request information, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information; starting a first sub-thread, so that the first sub-thread sends the source video request information and obtains a source video data stream; starting a second sub-thread, so that the second sub-thread sends the source video request information and the target audio request information and obtains a target audio data stream; and generating a target video in real time according to the source video data stream and the target audio data stream. The invention can process two audio and video streams simultaneously and reduces the pressure of background transcoding and storage.

Description

Video information acquisition method, device and equipment
Technical Field
The present invention relates to the field of video information processing technologies, and in particular, to a method, an apparatus, and a device for acquiring video information.
Background
At present, international films such as Hollywood films are shown in many countries. Because viewers prefer different languages, such a film generally needs an original-sound version as well as versions in the languages of the countries in which it is shown, and different playback arrangements for the national-language audio and the original sound need to be provided, so as to meet users' demands for videos in different languages. For internationalized online media resources, if a player is required to support switching playback between languages, the scheme commonly used at present is: the background deploys film sources in multiple languages, and when the user switches the playback language, the player requests a different film source from the background.
This scheme requires the film-source provider to supply multiple film sources so that the intended source can be selected from them, or the intended source must be produced by re-encoding and re-encapsulating the original audio-video with audio data streams in the native languages of the various countries. However, the audience of an international film or of a domestic online video player typically spans dozens of countries; if film sources in dozens of languages have to be re-encoded and encapsulated in the background, the complexity of the background increases greatly. Moreover, re-encoding and encapsulation when the user switches languages delays playback and harms the user experience. In addition, storing a film source for each language increases the storage pressure on the background.
Therefore, it is necessary to provide a technical solution capable of efficiently acquiring video information, so that a user can flexibly select and enjoy video resources in a favorite language.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a video information acquisition method, apparatus and device, specifically including the following aspects:
in one aspect, a method for acquiring video information is provided, and the method includes:
acquiring target video request information, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information;
starting a first sub-thread, so that the first sub-thread sends the source video request information and obtains a source video data stream;
starting a second sub-thread, so that the second sub-thread sends the source video request information and the target audio request information and obtains a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and generating a target video in real time according to the source video data stream and the target audio data stream.
In another aspect, a video information obtaining method is provided, and the method includes:
obtaining a source video data stream according to source video request information sent by the first sub-thread;
obtaining a target audio data stream according to the source video request information and the target audio request information sent by the second sub-thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and transmitting the source video data stream and the target audio data stream in real time to obtain a target video.
Another aspect provides a video information acquisition apparatus, including:
the target video request information obtaining module is used for obtaining target video request information, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information;
the source video data stream acquisition module is used for starting a first sub-thread, so that the first sub-thread sends the source video request information and acquires a source video data stream;
the target audio data stream acquisition module is used for starting a second sub-thread, so that the second sub-thread sends the source video request information and the target audio request information and acquires a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and the target video generation module is used for generating a target video according to the source video data stream and the target audio data stream in real time.
Another aspect provides a video information acquisition apparatus, including:
the source video data stream obtaining module is used for obtaining a source video data stream according to source video request information sent by the first sub-thread;
the target audio data stream obtaining module is used for obtaining a target audio data stream according to the source video request information and the target audio request information sent by the second sub-thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and the target video obtaining module is used for sending the source video data stream and the target audio data stream to obtain a target video.
Another aspect provides an apparatus, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video information acquisition method according to any one of the above aspects.
The video information acquisition method, the device and the equipment provided by the invention have the beneficial effects that:
according to the invention, a first sub thread is started so that the first sub thread can send the source video request information to obtain a source video data stream; starting a second sub-thread so that the second sub-thread can send the source video request information and the target audio request information and obtain a target audio data stream; generating a target video according to the source video data stream and the target audio data stream in real time; the target audio data stream is uniquely determined according to the source video request information and the target language request information; and the starting time difference between the first sub thread and the second sub thread is smaller than a preset time threshold. The invention can process two paths of audio and video streams, reduces the pressure of background transcoding and storage, and enables users to efficiently and flexibly select and appreciate videos in the favorite languages.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by embodiments of the present description;
fig. 2 is a flowchart of a video information obtaining method provided by an embodiment of the present specification;
fig. 3 is a flowchart of steps provided by an embodiment of the present specification for sending target video request information to obtain video related information;
FIG. 4 is a flowchart illustrating steps provided by an embodiment of the present specification to determine a target audio storage address corresponding to the target audio request information;
FIG. 5 is a schematic diagram of an interface display provided by an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating steps provided by an embodiment of the present specification to obtain the target video according to the source video data stream and the target audio;
fig. 7 is a flowchart of another video information acquisition method provided by an embodiment of the present specification;
fig. 8 is a timing diagram of a system-based video information acquisition method according to an embodiment of the present disclosure;
fig. 9 is a timing diagram of another system-based video information acquisition method provided by an embodiment of the present specification;
fig. 10 is a block diagram of a video information acquisition apparatus according to an embodiment of the present disclosure;
fig. 11 is a block diagram of another video information acquisition apparatus provided in an embodiment of the present specification;
fig. 12 is a schematic structural diagram of a video information acquisition apparatus provided in an embodiment of this specification.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Generally, system players are implemented by the system itself, cannot be modified, and are highly limited, so they cannot process two audio and video streams at the same time. In addition, hardware decoding via MediaCodec is not very stable on existing platforms such as TVs, where the system player is preferentially supported; however, the system player supports neither an externally attached (plug-in) audio track nor cross-platform application. The playing terminal provided by this specification can process the original-sound film source while using another path to process the audio stream; audio-picture synchronization is achieved by keeping the two playing timestamps synchronized; the method can flexibly adapt to different languages and meets the user's demand for a video with the target audio; the complexity of background operation is reduced, and the storage pressure of the background is greatly reduced.
As shown in fig. 1, a schematic diagram of an implementation environment provided by the embodiments of the present description is shown. The implementation environment includes: a server 02 and a terminal 01 which communicates information with the server 02.
Terminal 01 may be a mobile phone, a tablet, a laptop computer, a PAD or a desktop computer, etc. A client runs in the terminal 01, and the client may be any client capable of acquiring and outputting video information; for example, the client operating in the terminal 01 may be a video playing client, a browser, an instant messaging client, a shopping client, or the like. The server 02 may be a single server, a server cluster composed of a plurality of servers, or a cloud computing service center. The server 02 establishes a communication connection with the terminal 01 through a network.
Specifically, an embodiment of the present specification provides a video information obtaining method, as shown in fig. 2, the method includes:
s202, target video request information is obtained, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information;
in this embodiment, if the method is applied to a scene where a playing terminal plays videos, when a user requests a video in a certain language, the playing terminal generates a corresponding target video request message. The target video comprises video resources selected by a user and audio resources matched and fused with the video resources; specifically, the source video request information corresponds to a video resource selected by a user; the target audio request information corresponds to an audio resource which can be matched and fused with the source video, and the target language request information corresponds to the language of the audio resource.
S204, starting a first sub-thread, so that the first sub-thread sends the source video request information and obtains a source video data stream;
S206, starting a second sub-thread, so that the second sub-thread sends the source video request information and the target audio request information and obtains a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
the first sub thread and the second sub thread in this embodiment are threads initiated by a playing terminal, and the playing terminal may include a first player and a second player; in particular, the first player may be a system player and the second player may be a self-developed player. Accordingly, the first sub-thread in this embodiment may be a thread initiated by the first player, and the second sub-thread may be a thread initiated by the second player.
In this embodiment, in one line, the system player sends the source video request information to the background media server, and after receiving the source video request information, the background media server finds a corresponding source video data stream according to the source video request information, and further returns the source video data stream to the playing terminal. The target audio data stream is a target audio matched with the source video data stream, so that the request content of the source video request information is required in the process of determining the target audio data stream. Correspondingly, in another line, the self-research player sends the source video request information and the target audio request information to the background media server, and the background media server finds the corresponding target audio after receiving the source video request information and the target audio request information, and further returns the target audio to the playing terminal.
It should be noted that, if the preset time threshold is 500ms close to 0, the starting time difference between the first sub-thread and the second sub-thread is less than 500ms, and the first sub-thread and the second sub-thread can almost achieve the effect of simultaneously acquiring a source video data stream and a target video data stream, so that the efficiency of acquiring a target video can be improved; if the starting time difference between the first sub-thread and the second sub-thread is 0, the first sub-thread and the second sub-thread are executed simultaneously, so that two paths of request information are processed simultaneously, the processing efficiency is improved, and the pressures of background transcoding and storage are reduced.
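As an aid to understanding S204 and S206, the following Java sketch shows one way the playing terminal could start the two sub-threads back to back; the two request/receive paths are passed in as Runnable placeholders (hypothetical, not an actual player API), and releasing both threads through a single latch keeps the start-time difference close to zero, well under a threshold such as 500 ms:

```java
import java.util.concurrent.CountDownLatch;

// Minimal sketch of starting the first and second sub-threads almost simultaneously.
// fetchSourceVideo and fetchTargetAudio stand in for the request/receive paths of S204 and S206.
public class DualStreamStarter {
    public void start(Runnable fetchSourceVideo, Runnable fetchTargetAudio) {
        CountDownLatch gate = new CountDownLatch(1);

        Thread first = new Thread(() -> {            // first sub-thread: source video data stream
            try { gate.await(); } catch (InterruptedException e) { return; }
            fetchSourceVideo.run();
        });
        Thread second = new Thread(() -> {           // second sub-thread: target audio data stream
            try { gate.await(); } catch (InterruptedException e) { return; }
            fetchTargetAudio.run();
        });

        first.start();
        second.start();
        gate.countDown();                            // release both threads at (almost) the same instant,
                                                     // keeping the start-time difference near zero
    }
}
```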
In a possible embodiment, the method may further include:
s203, sending target video request information to obtain video associated information; the video associated information comprises a source video storage address and a target audio storage address;
in this embodiment, the source video storage address is used to represent a storage location of the source video data stream in the background media server; the target audio storage address is used for representing the storage position of the target audio in the background media server.
Specifically, a video information server acquires the target video request information from the playing terminal and then obtains the video associated information according to the target video request information; the video information server also authenticates the target video request information to determine its legality and authenticity. In detail:
s1a, performing security verification on the target video request information;
and S1b, when the verification result of the target video request information is legal request information, acquiring the video associated information.
The performing security verification on the target video request information may include:
s2a, extracting public key encryption information from the target video request information;
s2b, obtaining plaintext information obtained by decrypting the public key encrypted information through a private key;
and S2c, when the plaintext information obtained by decryption is legal information, the target video request information is legal request information.
In this embodiment, the target video request information may be subjected to validity check in a key verification manner; specifically, the information encrypted by the public key is extracted from the target video request information, the encrypted information is decrypted by a private key of the server side to obtain plaintext information, and the target video request information can be considered as legal request information under the condition that the plaintext information obtained by decryption conforms to the program verification logic of the server.
An example of the legality check: suppose an agreed rule exists between the server and the client. The client encrypts a number and sends the ciphertext to the server; the server decrypts the ciphertext to obtain the plaintext number; if the number is verified to fall within a preset interval, the rule agreed between the server and the client is considered satisfied, and the request information corresponding to that number is legal.
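The following Java sketch illustrates this kind of check; RSA is used here only as one possible public-key scheme, and the interval [1000, 9999] is an invented stand-in for the verification logic agreed between server and client:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import javax.crypto.Cipher;

public class RequestVerifier {
    // Server side: decrypt the public-key-encrypted field with the private key and
    // check the recovered number against an agreed interval (illustrative logic only).
    public static boolean isLegalRequest(byte[] encryptedField, PrivateKey serverKey) throws Exception {
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, serverKey);
        String plaintext = new String(cipher.doFinal(encryptedField), StandardCharsets.UTF_8);
        int value = Integer.parseInt(plaintext.trim());
        return value >= 1000 && value <= 9999;
    }

    public static void main(String[] args) throws Exception {
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Client side: encrypt the agreed number with the server's public key.
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] field = enc.doFinal("4321".getBytes(StandardCharsets.UTF_8));

        System.out.println(isLegalRequest(field, pair.getPrivate()));   // prints true
    }
}
```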
Further, the step S204 of sending the source video request information and acquiring a source video data stream may include:
sending the source video request information and acquiring a source video data stream obtained according to the source video request information and the source video storage address;
in this embodiment, after receiving the source video request information, the background media server finds the source video data stream according to the source video request information and the source video storage address, and further returns the source video data stream to the playing terminal.
Further, the step S206 of sending the source video request information and the target audio request information and obtaining the target audio data stream may include:
and sending the source video request information and the target audio request information, and acquiring a target audio data stream obtained according to the source video request information, the target audio request information and the target audio storage address.
Similarly, in this embodiment, after receiving the source video request information and the target audio request information, the background media server finds the target audio data stream according to the source video request information, the target audio request information, and the target audio storage address, and further returns the target audio data stream to the play terminal.
In a specific embodiment, the step S203 of sending the target video request information to obtain the video related information, as shown in fig. 3, includes:
s402, sending the target video request information to obtain intermediate associated information, wherein the intermediate associated information comprises a source video storage address, an audio language supported by a source video data stream and a storage address of an audio language supported by the source video data stream;
s404, determining a target audio storage address corresponding to the target audio request information from the storage addresses of the audio languages supported by the source video data stream;
s406, obtaining the video associated information according to the source video storage address and the target audio storage address.
In this embodiment, the audio language supported by the source video data stream may be understood as an audio language that can be rendered with the source video data stream to generate a new video data stream. And the storage address of the audio language is used for representing the storage position of the audio language in the background media server.
Specifically, the storage location, on the background media server, of the target audio intended by the user in the target audio request information is found by determining the target audio storage address. Meanwhile, the storage location of the source video data stream on the background media server is found according to the source video storage address.
In a specific implementation, before the step S404 of determining a target audio storage address corresponding to the target audio request information, the method further includes, as shown in fig. 4:
s602, acquiring the number of audio languages supported by the source video data stream;
s604, when the number of the audio languages supported by the source video data stream is multiple, judging whether the audio languages contain a target language corresponding to the target language request information;
and S606, when the language comprises a target language corresponding to the target language request information, determining a target audio storage address.
In this embodiment, the languages supported by the source video data stream are examined to find the language corresponding to the target audio request information, so as to determine the target audio storage address. This embodiment ensures that the requested language is indeed supported by the source video data stream, thereby ensuring that the target video can finally be obtained and the user's requirement is met.
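Steps S602 to S606 amount to a lookup guarded by a language check. A minimal Java sketch follows, under the assumption that the intermediate associated information has already been parsed into a map from each supported audio language to its storage address (both the map layout and the method names are hypothetical):

```java
import java.util.Map;
import java.util.Optional;

public final class AudioAddressResolver {
    // supportedAudio: audio languages supported by the source video data stream,
    // each mapped to its storage address on the background media server.
    public static Optional<String> resolve(Map<String, String> supportedAudio, String targetLanguage) {
        // S602/S604: only when multiple audio languages are supported, check for the target language.
        if (supportedAudio.size() > 1 && supportedAudio.containsKey(targetLanguage)) {
            return Optional.of(supportedAudio.get(targetLanguage));   // S606: target audio storage address
        }
        return Optional.empty();   // target language unavailable: no separate audio address is returned
    }
}
```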
It should be noted that in this embodiment, after the video associated information is obtained, the source video storage address is extracted from it, and when the system player initiates a video request to the background media server, the corresponding source video data stream is obtained according to the source video storage address. Meanwhile, if the source video data stream is determined to support multiple languages and a target audio storage address has been extracted, the self-developed player requests the audio data of the target audio from the background media server.
And S208, generating a target video in real time according to the source video data stream and the target audio data stream.
In this embodiment, after obtaining the source video data stream and the target audio, the playing terminal renders them together to obtain the target video, as shown in the interface display diagram of fig. 5.
In a possible implementation, the step S208 generates the target video from the source video data stream and the target audio data stream in real time, as shown in fig. 6, which may include:
s802, acquiring a first playing time stamp and a second playing time stamp; the first playing time stamp is the playing time stamp of the source video data stream, and the second playing time stamp is the playing time stamp of the target audio data stream;
s804, adjusting the second playing time stamp to enable the second playing time stamp to be synchronous with the first playing time stamp;
and S806, obtaining the target video according to the source video data stream, the first playing time stamp, the target audio data stream and the second playing time stamp.
In this embodiment, when the data received on the source video data stream path and the data received on the target audio path have both been buffered to a playable state, the playing terminal may start playing the target video. Specifically, adjusting the second playing timestamp to synchronize it with the first playing timestamp can be understood as the encapsulation layer of the playing terminal providing the playing timestamp of the system player to the self-developed player, so that the sound and picture of the two data streams are played in synchronization and normal playback is achieved.
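A schematic Java rendering of S802 to S806 is given below; the PlaybackClock interface and the 40 ms tolerance are assumptions made for illustration and do not correspond to an actual player API:

```java
public final class AvSyncController {
    private static final long MAX_DRIFT_MS = 40;   // illustrative tolerance, not specified by this embodiment

    // Stand-in for whatever position/seek interface the two players expose.
    public interface PlaybackClock {
        long playbackPositionMs();
        void seekTo(long positionMs);
    }

    // Slaves the second playing timestamp (target audio) to the first one (source video).
    public static void synchronize(PlaybackClock videoPlayer, PlaybackClock audioPlayer) {
        long first = videoPlayer.playbackPositionMs();    // first playing timestamp
        long second = audioPlayer.playbackPositionMs();   // second playing timestamp
        if (Math.abs(first - second) > MAX_DRIFT_MS) {
            audioPlayer.seekTo(first);                    // S804: adjust the second timestamp to match the first
        }
    }
}
```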
It should be noted that the source video data stream corresponding to the source video request information includes original video image information and initial language information. The initial language information may be the language in which the file corresponding to the source video data stream was produced, such as English for a Hollywood blockbuster, or the language supported by the local system, such as Chinese on a mainland-China version of an Apple mobile phone. When the user makes no language selection, the target audio defaults to the language supported by the local system.
The technical solution for acquiring video information provided in this specification compensates for the system player's lack of support for a plug-in audio track; the user can flexibly select an intended language, or the local language can be adapted automatically for playback; a large number of videos no longer need to be processed and stored in the background, which greatly reduces the pressure on the background server; the requirements of users of different languages are met, the user experience is improved, and the user stickiness of the corresponding terminal device is increased.
An embodiment of the present specification provides a video information obtaining method, as shown in fig. 7, the method includes:
s1002, obtaining a source video data stream according to source video request information sent by a first sub thread;
s1004, obtaining a target audio data stream according to the source video request information and the target audio request information sent by the second sub thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub thread and the second sub thread is smaller than a preset time threshold;
and S1006, transmitting the source video data stream and the target audio data stream in real time to obtain a target video.
In this embodiment, the video information server obtains target video request information from a playing terminal, and further obtains video associated information according to the target video request information, and then, the video information server sends the video associated information to the playing terminal, so that the playing terminal obtains a source video data stream and a target audio from a background media server to generate the target video.
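On the server side, S1002 to S1006 essentially reduce to two lookups keyed by the request information. The Java sketch below uses in-memory maps as a stand-in for the background media server's storage; the identifiers, types and layout are illustrative assumptions only:

```java
import java.util.Map;

public final class MediaStreamService {
    private final Map<String, byte[]> videoStreams;                // sourceVideoId -> source video data stream
    private final Map<String, Map<String, byte[]>> audioStreams;   // sourceVideoId -> (language -> audio data stream)

    public MediaStreamService(Map<String, byte[]> videoStreams,
                              Map<String, Map<String, byte[]>> audioStreams) {
        this.videoStreams = videoStreams;
        this.audioStreams = audioStreams;
    }

    // S1002: the source video data stream is addressed by the source video request information.
    public byte[] sourceVideoStream(String sourceVideoId) {
        return videoStreams.get(sourceVideoId);
    }

    // S1004: the target audio data stream is uniquely determined by the source video id and the target language.
    public byte[] targetAudioStream(String sourceVideoId, String targetLanguage) {
        Map<String, byte[]> byLanguage = audioStreams.get(sourceVideoId);
        return byLanguage == null ? null : byLanguage.get(targetLanguage);
    }
}
```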
It should be noted that, in this embodiment, the content related to the play terminal and other interactions may refer to descriptions of other parts in the specification, and specific details are not described herein again.
An embodiment of this specification also provides a system, which comprises a playing terminal (Player), a video information server (Cgi Server) and a background media server (MediaServer); the video information is acquired through the cooperation of the playing terminal, the background media server and the video information server. As shown in fig. 8, a video information acquisition method based on this system can be obtained in this embodiment, including:
the playing terminal acquires the operation of a user on a video request and generates target video request information (belonging to a video request);
the playing terminal sends the target video request information to a video information server;
the video information server obtains video associated information (belonging to video response) according to the target video request information and returns the video associated information to the playing terminal;
the playing terminal sends source video request information (belonging to video request) in the target video request information to the background media server;
the background media server acquires a source video data stream (belonging to a video response) pointed by a source video storage address in the video associated information according to the source video request information; returning the source video data stream to the playing terminal;
meanwhile, the playing terminal sends the source video request information and the target audio request information (belonging to an audio request) to the background media server;
the background media server acquires a target audio (belonging to audio response) pointed by a uniquely determined target audio storage address according to the source video request information and the target audio request information; returning the target audio to the playing terminal;
and the playing terminal renders the received source video data stream and the target audio to generate the target video.
The playing terminal in this embodiment includes a system player (system player) and a self-developed player (self player); specifically, as shown in fig. 9, the system player may send the source video request information (belonging to a video request) in the target video request information to the background media server to obtain the source video data stream; meanwhile, the self-developed player sends the target audio request information (belonging to an audio request) in the target video request information to the background media server to acquire the target audio.
It should be noted that the present embodiment has the same inventive concept as the method embodiment, and specific details are not described herein again.
An embodiment of the present specification provides a video information acquiring apparatus, as shown in fig. 10, the apparatus includes:
a target video request information obtaining module 202, configured to obtain target video request information, where the target video request information includes source video request information and target audio request information, and the target audio request information includes target language request information;
a source video data stream obtaining module 204, configured to start a first sub-thread, so that the first sub-thread sends the source video request information and obtains a source video data stream;
a target audio data stream obtaining module 206, configured to start a second sub-thread, so that the second sub-thread sends the source video request information and the target audio request information and obtains a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub thread and the second sub thread is smaller than a preset time threshold;
and the target video generation module 208 is configured to generate a target video according to the source video data stream and the target audio data stream in real time.
In a possible embodiment, the apparatus further comprises:
the video associated information obtaining module is used for sending target video request information to obtain video associated information; the video associated information comprises a source video storage address and a target audio storage address;
the source video data stream obtaining module comprises:
a source video data stream obtaining unit, configured to send the source video request information and obtain the source video data stream according to the source video request information and the source video storage address;
the target audio data stream obtaining module comprises:
and the target audio data stream acquiring unit is used for sending the source video request information and the target audio request information and acquiring the target audio data stream according to the source video request information, the target audio request information and the target audio storage address.
In a specific embodiment, the target video generation module may include:
a playing time stamp obtaining unit, configured to obtain a first playing time stamp and a second playing time stamp; the first playing time stamp is the playing time stamp of the source video data stream, and the second playing time stamp is the playing time stamp of the target audio data stream;
a playing time stamp adjusting unit, configured to adjust the second playing time stamp so that the second playing time stamp is synchronized with the first playing time stamp;
and the target video obtaining unit is used for obtaining the target video according to the source video data stream, the first playing time stamp, the target audio data stream and the second playing time stamp.
The video associated information obtaining module comprises:
an intermediate associated information obtaining unit, configured to send the target video request information to obtain intermediate associated information, where the intermediate associated information includes a source video storage address, an audio language supported by the source video data stream, and a storage address of an audio language supported by the source video data stream;
the target audio storage address determining unit is used for determining a target audio storage address corresponding to the target audio request information from the storage addresses of the audio languages supported by the source video data stream;
and the video associated information obtaining unit is used for obtaining the video associated information according to the source video storage address and the target audio storage address.
The video associated information obtaining module further includes: a target language determination unit, the target language determination unit comprising:
the language number determining subunit is used for acquiring the number of audio languages supported by the source video data stream;
a target language type determining subunit, configured to determine, when the number of audio language types supported by the source video data stream is multiple, whether the audio language type includes a target language type corresponding to the target language type request information;
and the target language determining subunit is used for determining a target audio storage address when the language comprises a target language corresponding to the target language request information.
It should be noted that the device embodiments and the corresponding method embodiments have the same inventive concept, and detailed description is omitted here.
An embodiment of the present specification provides a video information acquiring apparatus, as shown in fig. 11, the apparatus includes:
a source video data stream obtaining module 402, configured to obtain a source video data stream according to source video request information sent by the first sub-thread;
a target audio data stream obtaining module 404, configured to obtain a target audio data stream according to the source video request information and the target audio request information sent by the second sub-thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub thread and the second sub thread is smaller than a preset time threshold;
and a target video obtaining module 406, configured to send the source video data stream and the target audio data stream in real time to obtain a target video.
It should be noted that the device embodiments and the corresponding method embodiments have the same inventive concept, and detailed description is omitted here.
The present specification provides an apparatus including a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the video information acquisition method according to any one of the above embodiments.
Specifically, this specification provides a schematic structural diagram of a video information acquisition device, as shown in fig. 12. A client may be installed on the device, and the device may be used to implement the video information acquisition method provided in the foregoing embodiments. Specifically:
the device may include RF (Radio Frequency) circuitry 810, memory 820 including one or more computer-readable storage media, input unit 830, display unit 840, sensor 850, audio circuitry 860, WiFi (wireless fidelity) module 870, processor 880 including one or more processing cores, and power supply 890. Those skilled in the art will appreciate that the configuration of the device shown in fig. 12 is not intended to be limiting of the device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 810 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information from a base station and then processing the received downlink information by the one or more processors 880; in addition, data relating to uplink is transmitted to the base station. In general, RF circuitry 810 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (low noise amplifier), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for mobile communications), GPRS (General Packet Radio Service), CDMA (code division Multiple Access), WCDMA (Wideband code division Multiple Access), LTE (Long Term Evolution), email, SMS (short messaging Service), etc.
The memory 820 may be used to store software programs and modules, and the processor 880 executes various functional applications and data processing by operating the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 820 may also include a memory controller to provide the processor 880 and the input unit 830 access to the memory 820.
The input unit 830 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 830 may include a touch-sensitive surface 831 as well as other input devices 832. The touch-sensitive surface 831, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 831 (e.g., operations by a user on or near the touch-sensitive surface 831 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predefined program. Alternatively, the touch-sensitive surface 831 can include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 880, and can receive and execute commands from the processor 880. In addition, the touch-sensitive surface 831 can be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 830 may include other input devices 832 in addition to the touch-sensitive surface 831. In particular, other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 may be used to display information input by or provided to a user and various graphical user interfaces of the device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 840 may include a Display panel 841, and the Display panel 841 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like, as an option. Further, touch-sensitive surface 831 can overlay display panel 841 and, upon detecting a touch operation on or near touch-sensitive surface 831, communicate to processor 880 to determine the type of touch event, whereupon processor 880 can provide a corresponding visual output on display panel 841 in accordance with the type of touch event. Where touch-sensitive surface 831 and display panel 841 can be two separate components to implement input and output functions, touch-sensitive surface 831 can also be integrated with display panel 841 to implement input and output functions in some embodiments.
The device may also include at least one sensor 850, such as light sensors, motion sensors, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 841 based on the brightness of ambient light, and a proximity sensor that may turn off the display panel 841 and/or backlight when the device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the device is stationary, and can be used for applications of recognizing the device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured to the device, detailed description is omitted here.
Audio circuitry 860, speaker 861, microphone 862 may provide an audio interface between a user and the device. The audio circuit 860 can transmit the electrical signal converted from the received audio data to the speaker 861, and the electrical signal is converted into a sound signal by the speaker 861 and output; on the other hand, the microphone 862 converts collected sound signals into electrical signals, which are received by the audio circuit 860 and converted into audio data, which are then processed by the audio data output processor 880 and transmitted to, for example, another device via the RF circuit 810, or output to the memory 820 for further processing. The audio circuitry 860 may also include an earbud jack to provide communication of peripheral headphones with the device.
WiFi belongs to short-range wireless transmission technology, and the device can help users send and receive e-mails, browse web pages, access streaming media and the like through the WiFi module 870, and provides wireless broadband internet access for users. Although fig. 12 shows WiFi module 870, it is understood that it does not belong to the essential constitution of the device and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 880 is a control center of the apparatus, connects various parts of the entire apparatus using various interfaces and lines, performs various functions of the apparatus and processes data by operating or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby monitoring the entire apparatus. Optionally, processor 880 may include one or more processing cores; preferably, the processor 880 may integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 880.
The device also includes a power supply 890 (e.g., a battery) for powering the various components, which may be logically coupled to processor 880 via a power management system that may be used to manage charging, discharging, and power consumption. Power supply 890 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the device may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the display unit of the apparatus is a touch screen display, the apparatus further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors according to the instructions of the method embodiments of the present invention.
Embodiments of the present invention further provide a computer storage medium, where the storage medium may be disposed in a client to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a video information acquisition method in the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions are loaded and executed by the processor to implement the video information acquisition method provided in the method embodiments.
Optionally, in this embodiment, the storage medium may be located in at least one of a plurality of network devices of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
It should be noted that the order of the embodiments in this specification is merely for description and does not indicate that any embodiment is better or worse than another. Specific embodiments have been described above; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the device and server embodiments are substantially similar to the method embodiments, their description is relatively brief, and the relevant points can be found in the corresponding parts of the method embodiments.
Those skilled in the art will understand that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1. A method for acquiring video information, the method comprising:
acquiring target video request information, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information;
starting a first sub-thread so that the first sub-thread can send the source video request information and obtain a source video data stream;
starting a second sub-thread so that the second sub-thread can send the source video request information and the target audio request information and obtain a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and generating a target video according to the source video data stream and the target audio data stream in real time.
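For orientation only, the two-sub-thread retrieval recited in claim 1 above can be pictured with the following Python sketch. It is an illustrative sketch under stated assumptions, not the claimed implementation: the helper fetch_stream(), the queue-based hand-off, and any URLs passed in are hypothetical.

    # Illustrative sketch only; fetch_stream() and the queue hand-off are assumptions.
    import queue
    import threading
    import urllib.request

    def fetch_stream(url, out, chunk_size=64 * 1024):
        """Download one media stream in chunks and hand each chunk to a consumer."""
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                out.put(chunk)
        out.put(None)  # end-of-stream marker

    def acquire_target_video(source_video_url, target_audio_url):
        video_q = queue.Queue()  # filled by the first sub-thread
        audio_q = queue.Queue()  # filled by the second sub-thread
        # Start the two sub-threads back to back so that their start-time
        # difference stays below any reasonable preset time threshold.
        t1 = threading.Thread(target=fetch_stream, args=(source_video_url, video_q))
        t2 = threading.Thread(target=fetch_stream, args=(target_audio_url, audio_q))
        t1.start()
        t2.start()
        # A real player would feed both queues into a demuxer/renderer that
        # multiplexes the video frames with the selected-language audio in real time.
        return video_q, audio_q

The point of starting the two sub-threads almost simultaneously is that the picture and the selected-language audio arrive in parallel, so the target video can be assembled while both streams are still downloading.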
2. The method according to claim 1, further comprising:
sending target video request information to obtain video associated information; the video associated information comprises a source video storage address and a target audio storage address;
the sending the source video request information and obtaining a source video data stream includes:
sending the source video request information and acquiring a source video data stream obtained according to the source video request information and the source video storage address;
the sending the source video request information and the target audio request information and obtaining the target audio data stream includes:
and sending the source video request information and the target audio request information, and acquiring a target audio data stream obtained according to the source video request information, the target audio request information and the target audio storage address.
3. The method for acquiring video information according to claim 1, wherein the generating a target video from the source video data stream and a target audio data stream in real time comprises:
acquiring a first playing time stamp and a second playing time stamp; the first playing time stamp is the playing time stamp of the source video data stream, and the second playing time stamp is the playing time stamp of the target audio data stream;
adjusting the second playing time stamp to synchronize the second playing time stamp with the first playing time stamp;
and obtaining the target video according to the source video data stream, the first playing time stamp, the target audio data stream and the second playing time stamp.
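As a rough illustration of the timestamp alignment in claim 3 above (a minimal sketch; the function name and the plain-number timestamps are assumptions introduced here):

    # Minimal sketch: shift the audio timestamps onto the video timeline.
    def synchronize_timestamps(video_pts, audio_pts):
        """Return audio timestamps adjusted so their origin matches the video clock."""
        if not video_pts or not audio_pts:
            return list(audio_pts)
        offset = video_pts[0] - audio_pts[0]
        return [pts + offset for pts in audio_pts]

    # Example: audio that starts 0.5 s earlier than the video is shifted forward.
    video_pts = [10.0, 10.25, 10.5]  # first playing time stamps (source video)
    audio_pts = [9.5, 9.75, 10.0]    # second playing time stamps (target audio)
    print(synchronize_timestamps(video_pts, audio_pts))  # [10.0, 10.25, 10.5]

Once the second playing time stamps have been shifted onto the same clock as the first, the two streams can be interleaved to obtain the target video.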
4. The method according to claim 2, wherein said sending the target video request information to obtain the video associated information comprises:
sending the target video request information to obtain intermediate associated information, wherein the intermediate associated information comprises a source video storage address, an audio language supported by the source video data stream and a storage address of an audio language supported by the source video data stream;
determining a target audio storage address corresponding to the target audio request information from the storage addresses of the audio languages supported by the source video data stream;
and obtaining the video associated information according to the source video storage address and the target audio storage address.
5. The method according to claim 4, wherein before the determining of the target audio storage address corresponding to the target audio request information, the method further comprises:
acquiring the number of audio languages supported by the source video data stream;
when the number of the audio languages supported by the source video data stream is multiple, judging whether the audio languages contain a target language corresponding to the target language request information;
and when the audio languages comprise the target language corresponding to the target language request information, determining the target audio storage address.
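A hedged sketch of the selection logic in claims 4 and 5 above: the storage address of the requested audio language is taken from the intermediate associated information only when several languages are available and the requested one is among them. The dictionary layout and the example URLs are assumptions made for illustration, not a format defined by this application.

    # Illustrative only; the layout of intermediate_info is an assumption.
    def resolve_target_audio_address(intermediate_info, target_language):
        """Return the storage address of the requested audio language, if supported."""
        supported = intermediate_info.get("audio_tracks", {})  # language -> storage address
        if len(supported) > 1 and target_language in supported:
            return supported[target_language]
        return None

    intermediate_info = {
        "source_video_address": "https://cdn.example.com/movie/video.m3u8",
        "audio_tracks": {
            "en": "https://cdn.example.com/movie/audio_en.m3u8",
            "zh": "https://cdn.example.com/movie/audio_zh.m3u8",
        },
    }
    print(resolve_target_audio_address(intermediate_info, "en"))
    # -> https://cdn.example.com/movie/audio_en.m3u8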
6. A method for acquiring video information, the method comprising:
obtaining a source video data stream according to source video request information sent by the first sub-thread;
obtaining a target audio data stream according to the source video request information and the target audio request information sent by the second sub-thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and transmitting the source video data stream and the target audio data stream in real time to obtain a target video.
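On the serving side, the counterpart of claim 6 above can be pictured as a server that answers the two sub-threads' requests with the source video stream and the language-specific audio stream, respectively. The standalone sketch below uses Python's built-in http.server purely for illustration; the URL paths and file names are invented for this example.

    # Illustrative sketch only; paths and file names are hypothetical.
    import http.server
    import socketserver

    MEDIA = {
        "/video": "movie_video.mp4",        # source video data stream
        "/audio/en": "movie_audio_en.aac",  # target audio data stream (English)
        "/audio/zh": "movie_audio_zh.aac",  # target audio data stream (Chinese)
    }

    class MediaHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            file_name = MEDIA.get(self.path)
            if file_name is None:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            with open(file_name, "rb") as f:  # stream the file in chunks
                while True:
                    chunk = f.read(64 * 1024)
                    if not chunk:
                        break
                    self.wfile.write(chunk)

    if __name__ == "__main__":
        # A threading server lets the video and audio requests be served in parallel.
        with socketserver.ThreadingTCPServer(("", 8080), MediaHandler) as server:
            server.serve_forever()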
7. A video information acquisition apparatus, characterized in that the apparatus comprises:
the target video request information obtaining module is used for obtaining target video request information, wherein the target video request information comprises source video request information and target audio request information, and the target audio request information comprises target language request information;
the source video data stream acquisition module is used for starting a first sub-thread so that the first sub-thread can send the source video request information and acquire a source video data stream;
the target audio data stream acquisition module is used for starting a second sub-thread so that the second sub-thread can send the source video request information and the target audio request information and acquire a target audio data stream; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and the target video generation module is used for generating a target video according to the source video data stream and the target audio data stream in real time.
8. The video information acquisition apparatus according to claim 7, wherein the apparatus further comprises:
the video associated information obtaining module is used for sending target video request information to obtain video associated information; the video associated information comprises a source video storage address and a target audio storage address;
the source video data stream obtaining module comprises:
a source video data stream obtaining unit, configured to send the source video request information and obtain a source video data stream obtained according to the source video request information and the source video storage address;
the target audio data stream obtaining module comprises:
and the target audio data stream acquiring unit is used for sending the source video request information and the target audio request information and acquiring a target audio data stream obtained according to the source video request information, the target audio request information and the target audio storage address.
9. A video information acquisition apparatus, characterized in that the apparatus comprises:
the source video data stream obtaining module is used for obtaining a source video data stream according to source video request information sent by the first sub-thread;
the target audio data stream obtaining module is used for obtaining a target audio data stream according to the source video request information and the target audio request information sent by the second sub-thread; the target audio request information comprises target language request information; the target audio data stream is uniquely determined according to the source video request information and the target language request information; the starting time difference between the first sub-thread and the second sub-thread is smaller than a preset time threshold;
and the target video obtaining module is used for sending the source video data stream and the target audio data stream in real time to obtain a target video.
10. An apparatus comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the video information acquisition method according to any one of claims 1 to 5 or the video information acquisition method according to claim 6.
CN201910157178.XA 2019-03-01 2019-03-01 Video information acquisition method, device and equipment Active CN111641864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910157178.XA CN111641864B (en) 2019-03-01 2019-03-01 Video information acquisition method, device and equipment

Publications (2)

Publication Number Publication Date
CN111641864A true CN111641864A (en) 2020-09-08
CN111641864B CN111641864B (en) 2022-05-20

Family

ID=72330522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910157178.XA Active CN111641864B (en) 2019-03-01 2019-03-01 Video information acquisition method, device and equipment

Country Status (1)

Country Link
CN (1) CN111641864B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2483222A (en) * 2010-08-24 2012-03-07 Extas Global Ltd Accessing a website by retrieving website data stored at separate storage locations
CN103905879A (en) * 2014-03-13 2014-07-02 北京奇艺世纪科技有限公司 Video data and audio data synchronized playing method and device and equipment
US20180032384A1 (en) * 2014-12-18 2018-02-01 Amazon Technologies, Inc. Secure script execution using sandboxed environments
CN106331753A (en) * 2015-06-30 2017-01-11 意法半导体国际有限公司 Synchronized rendering of split multimedia content on network clients
CN105025319A (en) * 2015-07-09 2015-11-04 无锡天脉聚源传媒科技有限公司 Video pushing method and device
CN105898501A (en) * 2015-12-30 2016-08-24 乐视致新电子科技(天津)有限公司 Video display method, video player and electronic device
CN109120976A (en) * 2018-10-09 2019-01-01 深圳市亿联智能有限公司 A method of the multilingual output of IPTV is supported using Web broadcast audio

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋双: "多语种影片发行包的结构设计" [Structural design of multi-language film distribution packages], 《现代电影技术》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022188680A1 (en) * 2021-03-11 2022-09-15 海信视像科技股份有限公司 Display device and display method
CN113411683A (en) * 2021-06-23 2021-09-17 北京奇艺世纪科技有限公司 Video playing method and device
CN113411683B (en) * 2021-06-23 2022-07-22 北京奇艺世纪科技有限公司 Video playing method and device
CN113849686A (en) * 2021-09-13 2021-12-28 北京达佳互联信息技术有限公司 Video data acquisition method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111641864B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN106791892B (en) Method, device and system for live broadcasting of wheelhouses
US11153609B2 (en) Method and apparatus for live streaming
US9924205B2 (en) Video remote-commentary synchronization method and system, and terminal device
WO2017202348A1 (en) Video playing method and device, and computer storage medium
WO2017008627A1 (en) Multimedia live broadcast method, apparatus and system
CN103391473B (en) Method and device for providing and acquiring audio and video
WO2018192415A1 (en) Data live broadcast method, and related device and system
CN107333162B (en) Method and device for playing live video
US20130262687A1 (en) Connecting a mobile device as a remote control
CN111641864B (en) Video information acquisition method, device and equipment
CN111093108B (en) Sound and picture synchronization judgment method and device, terminal and computer readable storage medium
CN106776124B (en) Data backup method and device
US20140310741A1 (en) System for sharing data via cloud server and method thereof
CN113986167A (en) Screen projection control method and device, storage medium and display equipment
CN109194972B (en) Live stream acquisition method and device, computer equipment and storage medium
US9723486B2 (en) Method and apparatus for accessing network
WO2015131767A1 (en) Video processing method and apparatus
KR20110051351A (en) Method for controlling remote of portable terminal and system for the same
CN104935955A (en) Live video stream transmission method, device and system
WO2017215661A1 (en) Scenario-based sound effect control method and electronic device
CN106791916B (en) Method, device and system for recommending audio data
CN105704110B (en) Media transmission method, media control method and device
CN109495769B (en) Video communication method, terminal, smart television, server and storage medium
CN107948278B (en) Information transmission method, terminal equipment and system
CN110636337B (en) Video image intercepting method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant