CN108769770A - The method and apparatus for adjusting audio unit - Google Patents

Method and apparatus for adjusting audio units

Info

Publication number
CN108769770A
CN108769770A (Application CN201810646008.3A)
Authority
CN
China
Prior art keywords
audio unit
unit
time point
audio
broadcasting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810646008.3A
Other languages
Chinese (zh)
Inventor
王永杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201810646008.3A priority Critical patent/CN108769770A/en
Publication of CN108769770A publication Critical patent/CN108769770A/en
Pending legal-status Critical Current

Classifications

    All classifications fall under H (Electricity) › H04 (Electric communication technique) › H04N (Pictorial communication, e.g. television) › H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):

    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/26241 Content or additional data distribution scheduling performed under constraints involving the time of distribution, e.g. the best time of the day for inserting an advertisement or airing a children program
    • H04N21/439 Processing of audio elementary streams
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/4782 Web browsing, e.g. WebTV

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method and apparatus for adjusting audio units, belonging to the field of network technology. The method includes: determining whether an audio unit has been lost between a first audio unit and a second audio unit of a target live video that are received consecutively, where the first audio unit is the audio unit received before the second audio unit; if an audio unit has been lost between the first audio unit and the second audio unit, determining the play end time point of the first audio unit; and adding a silent audio unit between the first audio unit and the second audio unit. With the present invention, the problem of video units being out of sync with audio units can be solved.

Description

Method and apparatus for adjusting audio units
Technical field
The present invention relates to the field of network technology, and in particular to a method and apparatus for adjusting audio units.
Background technology
Web page video players have advantages such as easy sharing and no need to install additional video software, and are therefore becoming increasingly popular. A web page video player can provide both video-on-demand and live-streaming functions.
When a user watches a live video through a web page video player, network instability and similar causes may lead to the loss of transmitted video units or audio units. When a video unit or an audio unit is lost, the video units and audio units that originally corresponded to each other no longer line up, so the video becomes out of sync with the audio.
Summary of the invention
To solve the problems in the prior art, embodiments of the present invention provide a method and apparatus for adjusting audio units. The technical solution is as follows:
In a first aspect, a method for adjusting audio units is provided. The method includes:
determining whether an audio unit has been lost between a first audio unit and a second audio unit of a target live video that are received consecutively, where the first audio unit is the audio unit received before the second audio unit;
if an audio unit has been lost between the first audio unit and the second audio unit, determining the play end time point of the first audio unit; and
adding a silent audio unit between the first audio unit and the second audio unit, where the play start time point of the silent audio unit is the play end time point of the first audio unit, and the silent audio unit contains no sample data.
Optionally, determining whether an audio unit has been lost between the first audio unit and the second audio unit of the target live video that are received consecutively includes:
calculating the difference between the play start time points of the first audio unit and the second audio unit of the target live video that are received consecutively; and
if the difference is greater than the play duration of the first audio unit, determining that an audio unit has been lost between the first audio unit and the second audio unit.
Optionally, determining the play end time point of the first audio unit includes:
calculating the time point obtained by adding the play duration to the play start time point of the first audio unit, and determining that time point as the play end time point of the first audio unit.
Optionally, the method further includes:
when the play start time point of the silent audio unit is reached, starting to play the silent audio unit; and
when the play start time point of the second audio unit is reached, stopping playing the silent audio unit and starting to play the second audio unit.
Optionally, the method further includes:
obtaining the play start time points of a first video unit and a second video unit of the target live video that are received consecutively, where the first video unit is the video unit received before the second video unit;
determining the interval duration between the play start time point of the second video unit and the play start time point of the first video unit; and
adjusting the play duration of the first video unit to the interval duration.
In a second aspect, an apparatus for adjusting audio units is provided. The apparatus includes:
a determining module, configured to determine whether an audio unit has been lost between a first audio unit and a second audio unit of a target live video that are received consecutively, where the first audio unit is the audio unit received before the second audio unit;
the determining module being further configured to, if an audio unit has been lost between the first audio unit and the second audio unit, determine the play end time point of the first audio unit; and
an adding module, configured to add a silent audio unit between the first audio unit and the second audio unit, where the play start time point of the silent audio unit is the play end time point of the first audio unit, and the silent audio unit contains no sample data.
Optionally, the determining module is configured to:
calculate the difference between the play start time points of the first audio unit and the second audio unit of the target live video that are received consecutively; and
if the difference is greater than the play duration of the first audio unit, determine that an audio unit has been lost between the first audio unit and the second audio unit.
Optionally, the determining module is configured to:
calculate the time point obtained by adding the play duration to the play start time point of the first audio unit, and determine that time point as the play end time point of the first audio unit.
Optionally, the apparatus further includes:
a playing module, configured to start playing the silent audio unit when the play start time point of the silent audio unit is reached;
the playing module being further configured to, when the play start time point of the second audio unit is reached, stop playing the silent audio unit and start playing the second audio unit.
Optionally, the apparatus further includes:
an obtaining module, configured to obtain the play start time points of a first video unit and a second video unit of the target live video that are received consecutively, where the first video unit is the video unit received before the second video unit;
the determining module being further configured to determine the interval duration between the play start time point of the second video unit and the play start time point of the first video unit; and
an adjusting module, configured to adjust the play duration of the first video unit to the interval duration.
In a third aspect, a terminal is provided. The terminal includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for adjusting audio units described in the first aspect.
In a fourth aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for adjusting audio units described in the first aspect.
The beneficial effects of the technical solution provided by the embodiments of the present invention include at least the following:
In the embodiments of the present invention, when it is determined that an audio unit has been lost between two consecutively received audio units of a target live video, a silent audio unit containing no sample data is added in place of the lost unit, so that the first audio unit and the second audio unit can still be played continuously at their respective play start time points. This avoids video units and audio units falling out of correspondence because of a lost audio unit, and thereby solves the potential problem of video being out of sync with audio.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for adjusting audio units according to an embodiment of the present invention;
Fig. 2 is a structural diagram of an audio unit according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an apparatus for adjusting audio units according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for adjusting audio units according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an apparatus for adjusting audio units according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for adjusting audio units, which can be performed by a terminal. The terminal is one on which a browser is installed and which can transmit network data through the browser.
The terminal may include components such as a processor, a memory, and a screen. The processor may be a CPU (Central Processing Unit) and performs processing such as determining whether an audio unit has been lost between two audio units, determining the play end time point of an audio unit, and adding a silent audio unit. The memory may be RAM (Random Access Memory), Flash, or the like, and may store received data, data needed during processing, and data generated during processing, such as the first audio unit, the second audio unit, the play start time point of the first audio unit, the play start time point of the second audio unit, the play duration of an audio unit, and the silent audio unit. The screen may display the live video, the user interface, and so on. The terminal may further include a transceiver, an image detection component, an audio output component, and an audio input component. The transceiver, which may include an antenna, a matching circuit, a modem, and the like, can transmit data to and from other devices, for example receiving the audio units and video units sent by a server. The image detection component may be a camera; the audio output component may be a loudspeaker or earphones; the audio input component may be a microphone.
As shown in Fig. 1, the processing flow of the method may include the following steps:
In step 101, it is determined whether an audio unit has been lost between a first audio unit and a second audio unit of a target live video that are received consecutively.
Here, the first audio unit is the audio unit received before the second audio unit. An audio unit is in an audio container format that an HTML5 browser can recognize and play, such as FMP4.
In a possible embodiment, HTML5 is the fifth major revision of HTML (HyperText Markup Language), the core language of the World Wide Web and an application of the Standard Generalized Markup Language. Because of its device compatibility and cross-platform support for many devices, it has become a primary focus of current web development. A web page live player based on HTML5 is popular with users because no separate live-streaming application needs to be installed.
For audio, when a user wants to watch a live stream through a web page on a terminal (which may be called the viewing terminal), the user opens the live interface on the web page and enters the corresponding live room, and the user's terminal sends a live video acquisition request to the server, the request carrying the live room identifier. On receiving the request, the server determines, according to the live room identifier carried in it, the device identifier of the terminal (which may be called the streaming terminal) corresponding to the requested live video. Then, while the host corresponding to the streaming terminal is live, the streaming terminal uploads the audio units collected by its audio input component to the server. After receiving the audio units, the server determines the play duration of each audio unit; since the format is the same, the play duration of every audio unit is the same.
The server then determines a play start time point for each audio unit according to the viewing terminal's live video acquisition request. It should be noted that the play start time point of an audio unit need not be an actual clock time but may be a play time derived from the playing order of the audio units. For example, when the viewing terminal sends its first live video acquisition request to the server, that is, before the server sends the 1st audio unit to the viewing terminal, the server sets the play start time point of the 1st audio unit to 00:00:00.000. Assuming the play duration of each audio unit is 46 ms, the server sets the play start time point of the 2nd audio unit to 00:00:00.046, the play start time point of the 3rd audio unit to 00:00:00.092, and so on.
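The sequential time-stamping described above can be sketched as follows. This is a minimal illustration only; the function names, the 46 ms duration, and the HH:MM:SS.mmm formatting are taken from the example in the text, not from any claimed implementation.

```typescript
const unitDurationMs = 46; // example play duration of one audio unit

// Play start time of the n-th unit (1-indexed), in ms since the stream began.
function startTimeOf(n: number): number {
  return (n - 1) * unitDurationMs;
}

// Format milliseconds as HH:MM:SS.mmm, matching the document's notation.
function formatTime(ms: number): string {
  const pad = (v: number, w: number) => String(v).padStart(w, "0");
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)}.${pad(ms % 1000, 3)}`;
}

console.log(formatTime(startTimeOf(1))); // 00:00:00.000
console.log(formatTime(startTimeOf(2))); // 00:00:00.046
console.log(formatTime(startTimeOf(3))); // 00:00:00.092
```

Because the time points are derived purely from the playing order, a hole in the sequence shows up as a start-time difference larger than one unit duration, which is exactly what the loss check below relies on.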
When the viewing terminal receives an audio unit together with its play start time point and play duration from the server, it adds the audio unit to an audio buffer queue. If an audio unit (which may be called the first audio unit) already sits in front of the newly added audio unit (which may be called the second audio unit) in the audio buffer queue, the terminal detects whether an audio unit has been lost between the first and second audio units according to the play start time point of the first audio unit, the play start time point of the second audio unit, and the play duration of an audio unit.
Optionally, the detection of whether an audio unit has been lost between the first audio unit and the second audio unit may proceed as follows: calculate the difference between the play start time points of the first audio unit and the second audio unit of the target live video that are received consecutively; if the difference is greater than the play duration of the first audio unit, determine that an audio unit has been lost between the first audio unit and the second audio unit.
In a possible embodiment, the viewing terminal calculates the difference between the play start time points of the first audio unit and the second audio unit and compares the result with the play duration of an audio unit. If the difference is greater than the play duration, then after the first audio unit is played from its play start time point and before the second audio unit is played from its play start time point, there will be a span with no audio unit to play, which means at least one audio unit between them has been lost. For example, suppose the play start time point of the first audio unit is 00:00:00.046, the play start time point of the second audio unit is 00:00:00.184, and the play duration of each audio unit is 46 ms. To detect whether an audio unit has been lost between them, the terminal computes the difference between 00:00:00.184 and 00:00:00.046, obtaining 138 ms, and compares 138 ms with the 46 ms play duration. Since the difference is greater than the play duration, no audio unit will be available to play between 00:00:00.092 and 00:00:00.184 after the first audio unit finishes, so it can be determined that at least one audio unit between the first and second audio units has been lost.
If the difference equals the play duration, then the second audio unit can be played from its play start time point immediately after the first audio unit finishes playing from its own play start time point, with no gap in between, so it can be determined that no audio unit has been lost between the first and second audio units.
A difference smaller than the play duration should not occur in normal operation. If it does, the play start time point of either the first or the second audio unit must be wrong. The terminal can then use the play start time point and play duration of the audio unit preceding the first audio unit to determine which of the two start time points is wrong, and afterwards either correct the wrong play start time point or discard the erroneous audio unit. This choice may be made by the technician; the present invention does not limit it here.
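The three-way check just described can be sketched as follows. All type and function names are illustrative assumptions; times are in milliseconds, matching the 46 ms example above.

```typescript
interface AudioUnit {
  playStartMs: number; // play start time point assigned by the server
  durationMs: number;  // play duration (identical for every unit in a stream)
}

type GapCheck = "lost" | "contiguous" | "timestamp-error";

// Compare the start-time difference of two consecutively received units
// against the unit's play duration.
function checkGap(first: AudioUnit, second: AudioUnit): GapCheck {
  const diff = second.playStartMs - first.playStartMs;
  if (diff > first.durationMs) return "lost";        // a silent hole follows `first`
  if (diff === first.durationMs) return "contiguous";
  return "timestamp-error";                          // one start time must be wrong
}

// The first unit's play end time point: start plus duration (used in step 102).
function playEndMs(unit: AudioUnit): number {
  return unit.playStartMs + unit.durationMs;
}

const a = { playStartMs: 46, durationMs: 46 };
const b = { playStartMs: 184, durationMs: 46 };
console.log(checkGap(a, b)); // "lost": difference 138 ms exceeds 46 ms
console.log(playEndMs(a));   // 92
```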
In step 102, if an audio unit has been lost between the first audio unit and the second audio unit, the play end time point of the first audio unit is determined.
In a possible embodiment, after it is determined through the above steps that an audio unit has been lost between the first and second audio units, the play end time point of the first audio unit is determined from its play start time point and the play duration of an audio unit.
Optionally, the method for the broadcasting end time point of the first audio unit of above-mentioned determination can be as follows:Calculate the first sound At the time point that the broadcasting start time point of frequency unit obtains after being added with broadcasting duration, the first audio will be determined as time point The broadcasting end time point of unit.
In one possible embodiment, viewing terminal calculates the broadcasting start time point of the first audio unit and broadcasting continues The adduction of duration, obtains a time point, which, which is the first audio unit, after broadcasting, holds from the broadcasting start time point The time point for playing and stopping after duration, the broadcasting end time point of as the first audio unit are continued.
In step 103, a silent audio unit is added between the first audio unit and the second audio unit.
Here, the play start time point of the silent audio unit is the play end time point of the first audio unit, and the silent audio unit contains no sample data.
In a possible embodiment, after the play end time point of the first audio unit is determined through the above steps, the viewing terminal generates a silent audio unit and places it in the audio buffer queue between the first audio unit and the second audio unit.
It should be noted that the audio units above may be in FMP4 (Fragmented MP4, Moving Picture Experts Group fragmented format). An FMP4 file is composed of boxes as its basic units; a box may contain data or metadata (attribute information describing the data). One particularly important box in an FMP4 file is the moof box (movie fragment box, a data unit describing the attribute information of an audio unit), whose structure is shown in Fig. 2; each audio unit has one box of this type. The moof box stores the metadata of the audio unit, that is, the attribute information describing it. Among its sub-units, the moof box contains a traf box (track fragment box, a data unit storing the configuration information of the sample set), which holds information including the play start time point and the play duration. The traf box in turn contains a tfhd box (track fragment header box, a data unit describing the type of the sample set) and a tfdt box (track fragment decode time box, a data unit describing the play start time point of the first sample in the sample set).
The tfhd box has a duration field that can indicate whether the audio unit contains sample data. The duration field of the silent audio unit is set to empty, indicating that the silent audio unit contains no sample data, so playing it produces no sound and gives a silent effect. The tfdt box contains a baseMediaDecodeTime field for setting the play start time point of the sample data; in the silent audio unit, this field is set to the play end time point of the first audio unit, so the silent audio unit starts playing as soon as the first audio unit finishes. In this way, the audio unit lost between the first and second audio units is filled with a silent audio unit, allowing the first and second audio units to play normally.
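At the object level, the silent-unit metadata just described might be modelled as follows. This is only a sketch of the two fields the description relies on: real FMP4 boxes are binary ISO BMFF structures, and the field names here follow the document's wording rather than the exact specification.

```typescript
// Loose object model of the moof metadata, per the description above.
interface Tfhd { duration: number | null } // null models the "empty" duration,
                                           // i.e. no sample data (silence)
interface Tfdt { baseMediaDecodeTime: number } // play start time point (ms)
interface MoofMeta { tfhd: Tfhd; tfdt: Tfdt }

// Build the silent unit so it starts exactly where the first unit ends.
function makeSilentUnit(firstUnitEndMs: number): MoofMeta {
  return {
    tfhd: { duration: null },                       // empty duration: silence
    tfdt: { baseMediaDecodeTime: firstUnitEndMs },  // chain onto the first unit
  };
}

const silent = makeSilentUnit(92); // first unit ended at 00:00:00.092
console.log(silent.tfdt.baseMediaDecodeTime); // 92
console.log(silent.tfhd.duration);            // null
```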
Optionally, after the silent audio unit is added, playback may proceed as follows: when the play start time point of the silent audio unit is reached, start playing the silent audio unit; when the play start time point of the second audio unit is reached, stop playing the silent audio unit and start playing the second audio unit.
In a possible embodiment, after the silent audio unit has been added between the first and second audio units, the first audio unit is loaded from the audio buffer queue and played when its play start time point is reached, and it stops after its play duration. The viewing terminal reads the silent audio unit, whose duration field is empty and whose baseMediaDecodeTime is the play end time point of the first audio unit; in other words, the silent audio unit's play start time point is the first audio unit's play end time point. Thus, when the first audio unit stops playing, the viewing terminal reads and plays the silent audio unit, and when the play start time point of the second audio unit is reached, the second audio unit plays and the silent audio unit stops automatically.
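The playback hand-off described above can be sketched as a simple scheduler over the buffer queue. The queue representation and all names are illustrative assumptions: a unit with a concrete duration yields at start plus duration, while a silent unit (no sample data) plays until the next unit's start time.

```typescript
interface QueuedUnit {
  startMs: number;            // play start time point
  durationMs: number | null;  // null models a silent unit with no sample data
  label: string;
}

// Return [label, playFrom, playUntil] segments for a queue sorted by startMs.
function schedule(queue: QueuedUnit[]): [string, number, number][] {
  const out: [string, number, number][] = [];
  for (let i = 0; i < queue.length; i++) {
    const u = queue[i];
    const next = queue[i + 1];
    const end = u.durationMs !== null
      ? u.startMs + u.durationMs           // normal unit: fixed duration
      : next ? next.startMs : u.startMs;   // silent unit: until the next start
    out.push([u.label, u.startMs, end]);
  }
  return out;
}

const segments = schedule([
  { startMs: 46, durationMs: 46, label: "first" },
  { startMs: 92, durationMs: null, label: "silent" },
  { startMs: 184, durationMs: 46, label: "second" },
]);
console.log(segments); // first plays 46..92, silent fills 92..184, second 184..230
```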
This solves the problem that, because an audio unit was lost between the first and second audio units, the second audio unit could not be played normally at its play start time point. In turn, it prevents video units and audio units from falling out of correspondence and gives the user a better live-viewing experience.
Optionally, the scheme above addresses the mismatch between video units and audio units caused by a lost audio unit. In addition, the mismatch caused by a lost video unit can be addressed by adjusting the play duration of a video unit. The corresponding processing steps may be as follows: obtain the play start time points of a first video unit and a second video unit of the target live video that are received consecutively, where the first video unit is the video unit received before the second video unit; determine the interval duration between the play start time point of the second video unit and the play start time point of the first video unit; and adjust the play duration of the first video unit to the interval duration.
In a possible embodiment, for video, when a user wants to watch a live stream through a web page on the viewing terminal, the user opens the live interface on the web page and enters the corresponding live room, and the user's terminal sends a live video acquisition request to the server, the request carrying the live room identifier. On receiving the request, the server determines, according to the live room identifier carried in it, the device identifier of the streaming terminal corresponding to the requested live video. Then, while the host corresponding to the streaming terminal is live, the streaming terminal uploads the video units collected by its image detection component to the server. After receiving the video units, the server determines the play duration of each video unit; since the format is the same, the play duration of every video unit is the same.
Then, according to the viewing terminal's live-video acquisition request, the server determines a play start time point for each video unit. Note that the play start time point of a video unit need not be an actual clock time; it may be a play time redefined according to the playing order of the video units. For example, when the viewing terminal sends a live-video acquisition request to the server for the first time, i.e., before the server sends the 1st video unit to the viewing terminal, the server sets the play start time point of the 1st video unit to 00:00:00.000. Assuming the play duration of each video unit is 60 ms, the server then sets the play start time point of the 2nd video unit to 00:00:00.060, that of the 3rd video unit to 00:00:00.120, and so on.
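The timestamp-assignment rule just described (first unit at 00:00:00.000, each later unit offset by one fixed play duration) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function and parameter names are invented for the example:

```python
def assign_start_points(unit_count, unit_duration_ms=60):
    """Assign each video unit a play start time point in playing
    order, starting from 00:00:00.000, formatted HH:MM:SS.mmm."""
    points = []
    for i in range(unit_count):
        total_ms = i * unit_duration_ms      # offset of the i-th unit
        h, rem = divmod(total_ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        points.append(f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}")
    return points

# Matches the worked example in the text:
print(assign_start_points(3))  # ['00:00:00.000', '00:00:00.060', '00:00:00.120']
```

With a different unit format, only `unit_duration_ms` would change; the ordering rule stays the same.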
It should be noted that when the streaming terminal sends audio units and video units to the server, the audio units may be sent alternately with the video units or simultaneously with them; the specific sending mode depends on the server's carrying capacity and the current network state, and the present invention does not limit this. Similarly, when the server sends audio units and video units to the viewing terminal, the audio units may be sent alternately with the video units or simultaneously with them.
When the viewing terminal receives a video unit together with its play start time point and play duration from the server, it adds the video unit to a video buffer queue. If a video unit already in the queue (which may be called the first video unit) immediately precedes the newly added video unit (which may be called the second video unit), the terminal determines the play start time points of the first video unit and the second video unit and computes their difference, i.e., the interval duration between the play start time point of the second video unit and that of the first video unit.
Then the play duration of the first video unit is adjusted to the computed interval duration. For example, after the viewing terminal receives the second video unit from the server, it adds it to the video buffer queue. When the viewing terminal detects the first video unit ahead of the second video unit, it finds that the play start time point of the first video unit is 00:00:00.060 and that of the second video unit is 00:00:00.180. It then computes the difference between the two play start time points as 120 ms and adjusts the play duration of the first video unit to 120 ms. If a further video unit is subsequently received, the play duration of the second video unit is adjusted according to that newly received unit in the same way.
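The duration-adjustment step above can be sketched in Python as follows. This is an illustrative sketch only; the buffer-queue representation and the field names `start_ms`/`duration_ms` are assumptions for the example, not structures defined by the patent:

```python
def adjust_durations(buffer_queue):
    """buffer_queue: list of dicts with 'start_ms' and 'duration_ms',
    ordered by play start time point. Stretch each unit (except the
    last) to fill the gap up to its successor, so playback stays
    continuous even if a unit between them was lost in transit."""
    for first, second in zip(buffer_queue, buffer_queue[1:]):
        first["duration_ms"] = second["start_ms"] - first["start_ms"]
    return buffer_queue

# Worked example from the text: units starting at 60 ms and 180 ms.
queue = [{"start_ms": 60, "duration_ms": 60},
         {"start_ms": 180, "duration_ms": 60}]
adjust_durations(queue)
print(queue[0]["duration_ms"])  # 120
```

Note that, as the text says, no explicit loss check is needed here: stretching to the measured interval is correct whether or not a unit was dropped.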
In this way, without judging whether a video unit was lost between two video units, the video units in the buffer queue can be played continuously according to their respective play start time points, which solves the video/audio mismatch caused by lost video units and gives the user a better live-streaming viewing experience.
In the embodiment of the present invention, when an audio unit is determined to have been lost between two audio units of the continuously received target live video, a mute audio unit containing no sampled data is added in place of the lost unit, so that the first audio unit and the second audio unit can be played continuously according to their respective play start time points. This avoids the mismatch between video units and audio units caused by the lost audio unit and solves the audio/video desynchronization that could otherwise occur.
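The overall method summarized above — detect a lost unit from the gap between play start time points, then insert a mute unit that begins at the first unit's play end time point — can be sketched as follows. This Python is illustrative only; the dict fields `start_ms`, `duration_ms`, and `samples` are invented names, not the patent's data format:

```python
def maybe_make_mute_unit(first, second):
    """Given two consecutively received audio units, return a mute
    unit (no sampled data) filling the gap between them, or None if
    nothing was lost. Loss is detected when the difference of play
    start time points exceeds the first unit's play duration."""
    gap = second["start_ms"] - first["start_ms"]
    if gap <= first["duration_ms"]:
        return None  # units are contiguous: no audio unit was lost
    # Play end time point of the first unit = start + duration.
    first_end = first["start_ms"] + first["duration_ms"]
    return {
        "start_ms": first_end,                        # mute unit starts where the first ends
        "duration_ms": second["start_ms"] - first_end,
        "samples": None,                              # mute: contains no sampled data
    }

first = {"start_ms": 0, "duration_ms": 60, "samples": b"..."}
second = {"start_ms": 120, "duration_ms": 60, "samples": b"..."}
print(maybe_make_mute_unit(first, second))
```

At playback, per claim 4, the player would start the mute unit at its `start_ms` and stop it when the second unit's play start time point is reached.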
Based on the same technical concept, an embodiment of the present invention further provides a device for adjusting audio units, which may be the viewing terminal in the above embodiments. As shown in Fig. 3, the device includes a determining module 310 and an adding module 320.
The determining module 310 is configured to determine whether an audio unit was lost between the first audio unit and the second audio unit of the continuously received target live video, where the first audio unit is the audio unit received before the second audio unit;
The determining module 310 is further configured to determine the play end time point of the first audio unit if an audio unit was lost between the first audio unit and the second audio unit;
The adding module 320 is configured to add a mute audio unit between the first audio unit and the second audio unit, where the play start time point of the mute audio unit is the play end time point of the first audio unit, and the mute audio unit contains no sampled data.
Optionally, the determining module 310 is configured to:
calculate the difference between the play start time points of the first audio unit and the second audio unit of the continuously received target live video;
if the difference is greater than the play duration of the first audio unit, determine that an audio unit was lost between the first audio unit and the second audio unit.
Optionally, the determining module 310 is configured to:
calculate the time point obtained by adding the play start time point of the first audio unit to its play duration, and determine that time point as the play end time point of the first audio unit.
Optionally, as shown in Fig. 4, the device further includes:
a playing module 330, configured to start playing the mute audio unit when the play start time point of the mute audio unit is reached;
the playing module 330 being further configured to stop playing the mute audio unit and start playing the second audio unit when the play start time point of the second audio unit is reached.
Optionally, as shown in Fig. 5, the device further includes:
an acquisition module 340, configured to obtain the play start time points of the first video unit and the second video unit of the continuously received target live video, where the first video unit is the video unit received before the second video unit;
the determining module 310 being further configured to determine the interval duration between the play start time point of the second video unit and the play start time point of the first video unit;
an adjusting module 350, configured to adjust the play duration of the first video unit to the interval duration.
In the embodiment of the present invention, when an audio unit is determined to have been lost between two audio units of the continuously received target live video, a mute audio unit containing no sampled data is added in place of the lost unit, so that the first audio unit and the second audio unit can be played continuously according to their respective play start time points. This avoids the mismatch between video units and audio units caused by the lost audio unit and solves the audio/video desynchronization that could otherwise occur.
For the device in the above embodiments, the specific way each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
It should be noted that when the device for adjusting audio units provided by the above embodiments adjusts an audio unit, the division into the above function modules is only an example; in practical applications, the above functions may be allocated to different function modules as needed, i.e., the internal structure of the equipment is divided into different function modules to complete all or part of the functions described above. In addition, the device for adjusting audio units provided by the above embodiments and the method embodiments for adjusting audio units belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Fig. 6 is a structural diagram of a terminal provided by an embodiment of the present invention. The terminal 600 may be a portable mobile terminal, such as a smartphone or a tablet computer. The terminal 600 may also be called user equipment, a portable terminal, or other names.
In general, the terminal 600 includes a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, e.g., a 4-core or 6-core processor. The processor 601 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), handles data in the awake state; the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for handling machine-learning-related computing operations.
The memory 602 may include one or more computer-readable storage media, which may be tangible and non-transient. The memory 602 may also include high-speed random access memory and nonvolatile memory, such as one or more disk storage devices or flash memory devices. In some embodiments, the non-transient computer-readable storage medium in the memory 602 stores at least one instruction, which is executed by the processor 601 to implement the method of adjusting audio units provided herein.
In some embodiments, the terminal 600 optionally further includes a peripheral device interface 603 and at least one peripheral device. Specifically, the peripheral devices include at least one of a radio-frequency circuit 604, a touch display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral device interface 603 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 601 and the memory 602. In some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio-frequency circuit 604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio-frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for sending or converting received electromagnetic signals into electrical signals. Optionally, the radio-frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio-frequency circuit 604 can communicate with other terminals through at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio-frequency circuit 604 may also include circuits related to NFC (Near Field Communication), which is not limited in this application.
The touch display screen 605 is used to display the UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display screen 605 also has the ability to collect touch signals on or above its surface; a touch signal may be input to the processor 601 as a control signal for processing. The touch display screen 605 provides virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there is one touch display screen 605, arranged on the front panel of the terminal 600; in other embodiments, there are at least two touch display screens 605, arranged on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the touch display screen 605 may be a flexible display screen, arranged on a curved or folded surface of the terminal 600. The touch display screen 605 may even be set to a non-rectangular irregular shape, i.e., a shaped screen. The touch display screen 605 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to collect images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. In general, the front camera is used for video calls or selfies, and the rear camera is used for taking photos or videos. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused to realize a background-blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 606 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 607 is used to provide an audio interface between the user and the terminal 600. The audio circuit 607 may include a microphone and a loudspeaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals, which are input to the processor 601 for processing or input to the radio-frequency circuit 604 for voice communication. For stereo collection or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 600. The microphone may also be an array microphone or an omnidirectional microphone. The loudspeaker is used to convert electrical signals from the processor 601 or the radio-frequency circuit 604 into sound waves. The loudspeaker may be a traditional membrane loudspeaker or a piezoelectric ceramic loudspeaker. A piezoelectric ceramic loudspeaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans, for purposes such as ranging. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic position of the terminal 600 to realize navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 609 is used to supply power to the components in the terminal 600. The power supply 609 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 600 further includes one or more sensors 610, including but not limited to an acceleration sensor 611, a gyroscope sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
The acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 can, according to the gravitational-acceleration signal collected by the acceleration sensor 611, control the touch display screen 605 to display the user interface in a landscape or portrait view. The acceleration sensor 611 can also be used to collect motion data for games or for the user.
The gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600 and can cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600. Based on the data collected by the gyroscope sensor 612, the processor 601 can implement functions such as motion sensing (e.g., changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 613 may be arranged on the side frame of the terminal 600 and/or the lower layer of the touch display screen 605. When the pressure sensor 613 is arranged on the side frame of the terminal 600, it can detect the user's grip signal on the terminal 600 and perform left/right-hand recognition or shortcut operations according to that grip signal. When the pressure sensor 613 is arranged on the lower layer of the touch display screen 605, the operable controls on the UI can be controlled according to the user's pressure operations on the touch display screen 605. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint so as to identify the user from the collected fingerprint. When the user's identity is identified as trusted, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 614 may be arranged on the front, back, or side of the terminal 600. When a physical button or manufacturer logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or manufacturer logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 can control the display brightness of the touch display screen 605 according to the ambient light intensity collected by the optical sensor 615: when the ambient light intensity is high, the display brightness of the touch display screen 605 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 can also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also called a distance sensor, is generally arranged on the front of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 is gradually decreasing, the processor 601 controls the touch display screen 605 to switch from the bright-screen state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 is gradually increasing, the processor 601 controls the touch display screen 605 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will understand that the structure shown in Fig. 6 does not limit the terminal 600, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method of adjusting audio units in the above embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In the embodiment of the present invention, when an audio unit is determined to have been lost between two audio units of the continuously received target live video, a mute audio unit containing no sampled data is added in place of the lost unit, so that the first audio unit and the second audio unit can be played continuously according to their respective play start time points. This avoids the mismatch between video units and audio units caused by the lost audio unit and solves the audio/video desynchronization that could otherwise occur.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing relevant hardware. The program can be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A method of adjusting audio units, characterized in that the method comprises:
determining whether an audio unit was lost between a first audio unit and a second audio unit of a continuously received target live video, wherein the first audio unit is the audio unit received before the second audio unit;
if an audio unit was lost between the first audio unit and the second audio unit, determining a play end time point of the first audio unit;
adding a mute audio unit between the first audio unit and the second audio unit, wherein a play start time point of the mute audio unit is the play end time point of the first audio unit, and the mute audio unit contains no sampled data.
2. The method according to claim 1, characterized in that determining whether an audio unit was lost between the first audio unit and the second audio unit of the continuously received target live video comprises:
calculating a difference between the play start time points of the first audio unit and the second audio unit of the continuously received target live video;
if the difference is greater than a play duration of the first audio unit, determining that an audio unit was lost between the first audio unit and the second audio unit.
3. The method according to claim 1, characterized in that determining the play end time point of the first audio unit comprises:
calculating a time point obtained by adding the play start time point of the first audio unit to its play duration, and determining that time point as the play end time point of the first audio unit.
4. The method according to claim 1, characterized in that the method further comprises:
when the play start time point of the mute audio unit is reached, starting to play the mute audio unit;
when the play start time point of the second audio unit is reached, stopping playing the mute audio unit and starting to play the second audio unit.
5. The method according to claim 1, characterized in that the method further comprises:
obtaining play start time points of a first video unit and a second video unit of the continuously received target live video, wherein the first video unit is the video unit received before the second video unit;
determining an interval duration between the play start time point of the second video unit and the play start time point of the first video unit;
adjusting a play duration of the first video unit to the interval duration.
6. A device for adjusting audio units, characterized in that the device comprises:
a determining module, configured to determine whether an audio unit was lost between a first audio unit and a second audio unit of a continuously received target live video, wherein the first audio unit is the audio unit received before the second audio unit;
the determining module being further configured to determine a play end time point of the first audio unit if an audio unit was lost between the first audio unit and the second audio unit;
an adding module, configured to add a mute audio unit between the first audio unit and the second audio unit, wherein a play start time point of the mute audio unit is the play end time point of the first audio unit, and the mute audio unit contains no sampled data.
7. The device according to claim 6, characterized in that the determining module is configured to:
calculate a difference between the play start time points of the first audio unit and the second audio unit of the continuously received target live video;
if the difference is greater than a play duration of the first audio unit, determine that an audio unit was lost between the first audio unit and the second audio unit.
8. The device according to claim 6, characterized in that the determining module is configured to:
calculate a time point obtained by adding the play start time point of the first audio unit to its play duration, and determine that time point as the play end time point of the first audio unit.
9. The device according to claim 6, characterized in that the device further comprises:
a playing module, configured to start playing the mute audio unit when the play start time point of the mute audio unit is reached;
the playing module being further configured to stop playing the mute audio unit and start playing the second audio unit when the play start time point of the second audio unit is reached.
10. The device according to claim 6, characterized in that the device further comprises:
an acquisition module, configured to obtain play start time points of a first video unit and a second video unit of the continuously received target live video, wherein the first video unit is the video unit received before the second video unit;
the determining module being further configured to determine an interval duration between the play start time point of the second video unit and the play start time point of the first video unit;
an adjusting module, configured to adjust a play duration of the first video unit to the interval duration.
11. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method of adjusting audio units according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method of adjusting audio units according to any one of claims 1 to 5.
CN201810646008.3A 2018-06-21 2018-06-21 The method and apparatus for adjusting audio unit Pending CN108769770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810646008.3A CN108769770A (en) 2018-06-21 2018-06-21 The method and apparatus for adjusting audio unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810646008.3A CN108769770A (en) 2018-06-21 2018-06-21 The method and apparatus for adjusting audio unit

Publications (1)

Publication Number Publication Date
CN108769770A true CN108769770A (en) 2018-11-06

Family

ID=63976117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810646008.3A Pending CN108769770A (en) 2018-06-21 2018-06-21 The method and apparatus for adjusting audio unit

Country Status (1)

Country Link
CN (1) CN108769770A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944225A (en) * 2019-11-20 2020-03-31 武汉长江通信产业集团股份有限公司 HTML 5-based method and device for synchronizing audio and video with different frame rates
CN112995720A (en) * 2019-12-16 2021-06-18 成都鼎桥通信技术有限公司 Audio and video synchronization method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102215429A (en) * 2010-04-01 2011-10-12 安凯(广州)微电子技术有限公司 Recording method for mobile TV
US20150236733A1 (en) * 2014-02-14 2015-08-20 Motorola Solutions, Inc. Method and apparatus for improving audio reception in a paging device
CN104978966A (en) * 2014-04-04 2015-10-14 腾讯科技(深圳)有限公司 Method and apparatus realizing compensation of frame loss in audio stream
CN105578265A (en) * 2015-12-10 2016-05-11 杭州当虹科技有限公司 Timestamp compensation or correction method based on H264/H265 video analysis
CN105744334A (en) * 2016-02-18 2016-07-06 海信集团有限公司 Method and equipment for audio and video synchronization and synchronous playing


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944225A (en) * 2019-11-20 2020-03-31 Wuhan Changjiang Communication Industry Group Co., Ltd. HTML5-based method and device for synchronizing audio and video with different frame rates
CN110944225B (en) * 2019-11-20 2022-10-04 Wuhan Changjiang Communication Industry Group Co., Ltd. HTML5-based method and device for synchronizing audio and video with different frame rates
CN112995720A (en) * 2019-12-16 2021-06-18 Chengdu TD Tech Co., Ltd. Audio and video synchronization method and device
CN112995720B (en) * 2019-12-16 2022-11-18 Chengdu TD Tech Co., Ltd. Audio and video synchronization method and device

Similar Documents

Publication Publication Date Title
US20200194027A1 (en) Method and apparatus for displaying pitch information in live webcast room, and storage medium
CN108063981B (en) Method and device for setting attributes of live broadcast room
CN108401124A (en) The method and apparatus of video record
US20200285439A1 (en) Method and apparatus of playing audio data
CN107888968A (en) Player method, device and the computer-readable storage medium of live video
CN109348247A (en) Determine the method, apparatus and storage medium of audio and video playing timestamp
CN109618212A (en) Information display method, device, terminal and storage medium
CN109657165A (en) Method for page jump and device
CN110278464A (en) The method and apparatus for showing list
CN110213608A (en) Show method, apparatus, equipment and the readable storage medium storing program for executing of virtual present
CN109033335A (en) Audio recording method, apparatus, terminal and storage medium
CN107896337B (en) Information popularization method and device and storage medium
CN109327608A (en) Method, terminal, server and the system that song is shared
CN110401898B (en) Method, apparatus, device and storage medium for outputting audio data
CN109688461A (en) Video broadcasting method and device
EP3618055A1 (en) Audio mixing method and apparatus, and storage medium
CN110248236A (en) Video broadcasting method, device, terminal and storage medium
CN110276034A (en) Content item methods of exhibiting, device, computer equipment and storage medium
CN108845777A (en) The method and apparatus for playing frame animation
CN110808021B (en) Audio playing method, device, terminal and storage medium
CN109800003A (en) Using method for down loading, device, terminal and storage medium
CN111092991B (en) Lyric display method and device and computer storage medium
CN109089137A (en) Caton detection method and device
CN109218751A (en) The method, apparatus and system of recommendation of audio
CN108319712A (en) The method and apparatus for obtaining lyrics data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2018-11-06