CN109600649A - Method and apparatus for handling data - Google Patents

Method and apparatus for handling data

Info

Publication number
CN109600649A
CN109600649A (application CN201810864302.1A)
Authority
CN
China
Prior art keywords
frame
video data
data
audio
timestamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810864302.1A
Other languages
Chinese (zh)
Inventor
宫昀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201810864302.1A priority Critical patent/CN109600649A/en
Publication of CN109600649A publication Critical patent/CN109600649A/en
Priority to PCT/CN2019/098505 priority patent/WO2020024960A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H04N21/439 Processing of audio elementary streams
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Abstract

Embodiments of the present application disclose a method and apparatus for handling data. One specific embodiment of the method includes: acquiring audio-video data, the audio-video data including audio data and video data; determining the acquisition time of the first frame of the video data as the start time of the video data; for each frame in the video data, determining the timestamp of the frame based on the start time and the acquisition time of the frame; and storing the audio data and the timestamped video data. This embodiment improves the accuracy of the timestamps of the frames in the video data.

Description

Method and apparatus for handling data
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for handling data.
Background art
When recording a video with original sound, the video data collected by the camera and the audio data collected by the microphone need to be kept synchronized. In applications with a video recording function, it is fairly common for the audio and video of a recorded original-sound video to fall out of sync. Because of the differences between terminal devices (such as mobile phones, tablet computers and the like), achieving audio-video synchronization in recordings made on different terminal devices is rather difficult.
In a related approach, the interval between two adjacent frames in video data is generally assumed to be fixed. For a given frame in the video data, the sum of the timestamp of the previous frame and this interval is taken as the timestamp of the frame, and the timestamp is then recorded in the recorded video data.
Summary of the invention
Embodiments of the present application propose a method and apparatus for handling data.
In a first aspect, an embodiment of the present application provides a method for handling data, the method comprising: acquiring audio-video data, the audio-video data comprising audio data and video data; determining the acquisition time of the first frame of the video data as the start time of the video data; for each frame in the video data, determining the timestamp of the frame based on the start time and the acquisition time of the frame; and storing the audio data and the timestamped video data.
In some embodiments, determining the timestamp of a frame in the video data based on the start time and the acquisition time of the frame comprises: in response to determining that the audio-video data is continuously acquired data, determining, for each frame in the video data, the difference between the acquisition time of the frame and the start time as the timestamp of the frame.
In some embodiments, determining the timestamp of a frame in the video data based on the start time and the acquisition time of the frame comprises: in response to determining that the audio-video data is acquired in segments, determining, for each segment of the audio-video data, the duration of the segment based on the amount of audio data in the segment; and determining the timestamps of the frames in the video data based on the start time, the durations of the segments of the audio-video data, and the acquisition times of the frames in the video data.
In some embodiments, determining the timestamps of the frames in the video data based on the start time, the durations of the segments of the audio-video data and the acquisition times of the frames comprises: for a frame of the video data in the first segment of audio-video data, determining the difference between the acquisition time of the frame and the start time as the timestamp of the frame; and for a frame of the video data in a segment other than the first segment, taking the segment of audio-video data in which the frame is located as a target segment, taking the first frame of the video data in the target segment as a target frame, determining the difference between the acquisition time of the frame and the acquisition time of the target frame, determining the total duration of the segments preceding the target segment, and determining the sum of the total duration and the difference as the timestamp of the frame.
In some embodiments, determining the timestamp of a frame in the video data based on the start time and the acquisition time of the frame comprises: in response to determining that there are paused periods during the audio-video recording, for a frame of the video data in the first segment of audio-video data, determining the difference between the acquisition time of the frame and the start time as the timestamp of the frame; and, for a frame in the video data of each remaining segment of audio-video data, determining the timestamp of the frame based on the acquisition time of the frame, the start time, and the total duration of the pauses that occurred before the acquisition time of the frame.
In some embodiments, storing the audio data and the timestamped video data comprises: encoding the audio data and the timestamped video data respectively; and storing the encoded audio data and the encoded video data in the same file.
In a second aspect, an embodiment of the present application provides an apparatus for handling data, the apparatus comprising: an acquisition unit configured to acquire audio-video data, the audio-video data comprising audio data and video data; a first determination unit configured to determine the acquisition time of the first frame of the video data as the start time of the video data; a second determination unit configured to determine, for each frame in the video data, the timestamp of the frame based on the start time and the acquisition time of the frame; and a storage unit configured to store the audio data and the timestamped video data.
In some embodiments, the second determination unit comprises: a first determination module configured to, in response to determining that the audio-video data is continuously acquired data, determine, for each frame in the video data, the difference between the acquisition time of the frame and the start time as the timestamp of the frame.
In some embodiments, the second determination unit comprises: a second determination module configured to, in response to determining that the audio-video data is acquired in segments, determine, for each segment of the audio-video data, the duration of the segment based on the amount of audio data in the segment; and a third determination module configured to determine the timestamps of the frames in the video data based on the start time, the durations of the segments of the audio-video data, and the acquisition times of the frames in the video data.
In some embodiments, the third determination module comprises: a first determination submodule configured to, for a frame of the video data in the first segment of audio-video data, determine the difference between the acquisition time of the frame and the start time as the timestamp of the frame; and a second determination submodule configured to, for a frame of the video data in a segment other than the first segment, take the segment of audio-video data in which the frame is located as a target segment, take the first frame of the video data in the target segment as a target frame, determine the difference between the acquisition time of the frame and the acquisition time of the target frame, determine the total duration of the segments preceding the target segment, and determine the sum of the total duration and the difference as the timestamp of the frame.
In some embodiments, the second determination unit comprises: a fourth determination module configured to, in response to determining that there are paused periods during the audio-video recording, for a frame of the video data in the first segment of audio-video data, determine the difference between the acquisition time of the frame and the start time as the timestamp of the frame; and, for a frame in the video data of each remaining segment of audio-video data, determine the timestamp of the frame based on the acquisition time of the frame, the start time, and the total duration of the pauses that occurred before the acquisition time of the frame.
In some embodiments, the storage unit comprises: an encoding module configured to encode the audio data and the timestamped video data respectively; and a storage module configured to store the encoded audio data and the encoded video data in the same file.
In a third aspect, an embodiment of the present application provides a terminal device comprising: one or more processors; and a storage apparatus having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the method for handling data.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any embodiment of the method for handling data.
In the method and apparatus for handling data provided by embodiments of the present application, the acquisition time of the first frame of the video data in the acquired audio-video data is determined as the start time of the video data; then, for each frame in the video data, the timestamp of the frame is determined based on the start time and the acquisition time of the frame; finally, the audio data and the timestamped video data are stored. This avoids the inaccurate timestamps that result from computing frame timestamps with a fixed interval when video acquisition is unstable (for example, when the device overheats or insufficient performance causes frames to be dropped), and thereby improves the accuracy of the determined timestamps of the frames in the video data.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of an embodiment of the method for handling data according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for handling data according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for handling data according to the present application;
Fig. 5 is a structural schematic diagram of an embodiment of the apparatus for handling data according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the terminal device of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention and do not limit the invention. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for handling data or the apparatus for handling data of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages (such as audio-video data upload requests). Various communication client applications, such as video recording applications, audio playback applications, instant messaging tools, mailbox clients and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with a display screen and audio-video recording capability, including but not limited to smartphones, tablet computers, portable laptop computers, desktop computers and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above; they may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module, which is not specifically limited here.
The terminal devices 101, 102, 103 may be equipped with an image acquisition device (such as a camera) to acquire video data. In practice, the smallest visual unit that makes up a video is a frame. Each frame is a static image, and a temporally continuous sequence of frames composited together forms a dynamic video. In addition, the terminal devices 101, 102, 103 may also be equipped with an audio acquisition device (such as a microphone) to acquire a continuous analog audio signal. In practice, the data obtained after performing analog-to-digital conversion (ADC) on the continuous analog audio signal from a device such as a microphone at a certain frequency is the audio data.
The terminal devices 101, 102, 103 may use the image acquisition device and the audio acquisition device mounted on them to acquire video data and audio data respectively. They may also perform processing such as timestamp computation on the acquired video data, and finally store the processing result (for example, the acquired audio data and the timestamped video data).
The server 105 may be a server that provides various services, for example a background server that provides support for the video recording applications installed on the terminal devices 101, 102, 103. The background server may parse, store and otherwise process data such as received audio-video data upload requests. It may also receive audio-video data acquisition requests sent by the terminal devices 101, 102, 103 and feed the audio-video data indicated by such a request back to the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module, which is not specifically limited here.
It should be noted that the method for handling data provided by the embodiments of the present application is generally performed by the terminal devices 101, 102, 103; accordingly, the apparatus for handling data is generally arranged in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for handling data according to the present application is shown. The method for handling data comprises the following steps:
Step 201: acquiring audio-video data.
In the present embodiment, the executing body of the method for handling data (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may be equipped with an image acquisition device (such as a camera) and an audio acquisition device (such as a microphone). The executing body may turn on the image acquisition device and the audio acquisition device at the same time and use them to acquire audio-video data. The audio-video data includes audio data (sound data) and video data (visual data).
In practice, video data can be described in terms of frames. A frame is the smallest visual unit that makes up a video. Each frame is a static image, and a temporally continuous sequence of frames composited together forms a dynamic video.
In practice, audio data is the data obtained after digitizing a sound signal. Digitizing a sound signal is the process of converting the continuous analog audio signal from a device such as a microphone into a digital signal at a certain frequency to obtain audio data. Digitizing a sound signal generally comprises three steps: sampling, quantization and encoding. Sampling means replacing the temporally continuous original signal with a sequence of signal sample values taken at certain time intervals. Quantization means approximating the originally continuously varying amplitude values with a finite set of amplitudes, turning the continuous amplitude of the analog signal into a finite number of discrete values with certain time intervals. Encoding means representing the quantized discrete values with binary numbers according to a certain rule. Generally, two indicators are important when digitizing a sound signal: the sampling rate (also called the sampling frequency) and the sample size. The sampling rate is the number of samples per second extracted from the continuous signal to form the discrete signal, and may be expressed in hertz (Hz). The sample size may be expressed in bits. Here, pulse code modulation (PCM) can convert the analog audio signal, after sampling, quantization and encoding, into digitized audio data. The audio data may therefore be data in the PCM encoding format.
In practice, a video recording application may be installed in the executing body. The video recording application may support recording videos with original sound, where an original-sound video is a video whose background sound is the video's own original sound, i.e. the audio collected by the audio acquisition device (such as a microphone) during video recording. The user may click a video recording button in the running interface of the video recording application to trigger a video recording instruction. After receiving the video recording instruction, the executing body may turn on the image acquisition device and the audio acquisition device at the same time and record the original-sound video.
Step 202: determining the acquisition time of the first frame of the video data as the start time of the video data.
In the present embodiment, the executing body may record the acquisition time of each frame when acquiring the video data. The acquisition time of each frame may be the system timestamp (such as a Unix timestamp) at the moment the frame is captured. It should be noted that a timestamp is a piece of data that can indicate that certain data already existed at a particular moment, and is complete and verifiable; usually a timestamp is a character string that uniquely identifies a moment in time. Here, the executing body may determine the acquisition time of the first frame of the video data as the start time of the video data. In practice, this start time can be regarded as time 0 of the video data.
Step 203: for each frame in the video data, determining the timestamp of the frame based on the start time and the acquisition time of the frame.
In a related approach, the interval between two adjacent frames in the video data is generally assumed to be fixed, and for a given frame the sum of the timestamp of the previous frame and this interval is usually taken as the timestamp of the frame. However, when video acquisition is unstable (for example, when the device overheats or insufficient performance causes frames to be dropped), the interval between two adjacent frames in the video data is not fixed, and determining frame timestamps with a fixed interval leads to inaccurate timestamps in the video data.
In the present embodiment, for each frame in the video data, the executing body may determine the timestamp of the frame based on the start time and the acquisition time of the frame. As an example, if there is no paused period during the audio-video recording, then for each frame in the video data the difference between the acquisition time of the frame and the start time may be determined as the timestamp of the frame. As another example, if there is one paused period during the recording, then for each frame acquired after recording resumes, the difference between the acquisition time of the frame and the start time may be determined first; the difference between the resume time and the pause time may then be taken as the pause duration; finally, the difference between that first value and the pause duration may be determined as the timestamp of the frame. As another example, if there is at least one paused period during the recording, then for a frame of the video data in the first segment of audio-video data, the difference between the acquisition time of the frame and the start time may be determined as the timestamp of the frame; for a frame in the video data of each remaining segment of audio-video data, the timestamp of the frame may be determined based on the acquisition time of the frame, the start time, and the duration of the pauses before the acquisition time of the frame. Specifically, for each frame acquired after the first pause and before the second pause, the difference between the acquisition time of the frame and the start time may be determined first, and the difference between that value and the duration of the first pause is then determined as the timestamp of the frame. For each frame acquired after the second pause and before the third pause, the difference between the acquisition time of the frame and the start time may be determined first, the sum of the durations of the first two pauses is then subtracted from that value, and the resulting value is determined as the timestamp of the frame; and so on.
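As a minimal illustration of the pause-handling example above (the function and parameter names are assumptions for illustration only, not part of the original disclosure), the following Python sketch computes a frame timestamp from its capture time, the start time, and the pauses that occurred before the frame was captured:

```python
def frame_timestamp_with_pauses(frame_capture_time, start_time, pauses):
    """Timestamp of a frame when recording was paused one or more times.

    frame_capture_time, start_time: system timestamps in seconds.
    pauses: list of (pause_time, resume_time) pairs, in seconds.
    """
    # Total time spent paused before this frame was captured.
    paused_before_frame = sum(
        resume - pause
        for pause, resume in pauses
        if resume <= frame_capture_time
    )
    # Elapsed wall-clock time since the first frame, minus the pauses.
    return (frame_capture_time - start_time) - paused_before_frame


# Example: recording starts at t=100 s, pauses at 105 s, resumes at 108 s.
# A frame captured at t=110 s gets timestamp (110 - 100) - 3 = 7 s.
print(frame_timestamp_with_pauses(110.0, 100.0, [(105.0, 108.0)]))
```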
In some optional implementations of the present embodiment, the executing body may determine the timestamp of each frame in the video data based on the acquisition mode of the acquired audio-video data (for example, continuous acquisition or segmented acquisition). Specifically, when the acquisition mode of the audio-video data is continuous acquisition, the audio-video data is continuously acquired data. In this case, for each frame in the video data, the difference between the acquisition time of the frame and the start time may be determined as the timestamp of the frame.
Accordingly, for continuously acquired audio-video data, the above implementation can accurately determine the timestamp of each frame in the video data. Furthermore, because the audio data is obtained by sampling and quantizing the sound signal at the set sampling rate and with the set sample size, the amount of audio data acquired per second is fixed. The amount (i.e. size) of the audio data can therefore be used to characterize or compute the timestamps of the audio data. Since the timestamps of the video data can be determined accurately and the amount of audio data can be read directly, the above implementation allows the recorded original-sound video to achieve audio-video synchronization.
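A minimal sketch of the continuous-acquisition case, assuming each captured frame carries the system time at which it was captured (the names below are illustrative and not taken from the original text):

```python
def timestamp_frames(capture_times):
    """Assign timestamps to continuously acquired frames.

    capture_times: system capture times of the frames, in acquisition order.
    Returns one timestamp per frame, with the first frame at time 0.
    """
    if not capture_times:
        return []
    start_time = capture_times[0]  # acquisition time of the first frame
    return [t - start_time for t in capture_times]


# Dropped frames simply produce a larger gap between consecutive timestamps,
# instead of drifting away from the audio as a fixed per-frame interval would.
print(timestamp_frames([100.00, 100.03, 100.07, 100.20]))  # ~[0.0, 0.03, 0.07, 0.2]
```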
In some optional implementations of the present embodiment, when the acquisition mode of the audio-video data is segmented acquisition, the audio-video data is data acquired in segments. In this case, the timestamp of each frame in the video data may be determined according to the following steps:
First, for each segment of the audio-video data, the duration of the segment is determined based on the amount of audio data in the segment.
Here, because the audio data is obtained by sampling and quantizing the sound signal at the set sampling rate and with the set sample size, the sampling rate and the sample size may be multiplied to determine the bit rate, whose unit is bits per second (bps) and which indicates the number of bits transmitted per second. For each segment of the audio-video data, the amount (i.e. size) of the audio data in the segment may be determined first, and the ratio of this amount to the bit rate is then determined. This ratio is the duration of the segment.
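Before moving to the second step, a short sketch of this duration computation may help; it assumes mono PCM audio and the illustrative parameter names shown, which are not taken from the original text:

```python
def segment_duration(audio_bytes, sample_rate_hz=44100, sample_size_bits=16, channels=1):
    """Duration in seconds of a recorded segment, derived from its PCM audio size."""
    bit_rate = sample_rate_hz * sample_size_bits * channels  # bits per second
    return (audio_bytes * 8) / bit_rate


# One second of 44.1 kHz, 16-bit mono PCM occupies 88200 bytes.
print(segment_duration(88200))  # 1.0
```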
Second, the timestamps of the frames in the video data are determined based on the start time, the durations of the segments of the audio-video data, and the acquisition times of the frames in the video data.
Here, the executing body may determine the start time of each segment based on the start time determined in step 202 and the durations of the segments of the audio-video data. The start time of the first segment may be the start time determined in step 202, i.e. the acquisition time of the first frame of the video data. For each segment of the audio-video data other than the first segment, the start time of the segment may be equal to the sum of the durations of the segments preceding it, and this start time may serve as the timestamp of the first frame of the video data in the segment. As an example, the start time of the second segment may be the duration of the first segment; the start time of the third segment may be the sum of the durations of the first and second segments; and so on.
After the start time of each segment is determined, the acquisition time of each frame in the video data may be read. For each frame in the video data, the difference between the acquisition time of the frame and the acquisition time of the first frame of the segment in which the frame is located may be determined first; the sum of this difference and the start time of that segment may then be determined as the timestamp of the frame.
It should be noted that in the above implementation, the audio-video data is acquired in multiple segments (for example, two or more segments). Acquisition of each segment of audio-video data may start by turning on the image acquisition device and the audio acquisition device at the same time, so that video data and audio data are acquired separately; at the end of each segment, acquisition is paused by pausing the image acquisition device and the audio acquisition device at the same time, so that acquisition of the video data and of the audio data is paused separately.
Accordingly, for data acquired in segments, the above implementation can accurately determine the timestamp of each frame in the video data of each segment. Furthermore, because the audio data is obtained by sampling and quantizing the sound signal at the set sampling rate and with the set sample size, the amount of audio data acquired per second is fixed, so the amount of audio data can be used to characterize or compute the timestamps of the audio data. Since the timestamps of the video data in each segment can be determined accurately and the amount of audio data can be read directly, the above implementation allows each segment of the recorded original-sound video to achieve audio-video synchronization. At the same time, because the timestamp of the first frame of each segment can be determined accurately, the whole recorded original-sound video can achieve audio-video synchronization after the segments of audio-video data are merged into a whole.
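Putting the two steps together, a hedged Python sketch of segmented timestamping follows; the segment record layout and names below are assumed for illustration only:

```python
def timestamp_segmented(segments, bit_rate):
    """Assign timestamps to the video frames of audio-video data acquired in segments.

    segments: list of dicts, in recording order, each with
        'audio_bytes'  - size of the segment's PCM audio data in bytes
        'frame_times'  - capture times of the segment's video frames, in order
    bit_rate: audio bit rate in bits per second (sampling rate * sample size).
    Returns a flat list of timestamps for all frames, segment by segment.
    """
    timestamps = []
    elapsed = 0.0  # total duration of all previous segments
    for segment in segments:
        frame_times = segment['frame_times']
        if frame_times:
            target_frame_time = frame_times[0]  # first frame of this segment
            for t in frame_times:
                # offset inside the segment + durations of the preceding segments
                timestamps.append(elapsed + (t - target_frame_time))
        # segment duration derived from the amount of audio data it contains
        elapsed += segment['audio_bytes'] * 8 / bit_rate
    return timestamps
```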
Step 204: storing the audio data and the timestamped video data.
In the present embodiment, the executing body may store the audio data and the timestamped video data. Here, the audio data and the timestamped video data may be stored in two separate files, with a mapping established between the two files; alternatively, the audio data and the timestamped video data may be stored in the same file.
In some optional implementations of the present embodiment, the executing body may first encode the audio data and the timestamped video data respectively, and then store the encoded audio data and the encoded video data in the same file. In practice, video encoding may refer to converting a file in one video format into a file in another video format through a specific compression technique; audio encoding may use encoding methods such as waveform coding, parametric coding or hybrid coding. It should be noted that audio and video encoding techniques are well-known techniques that are widely studied and applied at present, and are not described again here.
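As one possible way to realize this step in practice, outside the patent's own scope, the encoded audio and video streams can be multiplexed into a single container file with an external tool such as FFmpeg. The sketch below (the use of FFmpeg and the file names are assumptions, not part of the disclosure) simply copies both already-encoded streams into one MP4 file:

```python
import subprocess

def mux_into_one_file(encoded_video_path, encoded_audio_path, output_path):
    """Store the encoded video stream and the encoded audio stream in one file.

    Assumes the ffmpeg command-line tool is installed; '-c copy' keeps both
    streams as already encoded instead of re-encoding them.
    """
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", encoded_video_path,   # encoded video stream, e.g. H.264
         "-i", encoded_audio_path,   # encoded audio stream, e.g. AAC
         "-c", "copy",
         output_path],
        check=True,
    )


# mux_into_one_file("video.h264", "audio.aac", "recording.mp4")
```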
In some optional implementations of the present embodiment, after storing the audio data and the timestamped video data, the executing body may also upload the stored data to a server (for example, the server 105 shown in Fig. 1).
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for handling data according to the present embodiment. In the application scenario of Fig. 3, a user holds a terminal device 301 to record an original-sound video. A short-video recording application runs on the terminal device 301. After the user clicks the original-sound video recording button in the interface of the short-video recording application, the terminal device 301 turns on the microphone and the camera at the same time to acquire audio data 302 and video data 303 respectively. After the first frame of the video data is acquired, the terminal device 301 determines the acquisition time of the first frame as the start time of the video data. For each frame acquired afterwards, the terminal device 301 determines the timestamp of the frame based on the start time and the acquisition time of the frame. After the timestamp of each frame is determined, the terminal device 301 stores the acquired audio data and the timestamped video data in a file 304.
In the method provided by the above embodiment of the present application, the acquisition time of the first frame of the video data in the acquired audio-video data is determined as the start time of the video data; then, for each frame in the video data, the timestamp of the frame is determined based on the start time and the acquisition time of the frame; finally, the audio data and the timestamped video data are stored. This avoids the inaccurate timestamps that result from computing frame timestamps with a fixed interval when video acquisition is unstable (for example, when the device overheats or insufficient performance causes frames to be dropped), and improves the accuracy of the determined timestamps of the frames in the video data.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for handling data is shown. The flow 400 of the method for handling data comprises the following steps:
Step 401: acquiring audio-video data.
In the present embodiment, the executing body of the method for handling data (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may be equipped with an image acquisition device (such as a camera) and an audio acquisition device (such as a microphone). The executing body may turn on the image acquisition device and the audio acquisition device at the same time and use them to acquire audio-video data. The audio-video data includes audio data and video data. Here, the audio data may be data in the PCM encoding format.
Step 402: determining the acquisition time of the first frame of the video data as the start time of the video data.
In the present embodiment, the executing body may record the acquisition time of each frame when acquiring the video data. The acquisition time of each frame may be the system timestamp (such as a Unix timestamp) at the moment the frame is captured. Here, the executing body may determine the acquisition time of the first frame of the video data as the start time of the video data. In practice, this start time can be regarded as time 0 of the video data.
It should be noted that the operations of steps 401 and 402 are substantially the same as the operations of steps 201 and 202, and are not described again here.
Step 403: in response to determining that the audio-video data is acquired in segments, for each segment of the audio-video data, determining the duration of the segment based on the amount of audio data in the segment.
In the present embodiment, in response to determining that the audio-video data is acquired in segments, for each segment of the audio-video data the executing body may determine the duration of the segment based on the amount of audio data in the segment. Specifically, because the audio data is obtained by sampling and quantizing the sound signal at the set sampling rate and with the set sample size, the sampling rate and the sample size may be multiplied to determine the bit rate. For each segment of the audio-video data, the amount (i.e. size) of the audio data in the segment may be determined first, and the ratio of this amount to the bit rate is then determined. This ratio is the duration of the segment.
Step 404: for a frame of the video data in the first segment of audio-video data, determining the difference between the acquisition time of the frame and the start time as the timestamp of the frame.
In the present embodiment, for a frame of the video data in the first segment of audio-video data, the executing body may determine the difference between the acquisition time of the frame and the start time as the timestamp of the frame.
Step 405: for a frame of the video data in a segment other than the first segment, taking the segment of audio-video data in which the frame is located as a target segment, taking the first frame of the video data in the target segment as a target frame, determining the difference between the acquisition time of the frame and the acquisition time of the target frame, determining the total duration of the segments preceding the target segment, and determining the sum of the total duration and the difference as the timestamp of the frame.
In the present embodiment, for a frame of the video data in a segment other than the first segment, the executing body may first take the segment of audio-video data in which the frame is located as a target segment and take the first frame of the video data in the target segment as a target frame. It may then determine the difference between the acquisition time of the frame and the acquisition time of the target frame, and determine the total duration of the segments preceding the target segment. Finally, the sum of the total duration and the difference may be determined as the timestamp of the frame.
Accordingly, for audio-video data acquired in segments, the above implementation can accurately determine the timestamp of each frame in the video data. Furthermore, because the audio data is obtained by sampling and quantizing the sound signal at the set sampling rate and with the set sample size, the amount of audio data acquired per second is fixed, and the amount (i.e. size) of the audio data can therefore be used to characterize or compute the timestamps of the audio data. Since the timestamps of the audio data and the video data can be determined accurately, the above implementation allows the recorded original-sound video to achieve audio-video synchronization.
Step 406: storing the audio data and the timestamped video data.
In the present embodiment, the executing body may first encode the audio data and the timestamped video data respectively, and then store the encoded audio data and the encoded video data in the same file.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for handling data in the present embodiment highlights the step of determining video timestamps when the audio-video data is acquired in segments. The solution described in the present embodiment can therefore accurately determine the timestamp of each frame in the video data of each segment of audio-video data acquired in segments. In addition, the amount of audio data can be used to characterize or compute the timestamps of the audio data. Since the timestamps of the video data in each segment can be determined accurately and the amount of audio data can be read directly, each segment of the recorded original-sound video can achieve audio-video synchronization. At the same time, because the timestamp of the first frame of each segment can be determined accurately, the whole recorded original-sound video can achieve audio-video synchronization after the segments of audio-video data are merged into a whole.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for handling data. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for handling data of the present embodiment comprises: an acquisition unit 501 configured to acquire audio-video data, the audio-video data comprising audio data and video data; a first determination unit 502 configured to determine the acquisition time of the first frame of the video data as the start time of the video data; a second determination unit 503 configured to determine, for each frame in the video data, the timestamp of the frame based on the start time and the acquisition time of the frame; and a storage unit 504 configured to store the audio data and the timestamped video data.
In some optional implementations of the present embodiment, the second determination unit 503 may comprise a first determination module (not shown). The first determination module may be configured to, in response to determining that the audio-video data is continuously acquired data, determine, for each frame in the video data, the difference between the acquisition time of the frame and the start time as the timestamp of the frame.
In some optional implementations of the present embodiment, the second determination unit 503 may comprise a second determination module and a third determination module (not shown). The second determination module may be configured to, in response to determining that the audio-video data is acquired in segments, determine, for each segment of the audio-video data, the duration of the segment based on the amount of audio data in the segment. The third determination module may be configured to determine the timestamps of the frames in the video data based on the start time, the durations of the segments of the audio-video data, and the acquisition times of the frames in the video data.
In some optional implementations of the present embodiment, the third determination module may comprise a first determination submodule and a second determination submodule (not shown). The first determination submodule may be configured to, for a frame of the video data in the first segment of audio-video data, determine the difference between the acquisition time of the frame and the start time as the timestamp of the frame. The second determination submodule may be configured to, for a frame of the video data in a segment other than the first segment, take the segment of audio-video data in which the frame is located as a target segment, take the first frame of the video data in the target segment as a target frame, determine the difference between the acquisition time of the frame and the acquisition time of the target frame, determine the total duration of the segments preceding the target segment, and determine the sum of the total duration and the difference as the timestamp of the frame.
In some optional implementations of the present embodiment, the second determination unit 503 may comprise a fourth determination module (not shown). The fourth determination module may be further configured to: in response to determining that there are paused periods during the audio-video recording, for a frame of the video data in the first segment of audio-video data, determine the difference between the acquisition time of the frame and the start time as the timestamp of the frame; and, for a frame in the video data of each remaining segment of audio-video data, determine the timestamp of the frame based on the acquisition time of the frame, the start time, and the total duration of the pauses that occurred before the acquisition time of the frame.
In some optional implementations of the present embodiment, the storage unit 504 may comprise an encoding module and a storage module (not shown). The encoding module may be configured to encode the audio data and the timestamped video data respectively. The storage module may be configured to store the encoded audio data and the encoded video data in the same file.
In the apparatus provided by the above embodiment of the present application, the first determination unit 502 determines the acquisition time of the first frame of the video data in the audio-video data acquired by the acquisition unit 501 as the start time of the video data; the second determination unit 503 then determines, for each frame in the video data, the timestamp of the frame based on the start time and the acquisition time of the frame; finally, the storage unit 504 stores the audio data and the timestamped video data. This avoids the inaccurate timestamps that result from computing frame timestamps with a fixed interval when video acquisition is unstable (for example, when the device overheats or insufficient performance causes frames to be dropped), and improves the accuracy of the determined timestamps of the frames in the video data.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of a terminal device suitable for implementing the embodiments of the present application is shown. The terminal device shown in Fig. 6 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a touch screen, a touch pad and the like; an output portion 607 including a liquid crystal display (LCD), a speaker and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, the computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in connection with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in a different order from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may for example be described as: a processor comprising an acquisition unit, a first determination unit, a second determination unit and a storage unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires audio-video data".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire audio-video data, the audio-video data comprising audio data and video data; determine the acquisition time of the first frame of the video data as the start time of the video data; for each frame in the video data, determine the timestamp of the frame based on the start time and the acquisition time of the frame; and store the audio data and the timestamped video data.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (14)

1. A method for handling data, comprising:
acquiring audio and video data, the audio and video data comprising audio data and video data;
determining the acquisition time of the first frame of the video data as the initial time of the video data;
for a frame in the video data, determining a timestamp of the frame based on the initial time and the acquisition time of the frame; and
storing the audio data and the video data containing the timestamps.
2. The method for handling data according to claim 1, wherein, for a frame in the video data, determining the timestamp of the frame based on the initial time and the acquisition time of the frame comprises:
in response to determining that the audio and video data is continuously acquired data, for a frame in the video data, determining the difference between the acquisition time of the frame and the initial time as the timestamp of the frame.
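By way of illustration, under the assumption that the first video frame is acquired at 20.000 s on the device clock and a later frame at 21.500 s on the same clock, the timestamp of the later frame under claim 2 is simply 21.500 - 20.000 = 1.500 s.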
3. The method for handling data according to claim 1, wherein, for a frame in the video data, determining the timestamp of the frame based on the initial time and the acquisition time of the frame comprises:
in response to determining that the audio and video data is data acquired in segments, for each segment of the audio and video data, determining the duration of the segment based on the amount of audio data in the segment; and
determining the timestamp of a frame in the video data based on the initial time, the durations of the segments of the audio and video data, and the acquisition time of the frame in the video data.
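For uncompressed PCM audio, the segment duration in claim 3 could be derived from the amount of audio data roughly as in the following sketch; the sample rate, channel count, and sample width are assumed values used only for illustration.

    def segment_duration_seconds(audio_byte_count, sample_rate=44100, channels=1, bytes_per_sample=2):
        # One second of audio consumes sample_rate * channels * bytes_per_sample bytes,
        # so a segment's duration is its audio byte count divided by that rate.
        return audio_byte_count / (sample_rate * channels * bytes_per_sample)

Under these assumed parameters, for example, 882,000 bytes of audio correspond to a segment duration of 10 seconds.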
4. The method for handling data according to claim 3, wherein the determining the timestamp of a frame in the video data based on the initial time, the durations of the segments of the audio and video data, and the acquisition time of the frame in the video data comprises:
for a frame of the video data in the first segment of the audio and video data, determining the difference between the acquisition time of the frame and the initial time as the timestamp of the frame; and
for a frame of the video data in a segment other than the first segment of the audio and video data, taking the segment of the audio and video data in which the frame is located as a target segment, taking the first frame of the video data in the target segment as a target frame, determining the difference between the acquisition time of the frame and the acquisition time of the target frame, determining the sum of the durations of the segments preceding the target segment, and determining the sum of that total duration and the difference as the timestamp of the frame.
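The computation of claim 4 can be sketched as follows, assuming the segments are ordered lists of (acquisition_time, frame) pairs and their durations have been derived from the audio data as in claim 3; all names are illustrative.

    def timestamp_segmented(segments, segment_durations):
        # segments: list of segments, each a list of (acquisition_time_seconds, frame) pairs.
        # segment_durations: duration of each segment, derived from its audio data.
        stamped = []
        initial_time = segments[0][0][0]   # acquisition time of the very first frame
        elapsed_before = 0.0               # summed durations of the segments already handled
        for index, segment in enumerate(segments):
            if index == 0:
                # First segment: timestamp = acquisition time - initial time.
                for acquisition_time, frame in segment:
                    stamped.append((acquisition_time - initial_time, frame))
            else:
                # Later segments: the first frame of the segment is the target frame;
                # timestamp = (acquisition time - target frame's acquisition time)
                #             + summed durations of all preceding segments.
                target_time = segment[0][0]
                for acquisition_time, frame in segment:
                    stamped.append((elapsed_before + acquisition_time - target_time, frame))
            elapsed_before += segment_durations[index]
        return stamped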
5. The method for handling data according to claim 1, wherein, for a frame in the video data, determining the timestamp of the frame based on the initial time and the acquisition time of the frame comprises:
in response to determining that there is a period during which recording is paused in the course of audio and video recording, for a frame of the video data in the first segment of the audio and video data, determining the difference between the acquisition time of the frame and the initial time as the timestamp of the frame; and, for a frame in the video data of the remaining segments of the audio and video data, determining the timestamp of the frame based on the acquisition time of the frame, the initial time, and the duration for which recording was paused before the acquisition time of the frame.
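One reading of claim 5 is that the accumulated paused time is subtracted from each frame's elapsed time; a minimal sketch under that assumption, with pause periods given as (pause_start, resume) pairs on the same clock as the acquisition times:

    def timestamp_with_pauses(video_frames, pause_periods):
        # video_frames: list of (acquisition_time_seconds, frame) pairs in capture order.
        # pause_periods: list of (pause_start, resume) pairs recorded during capture.
        if not video_frames:
            return []
        initial_time = video_frames[0][0]
        stamped = []
        for acquisition_time, frame in video_frames:
            # Total time spent paused before this frame was acquired.
            paused_before = sum(resume - pause_start
                                for pause_start, resume in pause_periods
                                if resume <= acquisition_time)
            stamped.append((acquisition_time - initial_time - paused_before, frame))
        return stamped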
6. The method for handling data according to claim 1, wherein the storing the audio data and the video data containing the timestamps comprises:
encoding the audio data and the video data containing the timestamps respectively; and
storing the encoded audio data and the encoded video data in the same file.
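As a sketch of the storage step in claim 6, the separately encoded audio and video could be interleaved into a single file; the length-prefixed record layout below is a deliberately simplified, made-up format used only for illustration, not a standard container such as MP4.

    import struct

    def store_in_same_file(path, encoded_audio_packets, encoded_video_packets):
        # Each packet is a (timestamp_ms, payload_bytes) pair produced by the encoder.
        with open(path, "wb") as f:
            for tag, packets in ((b"A", encoded_audio_packets), (b"V", encoded_video_packets)):
                for timestamp_ms, payload in packets:
                    f.write(tag)                                              # 1-byte stream tag
                    f.write(struct.pack("<qI", timestamp_ms, len(payload)))  # timestamp + payload length
                    f.write(payload)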
7. An apparatus for handling data, comprising:
an acquisition unit configured to acquire audio and video data, the audio and video data comprising audio data and video data;
a first determination unit configured to determine the acquisition time of the first frame of the video data as the initial time of the video data;
a second determination unit configured to, for a frame in the video data, determine a timestamp of the frame based on the initial time and the acquisition time of the frame; and
a storage unit configured to store the audio data and the video data containing the timestamps.
8. The apparatus for handling data according to claim 7, wherein the second determination unit comprises:
a first determination module configured to, in response to determining that the audio and video data is continuously acquired data, for a frame in the video data, determine the difference between the acquisition time of the frame and the initial time as the timestamp of the frame.
9. The apparatus for handling data according to claim 7, wherein the second determination unit comprises:
a second determination module configured to, in response to determining that the audio and video data is data acquired in segments, for each segment of the audio and video data, determine the duration of the segment based on the amount of audio data in the segment; and
a third determination module configured to determine the timestamp of a frame in the video data based on the initial time, the durations of the segments of the audio and video data, and the acquisition time of the frame in the video data.
10. The apparatus for handling data according to claim 9, wherein the third determination module comprises:
a first determination submodule configured to, for a frame of the video data in the first segment of the audio and video data, determine the difference between the acquisition time of the frame and the initial time as the timestamp of the frame; and
a second determination submodule configured to, for a frame of the video data in a segment other than the first segment of the audio and video data, take the segment of the audio and video data in which the frame is located as a target segment, take the first frame of the video data in the target segment as a target frame, determine the difference between the acquisition time of the frame and the acquisition time of the target frame, determine the sum of the durations of the segments preceding the target segment, and determine the sum of that total duration and the difference as the timestamp of the frame.
11. The apparatus for handling data according to claim 7, wherein the second determination unit comprises:
a fourth determination module configured to, in response to determining that there is a period during which recording is paused in the course of audio and video recording, for a frame of the video data in the first segment of the audio and video data, determine the difference between the acquisition time of the frame and the initial time as the timestamp of the frame; and, for a frame in the video data of the remaining segments of the audio and video data, determine the timestamp of the frame based on the acquisition time of the frame, the initial time, and the duration for which recording was paused before the acquisition time of the frame.
12. The apparatus for handling data according to claim 7, wherein the storage unit comprises:
an encoding module configured to encode the audio data and the video data containing the timestamps respectively; and
a storage module configured to store the encoded audio data and the encoded video data in the same file.
13. A terminal device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 6.
14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN201810864302.1A 2018-08-01 2018-08-01 Method and apparatus for handling data Pending CN109600649A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810864302.1A CN109600649A (en) 2018-08-01 2018-08-01 Method and apparatus for handling data
PCT/CN2019/098505 WO2020024960A1 (en) 2018-08-01 2019-07-31 Method and device for processing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810864302.1A CN109600649A (en) 2018-08-01 2018-08-01 Method and apparatus for handling data

Publications (1)

Publication Number Publication Date
CN109600649A true CN109600649A (en) 2019-04-09

Family

ID=65956268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810864302.1A Pending CN109600649A (en) 2018-08-01 2018-08-01 Method and apparatus for handling data

Country Status (2)

Country Link
CN (1) CN109600649A (en)
WO (1) WO2020024960A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024960A1 (en) * 2018-08-01 2020-02-06 北京微播视界科技有限公司 Method and device for processing data

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230421525A1 (en) * 2022-06-22 2023-12-28 Whatsapp Llc Facilitating pausing while recording audio and/or visual messages in social media messaging applications

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6337883B1 (en) * 1998-06-10 2002-01-08 Nec Corporation Method and apparatus for synchronously reproducing audio data and video data
CN101945096A (en) * 2010-07-13 2011-01-12 上海未来宽带技术及应用工程研究中心有限公司 Video live broadcast system facing to set-top box and PC of mobile phone and working method thereof
CN102364952A (en) * 2011-10-25 2012-02-29 浙江万朋网络技术有限公司 Method for processing audio and video synchronization in simultaneous playing of a plurality of paths of audio and video
CN103237191A (en) * 2013-04-16 2013-08-07 成都飞视美视频技术有限公司 Method for synchronously pushing audios and videos in video conference
CN104053014A (en) * 2013-03-13 2014-09-17 腾讯科技(北京)有限公司 Live broadcast system and method based on mobile terminal, and mobile terminal
US20150093096A1 (en) * 2013-10-02 2015-04-02 Nokia Corporation Audio and video synchronization
CN105430537A (en) * 2015-11-27 2016-03-23 刘军 Method and server for synthesis of multiple paths of data, and music teaching system
CN106412662A (en) * 2016-09-20 2017-02-15 腾讯科技(深圳)有限公司 Timestamp distribution method and device
CN107018443A (en) * 2017-02-16 2017-08-04 乐蜜科技有限公司 Video recording method, device and electronic equipment
CN107566794A (en) * 2017-08-31 2018-01-09 深圳英飞拓科技股份有限公司 A kind of processing method of video data, system and terminal device
CN108073361A (en) * 2017-12-08 2018-05-25 佛山市章扬科技有限公司 A kind of method and device of automatic recording audio and video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109600649A (en) * 2018-08-01 2019-04-09 北京微播视界科技有限公司 Method and apparatus for handling data


Also Published As

Publication number Publication date
WO2020024960A1 (en) 2020-02-06

Similar Documents

Publication Publication Date Title
CN109600564B (en) Method and apparatus for determining a timestamp
CN109600665A (en) Method and apparatus for handling data
RU2758081C1 (en) Phase-manipulated signal tone
CN108900776A (en) Method and apparatus for determining the response time
CN109660397A (en) For acquiring system, the method and apparatus of log
CN109600650B (en) Method and apparatus for processing data
CN110213614A (en) The method and apparatus of key frame are extracted from video file
CN110502665A (en) Method for processing video frequency and device
CN109600649A (en) Method and apparatus for handling data
CN108874946A (en) A kind of ID management method and device
US20200402543A1 (en) Video recording method and device
CN109600661B (en) Method and apparatus for recording video
CN104580183B (en) A kind of method of data synchronization and device across cluster
CN104539478A (en) Pressure test device and method for instant communication system
CN109600563B (en) Method and apparatus for determining a timestamp
CN109492039A (en) A kind of recording method of daily record data, device, electronic equipment and readable medium
CN109600660A (en) Method and apparatus for recorded video
CN110912948A (en) Method and device for reporting problems
CN108962226A (en) Method and apparatus for detecting the endpoint of voice
US11302308B2 (en) Synthetic narrowband data generation for narrowband automatic speech recognition systems
CN110472558A (en) Image processing method and device
CN109271543A (en) Display methods, device, terminal and the computer readable storage medium of thumbnail
CN109600562A (en) Method and apparatus for recorded video
CN113035246B (en) Audio data synchronous processing method and device, computer equipment and storage medium
CN111145769A (en) Audio processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190409